Mirror of https://github.com/docker/compose.git
Synced 2025-04-08 17:05:13 +02:00
Commit 9503aa2b5f
.gitignore (vendored): 2 lines changed
@@ -1,6 +1,6 @@
 *.egg-info
 *.pyc
-/.coverage
+.coverage*
 /.tox
 /build
 /coverage-html
@@ -14,7 +14,12 @@
     - id: requirements-txt-fixer
     - id: trailing-whitespace
 - repo: git://github.com/asottile/reorder_python_imports
-  sha: 3d86483455ab5bd06cc1069fdd5ac57be5463f10
+  sha: v0.1.0
   hooks:
   - id: reorder-python-imports
+    language_version: 'python2.7'
+    args:
+    - --add-import
+    - from __future__ import absolute_import
+    - --add-import
+    - from __future__ import unicode_literals
CHANGELOG.md: 124 lines changed
@@ -1,6 +1,123 @@
 Change log
 ==========

+1.6.0 (2016-01-15)
+------------------
+
+Major Features:
+
+- Compose 1.6 introduces a new format for `docker-compose.yml` which lets
+  you define networks and volumes in the Compose file as well as services. It
+  also makes a few changes to the structure of some configuration options.
+
+  You don't have to use it - your existing Compose files will run on Compose
+  1.6 exactly as they do today.
+
+  Check the upgrade guide for full details:
+  https://docs.docker.com/compose/compose-file/upgrading
+
+- Support for networking has exited experimental status and is the recommended
+  way to enable communication between containers.
+
+  If you use the new file format, your app will use networking. If you aren't
+  ready yet, just leave your Compose file as it is and it'll continue to work
+  just the same.
+
+  By default, you don't have to configure any networks. In fact, using
+  networking with Compose involves even less configuration than using links.
+  Consult the networking guide for how to use it:
+  https://docs.docker.com/compose/networking
+
+  The experimental flags `--x-networking` and `--x-network-driver`, introduced
+  in Compose 1.5, have been removed.
+
+- You can now pass arguments to a build if you're using the new file format:
+
+      build:
+        context: .
+        args:
+          buildno: 1
+
+- You can now specify both a `build` and an `image` key if you're using the
+  new file format. `docker-compose build` will build the image and tag it with
+  the name you've specified, while `docker-compose pull` will attempt to pull
+  it.
+
+- There's a new `events` command for monitoring container events from
+  the application, much like `docker events`. This is a good primitive for
+  building tools on top of Compose for performing actions when particular
+  things happen, such as containers starting and stopping.
+
+- There's a new `depends_on` option for specifying dependencies between
+  services. This enforces the order of startup, and ensures that when you run
+  `docker-compose up SERVICE` on a service with dependencies, those are started
+  as well.
+
+New Features:
+
+- Added a new command `config` which validates and prints the Compose
+  configuration after interpolating variables, resolving relative paths, and
+  merging multiple files and `extends`.
+
+- Added a new command `create` for creating containers without starting them.
+
+- Added a new command `down` to stop and remove all the resources created by
+  `up` in a single command.
+
+- Added support for the `cpu_quota` configuration option.
+
+- Added support for the `stop_signal` configuration option.
+
+- Commands `start`, `restart`, `pause`, and `unpause` now exit with an
+  error status code if no containers were modified.
+
+- Added a new `--abort-on-container-exit` flag to `up` which causes `up` to
+  stop all containers and exit once the first container exits.
+
+- Removed support for `FIG_FILE` and `FIG_PROJECT_NAME`; `fig.yml` is no
+  longer read as a default Compose file location.
+
+- Removed the `migrate-to-labels` command.
+
+- Removed the `--allow-insecure-ssl` flag.
+
+
+Bug Fixes:
+
+- Fixed a validation bug that prevented the use of a range of ports in
+  the `expose` field.
+
+- Fixed a validation bug that prevented the use of arrays in the `entrypoint`
+  field if they contained duplicate entries.
+
+- Fixed a bug that caused `ulimits` to be ignored when used with `extends`.
+
+- Fixed a bug that prevented IPv6 addresses in `extra_hosts`.
+
+- Fixed a bug that caused `extends` to be ignored when included from
+  multiple Compose files.
+
+- Fixed an incorrect warning when a container volume was defined in
+  the Compose file.
+
+- Fixed a bug that prevented the force shutdown behaviour of `up` and
+  `logs`.
+
+- Fixed a bug that caused `None` to be printed as the network driver name
+  when the default network driver was used.
+
+- Fixed a bug where using the string form of `dns` or `dns_search` would
+  cause an error.
+
+- Fixed a bug where a container would be reported as "Up" when it was
+  in the restarting state.
+
+- Fixed a confusing error message when `DOCKER_CERT_PATH` was not set properly.
+
+- Fixed a bug where attaching to a container would fail if it was using a
+  non-standard logging driver (or none at all).
+
+
 1.5.2 (2015-12-03)
 ------------------

@@ -58,7 +175,7 @@ Change log
 - When printing logs during `up` or `logs`, flush the output buffer after
   each line to prevent buffering issues from hiding logs.

-- Recreate a container if one of it's dependencies is being created.
+- Recreate a container if one of its dependencies is being created.
   Previously a container was only recreated if its dependencies already
   existed, but were being recreated as well.

@@ -177,7 +294,6 @@ Bug fixes:
 - `docker-compose build` can now be run successfully against a Swarm cluster.

-

 1.4.2 (2015-09-22)
 ------------------

@@ -302,8 +418,8 @@ Several new configuration keys have been added to `docker-compose.yml`:
 - `pid: host`, like `docker run --pid=host`, lets you reuse the same PID namespace as the host machine.
 - `cpuset`, like `docker run --cpuset-cpus`, lets you specify which CPUs to allow execution in.
 - `read_only`, like `docker run --read-only`, lets you mount a container's filesystem as read-only.
-- `security_opt`, like `docker run --security-opt`, lets you specify [security options](https://docs.docker.com/reference/run/#security-configuration).
-- `log_driver`, like `docker run --log-driver`, lets you specify a [log driver](https://docs.docker.com/reference/run/#logging-drivers-log-driver).
+- `security_opt`, like `docker run --security-opt`, lets you specify [security options](https://docs.docker.com/engine/reference/run/#security-configuration).
+- `log_driver`, like `docker run --log-driver`, lets you specify a [log driver](https://docs.docker.com/engine/reference/run/#logging-drivers-log-driver).

 Many bugs have been fixed, including the following:

@@ -43,7 +43,7 @@ To run the style checks at any time run `tox -e pre-commit`.

 ## Submitting a pull request

-See Docker's [basic contribution workflow](https://docs.docker.com/project/make-a-contribution/#the-basic-contribution-workflow) for a guide on how to submit a pull request for code or documentation.
+See Docker's [basic contribution workflow](https://docs.docker.com/opensource/workflow/make-a-contribution/#the-basic-contribution-workflow) for a guide on how to submit a pull request for code or documentation.

 ## Running the test suite

Dockerfile: 24 lines changed
@@ -16,52 +16,44 @@ RUN set -ex; \
     ; \
     rm -rf /var/lib/apt/lists/*

-RUN curl https://get.docker.com/builds/Linux/x86_64/docker-latest \
+RUN curl https://get.docker.com/builds/Linux/x86_64/docker-1.8.3 \
     -o /usr/local/bin/docker && \
     chmod +x /usr/local/bin/docker

 # Build Python 2.7.9 from source
 RUN set -ex; \
-    curl -LO https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz; \
-    tar -xzf Python-2.7.9.tgz; \
+    curl -L https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz | tar -xz; \
     cd Python-2.7.9; \
     ./configure --enable-shared; \
     make; \
     make install; \
     cd ..; \
-    rm -rf /Python-2.7.9; \
-    rm Python-2.7.9.tgz
+    rm -rf /Python-2.7.9

 # Build python 3.4 from source
 RUN set -ex; \
-    curl -LO https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tgz; \
-    tar -xzf Python-3.4.3.tgz; \
+    curl -L https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tgz | tar -xz; \
     cd Python-3.4.3; \
     ./configure --enable-shared; \
     make; \
     make install; \
     cd ..; \
-    rm -rf /Python-3.4.3; \
-    rm Python-3.4.3.tgz
+    rm -rf /Python-3.4.3

 # Make libpython findable
 ENV LD_LIBRARY_PATH /usr/local/lib

 # Install setuptools
 RUN set -ex; \
-    curl -LO https://bootstrap.pypa.io/ez_setup.py; \
-    python ez_setup.py; \
-    rm ez_setup.py
+    curl -L https://bootstrap.pypa.io/ez_setup.py | python

 # Install pip
 RUN set -ex; \
-    curl -LO https://pypi.python.org/packages/source/p/pip/pip-7.0.1.tar.gz; \
-    tar -xzf pip-7.0.1.tar.gz; \
+    curl -L https://pypi.python.org/packages/source/p/pip/pip-7.0.1.tar.gz | tar -xz; \
     cd pip-7.0.1; \
     python setup.py install; \
     cd ..; \
-    rm -rf pip-7.0.1; \
-    rm pip-7.0.1.tar.gz
+    rm -rf pip-7.0.1

 # Python3 requires a valid locale
 RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
MAINTAINERS: 50 lines changed
@@ -1,4 +1,46 @@
-Aanand Prasad <aanand.prasad@gmail.com> (@aanand)
-Ben Firshman <ben@firshman.co.uk> (@bfirsh)
-Daniel Nephin <dnephin@gmail.com> (@dnephin)
-Mazz Mosley <mazz@houseofmnowster.com> (@mnowster)
+# Compose maintainers file
+#
+# This file describes who runs the docker/compose project and how.
+# This is a living document - if you see something out of date or missing, speak up!
+#
+# It is structured to be consumable by both humans and programs.
+# To extract its contents programmatically, use any TOML-compliant parser.
+#
+# This file is compiled into the MAINTAINERS file in docker/opensource.
+#
+[Org]
+    [Org."Core maintainers"]
+        people = [
+            "aanand",
+            "bfirsh",
+            "dnephin",
+            "mnowster",
+        ]
+
+[people]
+
+# A reference list of all people associated with the project.
+# All other sections should refer to people by their canonical key
+# in the people section.
+
+    # ADD YOURSELF HERE IN ALPHABETICAL ORDER
+
+    [people.aanand]
+    Name = "Aanand Prasad"
+    Email = "aanand.prasad@gmail.com"
+    GitHub = "aanand"
+
+    [people.bfirsh]
+    Name = "Ben Firshman"
+    Email = "ben@firshman.co.uk"
+    GitHub = "bfirsh"
+
+    [people.dnephin]
+    Name = "Daniel Nephin"
+    Email = "dnephin@gmail.com"
+    GitHub = "dnephin"
+
+    [people.mnowster]
+    Name = "Mazz Mosley"
+    Email = "mazz@houseofmnowster.com"
+    GitHub = "mnowster"
@@ -46,8 +46,10 @@ Compose has commands for managing the whole lifecycle of your application:
 Installation and documentation
 ------------------------------

-- Full documentation is available on [Docker's website](http://docs.docker.com/compose/).
+- Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
+- If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
+- The code repository for Compose is on [GitHub](https://github.com/docker/compose).
+- If you find any problems, please file an [issue](https://github.com/docker/compose/issues/new).

 Contributing
 ------------
@@ -1,3 +1,4 @@
 from __future__ import absolute_import
+from __future__ import unicode_literals

-__version__ = '1.5.2'
+__version__ = '1.6.0'
compose/__main__.py (new file): 6 lines

@@ -0,0 +1,6 @@
+from __future__ import absolute_import
+from __future__ import unicode_literals
+
+from compose.cli.main import main
+
+main()
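The new `compose/__main__.py` makes the package runnable as `python -m compose`. A minimal sketch of the same pattern, built with a hypothetical package name (`demo_pkg`) in a temporary directory so it can be exercised end to end:

```python
import os
import subprocess
import sys
import tempfile

# Build a throwaway package whose __main__.py delegates to a main()
# function, mirroring the shape of compose/__main__.py.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "demo_pkg")
os.makedirs(pkg)

with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("")

with open(os.path.join(pkg, "cli.py"), "w") as f:
    f.write("def main():\n    print('running main')\n")

# __main__.py is executed when the package is invoked with `python -m demo_pkg`.
with open(os.path.join(pkg, "__main__.py"), "w") as f:
    f.write("from demo_pkg.cli import main\n\nmain()\n")

# With -m, the current working directory is prepended to sys.path,
# so running from `tmp` makes demo_pkg importable.
result = subprocess.run(
    [sys.executable, "-m", "demo_pkg"],
    cwd=tmp, capture_output=True, text=True,
)
print(result.stdout.strip())
```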
@@ -1,3 +1,4 @@
 from __future__ import absolute_import
+from __future__ import unicode_literals
 NAMES = [
     'grey',
@@ -13,6 +13,7 @@ from requests.exceptions import SSLError
 from . import errors
 from . import verbose_proxy
 from .. import config
+from ..const import API_VERSIONS
 from ..project import Project
 from .docker_client import docker_client
 from .utils import call_silently
@@ -46,23 +47,18 @@ def friendly_error_message():
 def project_from_options(base_dir, options):
     return get_project(
         base_dir,
-        get_config_path(options.get('--file')),
+        get_config_path_from_options(options),
         project_name=options.get('--project-name'),
         verbose=options.get('--verbose'),
-        use_networking=options.get('--x-networking'),
-        network_driver=options.get('--x-network-driver'),
     )


-def get_config_path(file_option):
+def get_config_path_from_options(options):
+    file_option = options.get('--file')
     if file_option:
         return file_option

-    if 'FIG_FILE' in os.environ:
-        log.warn('The FIG_FILE environment variable is deprecated.')
-        log.warn('Please use COMPOSE_FILE instead.')
-
-    config_file = os.environ.get('COMPOSE_FILE') or os.environ.get('FIG_FILE')
+    config_file = os.environ.get('COMPOSE_FILE')
    return [config_file] if config_file else None
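The refactored `get_config_path_from_options` gives an explicit precedence: the `--file` flag wins over the `COMPOSE_FILE` environment variable, and the result is normalized to a list (or `None`). A standalone sketch of that lookup order, with the environment passed in as a parameter for testability:

```python
import os

def get_config_path_from_options(options, environ=os.environ):
    """Resolve the Compose file path: --file flag first, then COMPOSE_FILE."""
    file_option = options.get('--file')
    if file_option:
        return file_option

    config_file = environ.get('COMPOSE_FILE')
    return [config_file] if config_file else None

# The flag takes precedence over the environment variable.
print(get_config_path_from_options({'--file': ['custom.yml']},
                                   {'COMPOSE_FILE': 'env.yml'}))
print(get_config_path_from_options({}, {'COMPOSE_FILE': 'env.yml'}))
print(get_config_path_from_options({}, {}))
```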
@@ -78,32 +74,25 @@ def get_client(verbose=False, version=None):
     return client


-def get_project(base_dir, config_path=None, project_name=None, verbose=False,
-                use_networking=False, network_driver=None):
+def get_project(base_dir, config_path=None, project_name=None, verbose=False):
     config_details = config.find(base_dir, config_path)
+    project_name = get_project_name(config_details.working_dir, project_name)
+    config_data = config.load(config_details)

-    api_version = '1.21' if use_networking else None
-    return Project.from_dicts(
-        get_project_name(config_details.working_dir, project_name),
-        config.load(config_details),
-        get_client(verbose=verbose, version=api_version),
-        use_networking=use_networking,
-        network_driver=network_driver)
+    api_version = os.environ.get(
+        'COMPOSE_API_VERSION',
+        API_VERSIONS[config_data.version])
+    client = get_client(verbose=verbose, version=api_version)
+
+    return Project.from_config(project_name, config_data, client)


 def get_project_name(working_dir, project_name=None):
     def normalize_name(name):
         return re.sub(r'[^a-z0-9]', '', name.lower())

-    if 'FIG_PROJECT_NAME' in os.environ:
-        log.warn('The FIG_PROJECT_NAME environment variable is deprecated.')
-        log.warn('Please use COMPOSE_PROJECT_NAME instead.')
-
-    project_name = (
-        project_name or
-        os.environ.get('COMPOSE_PROJECT_NAME') or
-        os.environ.get('FIG_PROJECT_NAME'))
-    if project_name is not None:
+    project_name = project_name or os.environ.get('COMPOSE_PROJECT_NAME')
+    if project_name:
        return normalize_name(project_name)

     project = os.path.basename(os.path.abspath(working_dir))
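The project-name normalization above lowercases the name and strips everything outside `[a-z0-9]`, which is why a directory like `My-Web_App.2` becomes the project `mywebapp2`. The helper in isolation:

```python
import re

def normalize_name(name):
    # Project names are lowercased and stripped of anything outside [a-z0-9],
    # matching the normalization inside get_project_name.
    return re.sub(r'[^a-z0-9]', '', name.lower())

print(normalize_name('My-Web_App.2'))
print(normalize_name('compose'))
```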
@@ -1,17 +1,19 @@
 from __future__ import absolute_import
 from __future__ import unicode_literals

 import logging
 import os

 from docker import Client
+from docker.errors import TLSParameterError
 from docker.utils import kwargs_from_env

 from ..const import HTTP_TIMEOUT
+from .errors import UserError

 log = logging.getLogger(__name__)


-DEFAULT_API_VERSION = '1.19'
-
-
 def docker_client(version=None):
     """
     Returns a docker-py client configured using environment variables
@@ -20,9 +22,16 @@ def docker_client(version=None):
     if 'DOCKER_CLIENT_TIMEOUT' in os.environ:
         log.warn('The DOCKER_CLIENT_TIMEOUT environment variable is deprecated. Please use COMPOSE_HTTP_TIMEOUT instead.')

-    kwargs = kwargs_from_env(assert_hostname=False)
-    kwargs['version'] = version or os.environ.get(
-        'COMPOSE_API_VERSION',
-        DEFAULT_API_VERSION)
+    try:
+        kwargs = kwargs_from_env(assert_hostname=False)
+    except TLSParameterError:
+        raise UserError(
+            'TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY and DOCKER_CERT_PATH are set correctly.\n'
+            'You might need to run `eval "$(docker-machine env default)"`')
+
+    if version:
+        kwargs['version'] = version
+
     kwargs['timeout'] = HTTP_TIMEOUT

     return Client(**kwargs)
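The try/except around `kwargs_from_env` translates a library-level `TLSParameterError` into a `UserError` carrying actionable advice, which is what fixed the confusing `DOCKER_CERT_PATH` message noted in the changelog. The shape of that translation, sketched with stand-in classes (the real ones live in `docker.errors` and `compose.cli.errors`):

```python
class TLSParameterError(Exception):
    """Stand-in for docker.errors.TLSParameterError."""

class UserError(Exception):
    """Stand-in for compose.cli.errors.UserError."""
    def __init__(self, msg):
        super().__init__(msg)
        self.msg = msg

def kwargs_from_env(environ):
    # Simplified stand-in: reject a half-configured TLS setup.
    if environ.get('DOCKER_TLS_VERIFY') and not environ.get('DOCKER_CERT_PATH'):
        raise TLSParameterError('cert path missing')
    return {'base_url': environ.get('DOCKER_HOST')}

def build_client_kwargs(environ):
    try:
        return kwargs_from_env(environ)
    except TLSParameterError:
        # Re-raise as a user-facing error with a suggested fix.
        raise UserError(
            'TLS configuration is invalid - make sure your '
            'DOCKER_TLS_VERIFY and DOCKER_CERT_PATH are set correctly.')

try:
    build_client_kwargs({'DOCKER_TLS_VERIFY': '1'})
except UserError as e:
    print(e.msg)
```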
@@ -1,4 +1,5 @@
 from __future__ import absolute_import
+from __future__ import unicode_literals

 from textwrap import dedent

@@ -27,7 +28,7 @@ class DockerNotFoundUbuntu(UserError):
         super(DockerNotFoundUbuntu, self).__init__("""
         Couldn't connect to Docker daemon. You might need to install Docker:

-        http://docs.docker.io/en/latest/installation/ubuntulinux/
+        https://docs.docker.com/engine/installation/ubuntulinux/
         """)

@@ -36,7 +37,7 @@ class DockerNotFoundGeneric(UserError):
         super(DockerNotFoundGeneric, self).__init__("""
         Couldn't connect to Docker daemon. You might need to install Docker:

-        http://docs.docker.io/en/latest/installation/
+        https://docs.docker.com/engine/installation/
         """)

@@ -13,10 +13,11 @@ from compose.utils import split_buffer
 class LogPrinter(object):
     """Print logs from many containers to a single output stream."""

-    def __init__(self, containers, output=sys.stdout, monochrome=False):
+    def __init__(self, containers, output=sys.stdout, monochrome=False, cascade_stop=False):
         self.containers = containers
         self.output = utils.get_output_stream(output)
         self.monochrome = monochrome
+        self.cascade_stop = cascade_stop

     def run(self):
         if not self.containers:
@@ -24,7 +25,7 @@ class LogPrinter(object):

         prefix_width = max_name_width(self.containers)
         generators = list(self._make_log_generators(self.monochrome, prefix_width))
-        for line in Multiplexer(generators).loop():
+        for line in Multiplexer(generators, cascade_stop=self.cascade_stop).loop():
             self.output.write(line)
             self.output.flush()

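`cascade_stop` changes the multiplexer's exit condition: instead of draining every log stream, it stops as soon as the first one finishes, which is what backs the new `--abort-on-container-exit` flag. A single-threaded sketch of the two behaviours (the real `Multiplexer` interleaves streams with threads; this round-robin version only illustrates the stop semantics):

```python
def multiplex(generators, cascade_stop=False):
    """Round-robin over several iterators; with cascade_stop, end when any one ends."""
    iterators = [iter(g) for g in generators]
    while iterators:
        for it in list(iterators):
            try:
                yield next(it)
            except StopIteration:
                if cascade_stop:
                    return  # the first exhausted stream stops everything
                iterators.remove(it)

web = ['web-1', 'web-2', 'web-3']
db = ['db-1']

# With cascade_stop, output ends as soon as the shorter stream runs dry.
print(list(multiplex([iter(web), iter(db)], cascade_stop=True)))
print(list(multiplex([iter(web), iter(db)])))
```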
@@ -1,9 +1,11 @@
 from __future__ import absolute_import
+from __future__ import print_function
 from __future__ import unicode_literals

 import contextlib
+import json
 import logging
 import re
 import signal
 import sys
 from inspect import getdoc
 from operator import attrgetter
@@ -11,10 +13,12 @@ from operator import attrgetter
 from docker.errors import APIError
 from requests.exceptions import ReadTimeout

+from . import signals
 from .. import __version__
-from .. import legacy
 from ..config import config
 from ..config import ConfigurationError
 from ..config import parse_environment
+from ..config.serialize import serialize_config
 from ..const import DEFAULT_TIMEOUT
 from ..const import HTTP_TIMEOUT
 from ..const import IS_WINDOWS_PLATFORM
@@ -22,8 +26,10 @@ from ..progress_stream import StreamOutputError
 from ..project import NoSuchService
 from ..service import BuildError
 from ..service import ConvergenceStrategy
+from ..service import ImageType
 from ..service import NeedsBuildError
 from .command import friendly_error_message
+from .command import get_config_path_from_options
 from .command import project_from_options
 from .docopt_command import DocoptCommand
 from .docopt_command import NoSuchCommand
@@ -36,16 +42,11 @@ from .utils import yesno


 if not IS_WINDOWS_PLATFORM:
     import dockerpty
+    from dockerpty.pty import PseudoTerminal, RunOperation

 log = logging.getLogger(__name__)
 console_handler = logging.StreamHandler(sys.stderr)

-INSECURE_SSL_WARNING = """
---allow-insecure-ssl is deprecated and has no effect.
-It will be removed in a future version of Compose.
-"""
-

 def main():
     setup_logging()
@@ -53,9 +54,9 @@ def main():
         command = TopLevelCommand()
         command.sys_dispatch()
     except KeyboardInterrupt:
-        log.error("\nAborting.")
+        log.error("Aborting.")
         sys.exit(1)
-    except (UserError, NoSuchService, ConfigurationError, legacy.LegacyError) as e:
+    except (UserError, NoSuchService, ConfigurationError) as e:
         log.error(e.msg)
         sys.exit(1)
     except NoSuchCommand as e:
@@ -123,15 +124,15 @@ class TopLevelCommand(DocoptCommand):
     Options:
       -f, --file FILE           Specify an alternate compose file (default: docker-compose.yml)
       -p, --project-name NAME   Specify an alternate project name (default: directory name)
-      --x-networking            (EXPERIMENTAL) Use new Docker networking functionality.
-                                Requires Docker 1.9 or later.
-      --x-network-driver DRIVER (EXPERIMENTAL) Specify a network driver (default: "bridge").
-                                Requires Docker 1.9 or later.
       --verbose                 Show more output
       -v, --version             Print version and exit

     Commands:
       build              Build or rebuild services
+      config             Validate and view the compose file
+      create             Create services
+      down               Stop and remove containers, networks, images, and volumes
+      events             Receive real time events from containers
       help               Get help on a command
       kill               Kill containers
       logs               View output from containers
@@ -147,9 +148,7 @@ class TopLevelCommand(DocoptCommand):
       stop               Stop services
       unpause            Unpause services
       up                 Create and start containers
-      migrate-to-labels  Recreate containers to add labels
       version            Show the Docker-Compose version information
-
     """
     base_dir = '.'

@@ -166,6 +165,10 @@ class TopLevelCommand(DocoptCommand):
             handler(None, command_options)
             return

+        if options['COMMAND'] == 'config':
+            handler(options, command_options)
+            return
+
         project = project_from_options(self.base_dir, options)
         with friendly_error_message():
             handler(project, command_options)
@@ -191,6 +194,91 @@ class TopLevelCommand(DocoptCommand):
             pull=bool(options.get('--pull', False)),
             force_rm=bool(options.get('--force-rm', False)))

+    def config(self, config_options, options):
+        """
+        Validate and view the compose file.
+
+        Usage: config [options]
+
+        Options:
+            -q, --quiet     Only validate the configuration, don't print
+                            anything.
+            --services      Print the service names, one per line.
+
+        """
+        config_path = get_config_path_from_options(config_options)
+        compose_config = config.load(config.find(self.base_dir, config_path))
+
+        if options['--quiet']:
+            return
+
+        if options['--services']:
+            print('\n'.join(service['name'] for service in compose_config.services))
+            return
+
+        print(serialize_config(compose_config))
+
+    def create(self, project, options):
+        """
+        Creates containers for a service.
+
+        Usage: create [options] [SERVICE...]
+
+        Options:
+            --force-recreate       Recreate containers even if their configuration and
+                                   image haven't changed. Incompatible with --no-recreate.
+            --no-recreate          If containers already exist, don't recreate them.
+                                   Incompatible with --force-recreate.
+            --no-build             Don't build an image, even if it's missing
+        """
+        service_names = options['SERVICE']
+
+        project.create(
+            service_names=service_names,
+            strategy=convergence_strategy_from_opts(options),
+            do_build=not options['--no-build']
+        )
+
+    def down(self, project, options):
+        """
+        Stop containers and remove containers, networks, volumes, and images
+        created by `up`. Only containers and networks are removed by default.
+
+        Usage: down [options]
+
+        Options:
+            --rmi type      Remove images, type may be one of: 'all' to remove
+                            all images, or 'local' to remove only images that
+                            don't have a custom name set by the `image` field
+            -v, --volumes   Remove data volumes
+        """
+        image_type = image_type_from_opt('--rmi', options['--rmi'])
+        project.down(image_type, options['--volumes'])
+
+    def events(self, project, options):
+        """
+        Receive real time events from containers.
+
+        Usage: events [options] [SERVICE...]
+
+        Options:
+            --json      Output events as a stream of json objects
+        """
+        def format_event(event):
+            attributes = ["%s=%s" % item for item in event['attributes'].items()]
+            return ("{time} {type} {action} {id} ({attrs})").format(
+                attrs=", ".join(sorted(attributes)),
+                **event)
+
+        def json_format_event(event):
+            event['time'] = event['time'].isoformat()
+            return json.dumps(event)
+
+        for event in project.events():
+            formatter = json_format_event if options['--json'] else format_event
+            print(formatter(event))
+            sys.stdout.flush()
+
     def help(self, project, options):
         """
         Get help on a command.
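The plain-text formatter from the new `events` command can be exercised on its own with a synthetic event; the `"%s=%s" % item` idiom works because `dict.items()` yields `(key, value)` tuples (the datetime value and attribute names below are illustrative):

```python
from datetime import datetime

def format_event(event):
    # Each (key, value) pair from attributes becomes "key=value"; pairs are
    # sorted so the output is deterministic regardless of dict ordering.
    attributes = ["%s=%s" % item for item in event['attributes'].items()]
    return ("{time} {type} {action} {id} ({attrs})").format(
        attrs=", ".join(sorted(attributes)),
        **event)

event = {
    'time': datetime(2016, 1, 15, 12, 0, 0),
    'type': 'container',
    'action': 'start',
    'id': 'abc123',
    'attributes': {'name': 'web_1', 'image': 'busybox'},
}
print(format_event(event))
```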
@@ -235,7 +323,8 @@ class TopLevelCommand(DocoptCommand):

         Usage: pause [SERVICE...]
         """
-        project.pause(service_names=options['SERVICE'])
+        containers = project.pause(service_names=options['SERVICE'])
+        exit_if(not containers, 'No containers to pause', 1)

     def port(self, project, options):
         """
@@ -303,11 +392,7 @@ class TopLevelCommand(DocoptCommand):

         Options:
             --ignore-pull-failures  Pull what it can and ignores images with pull failures.
-            --allow-insecure-ssl    Deprecated - no effect.
         """
-        if options['--allow-insecure-ssl']:
-            log.warn(INSECURE_SSL_WARNING)
-
         project.pull(
             service_names=options['SERVICE'],
             ignore_pull_failures=options.get('--ignore-pull-failures')
@@ -317,6 +402,11 @@ class TopLevelCommand(DocoptCommand):
         """
         Remove stopped service containers.

+        By default, volumes attached to containers will not be removed. You can see all
+        volumes with `docker volume ls`.
+
+        Any data which is not in a volume will be lost.
+
         Usage: rm [options] [SERVICE...]

         Options:
@@ -352,7 +442,6 @@ class TopLevelCommand(DocoptCommand):
         Usage: run [options] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]

         Options:
-            --allow-insecure-ssl  Deprecated - no effect.
             -d                    Detached mode: Run container in the background, print
                                   new container name.
             --name NAME           Assign a name to the container
@@ -376,9 +465,6 @@ class TopLevelCommand(DocoptCommand):
                 "Please pass the -d flag when using `docker-compose run`."
             )

-        if options['--allow-insecure-ssl']:
-            log.warn(INSECURE_SSL_WARNING)
-
         if options['COMMAND']:
             command = [options['COMMAND']] + options['ARGS']
         else:
@@ -454,7 +540,8 @@ class TopLevelCommand(DocoptCommand):

         Usage: start [SERVICE...]
         """
-        project.start(service_names=options['SERVICE'])
+        containers = project.start(service_names=options['SERVICE'])
+        exit_if(not containers, 'No containers to start', 1)

     def stop(self, project, options):
         """
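`start`, `restart`, `pause`, and `unpause` now route through an `exit_if` helper so that an empty result yields a non-zero exit status, matching the changelog entry above. The helper's exact definition is not shown in this diff, so the version below is an assumption about its shape, not the real implementation:

```python
import sys

def exit_if(condition, message, exit_code):
    """If condition holds, print a message to stderr and exit with the given code."""
    if condition:
        sys.stderr.write(message + '\n')
        sys.exit(exit_code)

containers = []  # e.g. the (empty) return value of project.start(...)
try:
    exit_if(not containers, 'No containers to start', 1)
except SystemExit as e:
    print('would exit with status %s' % e.code)
```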
@@ -482,7 +569,8 @@ class TopLevelCommand(DocoptCommand):
                              (default: 10)
         """
         timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
-        project.restart(service_names=options['SERVICE'], timeout=timeout)
+        containers = project.restart(service_names=options['SERVICE'], timeout=timeout)
+        exit_if(not containers, 'No containers to restart', 1)

     def unpause(self, project, options):
         """
@@ -490,7 +578,8 @@ class TopLevelCommand(DocoptCommand):

         Usage: unpause [SERVICE...]
         """
-        project.unpause(service_names=options['SERVICE'])
+        containers = project.unpause(service_names=options['SERVICE'])
+        exit_if(not containers, 'No containers to unpause', 1)

     def up(self, project, options):
         """
@@ -514,67 +603,47 @@ class TopLevelCommand(DocoptCommand):
         Usage: up [options] [SERVICE...]

         Options:
-            --allow-insecure-ssl   Deprecated - no effect.
-            -d                     Detached mode: Run containers in the background,
-                                   print new container names.
-            --no-color             Produce monochrome output.
-            --no-deps              Don't start linked services.
-            --force-recreate       Recreate containers even if their configuration and
-                                   image haven't changed. Incompatible with --no-recreate.
-            --no-recreate          If containers already exist, don't recreate them.
-                                   Incompatible with --force-recreate.
-            --no-build             Don't build an image, even if it's missing
-            -t, --timeout TIMEOUT  Use this timeout in seconds for container shutdown
-                                   when attached or when containers are already
-                                   running. (default: 10)
+            -d                         Detached mode: Run containers in the background,
+                                       print new container names.
+                                       Incompatible with --abort-on-container-exit.
+            --no-color                 Produce monochrome output.
+            --no-deps                  Don't start linked services.
+            --force-recreate           Recreate containers even if their configuration
+                                       and image haven't changed.
+                                       Incompatible with --no-recreate.
+            --no-recreate              If containers already exist, don't recreate them.
+                                       Incompatible with --force-recreate.
+            --no-build                 Don't build an image, even if it's missing
+            --abort-on-container-exit  Stops all containers if any container was stopped.
+                                       Incompatible with -d.
+            -t, --timeout TIMEOUT      Use this timeout in seconds for container shutdown
+                                       when attached or when containers are already
+                                       running. (default: 10)
         """
-        if options['--allow-insecure-ssl']:
-            log.warn(INSECURE_SSL_WARNING)
-
         monochrome = options['--no-color']
         start_deps = not options['--no-deps']
+        cascade_stop = options['--abort-on-container-exit']
         service_names = options['SERVICE']
         timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
         detached = options.get('-d')

-        to_attach = project.up(
-            service_names=service_names,
-            start_deps=start_deps,
-            strategy=convergence_strategy_from_opts(options),
-            do_build=not options['--no-build'],
-            timeout=timeout,
-            detached=detached
-        )
+        if detached and cascade_stop:
+            raise UserError("--abort-on-container-exit and -d cannot be combined.")

-        if not detached:
-            log_printer = build_log_printer(to_attach, service_names, monochrome)
-            attach_to_logs(project, log_printer, service_names, timeout)
+        with up_shutdown_context(project, service_names, timeout, detached):
+            to_attach = project.up(
+                service_names=service_names,
+                start_deps=start_deps,
+                strategy=convergence_strategy_from_opts(options),
+                do_build=not options['--no-build'],
+                timeout=timeout,
+                detached=detached)

-    def migrate_to_labels(self, project, _options):
-        """
-        Recreate containers to add labels
-
-        If you're coming from Compose 1.2 or earlier, you'll need to remove or
-        migrate your existing containers after upgrading Compose. This is
-        because, as of version 1.3, Compose uses Docker labels to keep track
-        of containers, and so they need to be recreated with labels added.
-
-        If Compose detects containers that were created without labels, it
-        will refuse to run so that you don't end up with two sets of them. If
-        you want to keep using your existing containers (for example, because
-        they have data volumes you want to preserve) you can migrate them with
-        the following command:
-
-            docker-compose migrate-to-labels
-
-        Alternatively, if you're not worried about keeping them, you can
-        remove them - Compose will just create new ones.
|
||||
|
||||
docker rm -f myapp_web_1 myapp_db_1 ...
|
||||
|
||||
Usage: migrate-to-labels
|
||||
"""
|
||||
legacy.migrate_project_to_labels(project)
|
||||
if detached:
|
||||
return
|
||||
log_printer = build_log_printer(to_attach, service_names, monochrome, cascade_stop)
|
||||
print("Attaching to", list_containers(log_printer.containers))
|
||||
log_printer.run()
|
||||
|
||||
def version(self, project, options):
|
||||
"""
|
||||
@ -606,6 +675,15 @@ def convergence_strategy_from_opts(options):
|
||||
return ConvergenceStrategy.changed
|
||||
|
||||
|
||||
def image_type_from_opt(flag, value):
|
||||
if not value:
|
||||
return ImageType.none
|
||||
try:
|
||||
return ImageType[value]
|
||||
except KeyError:
|
||||
raise UserError("%s flag must be one of: all, local" % flag)
|
||||
|
||||
|
||||
def run_one_off_container(container_options, project, service, options):
|
||||
if not options['--no-deps']:
|
||||
deps = service.get_linked_service_names()
|
||||
@ -615,24 +693,15 @@ def run_one_off_container(container_options, project, service, options):
|
||||
start_deps=True,
|
||||
strategy=ConvergenceStrategy.never)
|
||||
|
||||
if project.use_networking:
|
||||
project.ensure_network_exists()
|
||||
project.initialize()
|
||||
|
||||
try:
|
||||
container = service.create_container(
|
||||
quiet=True,
|
||||
one_off=True,
|
||||
**container_options)
|
||||
except APIError:
|
||||
legacy.check_for_legacy_containers(
|
||||
project.client,
|
||||
project.name,
|
||||
[service.name],
|
||||
allow_one_off=False)
|
||||
raise
|
||||
container = service.create_container(
|
||||
quiet=True,
|
||||
one_off=True,
|
||||
**container_options)
|
||||
|
||||
if options['-d']:
|
||||
container.start()
|
||||
service.start_container(container)
|
||||
print(container.name)
|
||||
return
|
||||
|
||||
@ -640,53 +709,64 @@ def run_one_off_container(container_options, project, service, options):
|
||||
if options['--rm']:
|
||||
project.client.remove_container(container.id, force=True)
|
||||
|
||||
def force_shutdown(signal, frame):
|
||||
signals.set_signal_handler_to_shutdown()
|
||||
try:
|
||||
try:
|
||||
operation = RunOperation(
|
||||
project.client,
|
||||
container.id,
|
||||
interactive=not options['-T'],
|
||||
logs=False,
|
||||
)
|
||||
pty = PseudoTerminal(project.client, operation)
|
||||
sockets = pty.sockets()
|
||||
service.start_container(container)
|
||||
pty.start(sockets)
|
||||
exit_code = container.wait()
|
||||
except signals.ShutdownException:
|
||||
project.client.stop(container.id)
|
||||
exit_code = 1
|
||||
except signals.ShutdownException:
|
||||
project.client.kill(container.id)
|
||||
remove_container(force=True)
|
||||
sys.exit(2)
|
||||
|
||||
def shutdown(signal, frame):
|
||||
set_signal_handler(force_shutdown)
|
||||
project.client.stop(container.id)
|
||||
remove_container()
|
||||
sys.exit(1)
|
||||
|
||||
set_signal_handler(shutdown)
|
||||
dockerpty.start(project.client, container.id, interactive=not options['-T'])
|
||||
exit_code = container.wait()
|
||||
remove_container()
|
||||
sys.exit(exit_code)
|
||||
|
||||
|
||||
def build_log_printer(containers, service_names, monochrome):
|
||||
def build_log_printer(containers, service_names, monochrome, cascade_stop):
|
||||
if service_names:
|
||||
containers = [
|
||||
container
|
||||
for container in containers if container.service in service_names
|
||||
]
|
||||
return LogPrinter(containers, monochrome=monochrome)
|
||||
return LogPrinter(containers, monochrome=monochrome, cascade_stop=cascade_stop)
|
||||
|
||||
|
||||
def attach_to_logs(project, log_printer, service_names, timeout):
|
||||
@contextlib.contextmanager
|
||||
def up_shutdown_context(project, service_names, timeout, detached):
|
||||
if detached:
|
||||
yield
|
||||
return
|
||||
|
||||
def force_shutdown(signal, frame):
|
||||
signals.set_signal_handler_to_shutdown()
|
||||
try:
|
||||
try:
|
||||
yield
|
||||
except signals.ShutdownException:
|
||||
print("Gracefully stopping... (press Ctrl+C again to force)")
|
||||
project.stop(service_names=service_names, timeout=timeout)
|
||||
except signals.ShutdownException:
|
||||
project.kill(service_names=service_names)
|
||||
sys.exit(2)
|
||||
|
||||
def shutdown(signal, frame):
|
||||
set_signal_handler(force_shutdown)
|
||||
print("Gracefully stopping... (press Ctrl+C again to force)")
|
||||
project.stop(service_names=service_names, timeout=timeout)
|
||||
|
||||
print("Attaching to", list_containers(log_printer.containers))
|
||||
set_signal_handler(shutdown)
|
||||
log_printer.run()
|
||||
|
||||
|
||||
def set_signal_handler(handler):
|
||||
signal.signal(signal.SIGINT, handler)
|
||||
signal.signal(signal.SIGTERM, handler)
|
||||
|
||||
|
||||
def list_containers(containers):
|
||||
return ", ".join(c.name for c in containers)
|
||||
|
||||
|
||||
def exit_if(condition, message, exit_code):
|
||||
if condition:
|
||||
log.error(message)
|
||||
raise SystemExit(exit_code)
|
||||
|
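The new `up_shutdown_context` implements two-stage Ctrl+C handling: the first signal becomes a `ShutdownException` that triggers a graceful `project.stop(...)`, and a second signal arriving during that stop escapes to the outer handler, which force-kills (the real code then calls `sys.exit(2)`). A self-contained sketch of just that control flow, with stub `stop`/`kill` callables standing in for the project methods:

```python
import contextlib


class ShutdownException(Exception):
    """Stands in for compose.cli.signals.ShutdownException."""


@contextlib.contextmanager
def up_shutdown_context(stop, kill):
    # Sketch of the nested try blocks above: the first ShutdownException
    # triggers the graceful stop; a second one raised while stopping falls
    # through to the outer handler, which force-kills.
    try:
        try:
            yield
        except ShutdownException:
            stop()
    except ShutdownException:
        kill()


calls = []

# First Ctrl+C only: the graceful stop runs to completion.
with up_shutdown_context(lambda: calls.append('stop'),
                         lambda: calls.append('kill')):
    raise ShutdownException()


def impatient_stop():
    calls.append('stop-interrupted')
    raise ShutdownException()   # simulates a second Ctrl+C arriving mid-stop


# Second Ctrl+C during the stop: fall through to the kill path.
with up_shutdown_context(impatient_stop, lambda: calls.append('kill')):
    raise ShutdownException()
```

Because `@contextlib.contextmanager` generators that catch a thrown exception without re-raising suppress it, the `with` blocks exit cleanly after the cleanup runs.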
@@ -1,4 +1,5 @@
from __future__ import absolute_import
from __future__ import unicode_literals

from threading import Thread

@@ -19,8 +20,9 @@ class Multiplexer(object):
    parallel and yielding results as they come in.
    """

    def __init__(self, iterators):
    def __init__(self, iterators, cascade_stop=False):
        self.iterators = iterators
        self.cascade_stop = cascade_stop
        self._num_running = len(iterators)
        self.queue = Queue()

@@ -35,7 +37,10 @@ class Multiplexer(object):
                    raise exception

                if item is STOP:
                    self._num_running -= 1
                    if self.cascade_stop is True:
                        break
                    else:
                        self._num_running -= 1
                else:
                    yield item
            except Empty:
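The `cascade_stop` change above means that when any one iterator puts the `STOP` sentinel on the queue, the whole loop breaks instead of merely decrementing the running count. A simplified, self-contained Python 3 sketch (the real Multiplexer also passes exceptions through the queue, elided here):

```python
from queue import Empty, Queue
from threading import Thread

STOP = object()  # sentinel, as in compose's multiplexer


class Multiplexer(object):
    """Simplified sketch of compose's Multiplexer with cascade_stop."""

    def __init__(self, iterators, cascade_stop=False):
        self.iterators = iterators
        self.cascade_stop = cascade_stop
        self._num_running = len(iterators)
        self.queue = Queue()

    def loop(self):
        for it in self.iterators:
            Thread(target=self._enqueue, args=(it,), daemon=True).start()
        while self._num_running > 0:
            try:
                item = self.queue.get(timeout=0.1)
                if item is STOP:
                    if self.cascade_stop:
                        break              # one finished stream stops everything
                    else:
                        self._num_running -= 1
                else:
                    yield item
            except Empty:
                pass

    def _enqueue(self, iterator):
        for item in iterator:
            self.queue.put(item)
        self.queue.put(STOP)               # signal that this stream is done


# Without cascade_stop, every stream is drained; with it, the first STOP ends
# the loop (deterministic here because there is only one stream).
drain_all = sorted(Multiplexer([iter([1, 2]), iter([3, 4])]).loop())
cascaded = list(Multiplexer([iter(['a', 'b'])], cascade_stop=True).loop())
```

This is what lets `docker-compose up --abort-on-container-exit` stop attaching as soon as any container's log stream ends.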
compose/cli/signals.py (new file, 21 lines)
@@ -0,0 +1,21 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import signal


class ShutdownException(Exception):
    pass


def shutdown(signal, frame):
    raise ShutdownException()


def set_signal_handler(handler):
    signal.signal(signal.SIGINT, handler)
    signal.signal(signal.SIGTERM, handler)


def set_signal_handler_to_shutdown():
    set_signal_handler(shutdown)
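The new module converts SIGINT/SIGTERM into a catchable `ShutdownException` instead of the default `KeyboardInterrupt`, so cleanup code can use ordinary `try/except`. A small demonstration that installs the handler and then invokes it directly, the way the interpreter would on Ctrl+C:

```python
import signal


class ShutdownException(Exception):
    pass


def shutdown(signum, frame):
    raise ShutdownException()


# Install for SIGINT as the module above does (SIGTERM omitted for brevity).
signal.signal(signal.SIGINT, shutdown)

caught = False
try:
    # Call the installed handler directly rather than delivering a real signal.
    signal.getsignal(signal.SIGINT)(signal.SIGINT, None)
except ShutdownException:
    caught = True
```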
@@ -1,3 +1,6 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import functools
import logging
import pprint
@@ -1,4 +1,7 @@
# flake8: noqa
from __future__ import absolute_import
from __future__ import unicode_literals

from .config import ConfigurationError
from .config import DOCKER_CONFIG_KEYS
from .config import find
@@ -1,28 +1,43 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import codecs
import functools
import logging
import operator
import os
import string
import sys
from collections import namedtuple

import six
import yaml
from cached_property import cached_property

from ..const import COMPOSEFILE_V1 as V1
from ..const import COMPOSEFILE_V2_0 as V2_0
from .errors import CircularReference
from .errors import ComposeFileNotFound
from .errors import ConfigurationError
from .errors import VERSION_EXPLANATION
from .interpolation import interpolate_environment_variables
from .sort_services import get_service_name_from_net
from .sort_services import get_container_name_from_network_mode
from .sort_services import get_service_name_from_network_mode
from .sort_services import sort_service_dicts
from .types import parse_extra_hosts
from .types import parse_restart_spec
from .types import ServiceLink
from .types import VolumeFromSpec
from .types import VolumeSpec
from .validation import match_named_volumes
from .validation import validate_against_fields_schema
from .validation import validate_against_service_schema
from .validation import validate_depends_on
from .validation import validate_extends_file_path
from .validation import validate_network_mode
from .validation import validate_top_level_object
from .validation import validate_top_level_service_objects
from .validation import validate_ulimits


DOCKER_CONFIG_KEYS = [
@@ -30,6 +45,7 @@ DOCKER_CONFIG_KEYS = [
    'cap_drop',
    'cgroup_parent',
    'command',
    'cpu_quota',
    'cpu_shares',
    'cpuset',
    'detach',
@@ -46,8 +62,6 @@ DOCKER_CONFIG_KEYS = [
    'ipc',
    'labels',
    'links',
    'log_driver',
    'log_opt',
    'mac_address',
    'mem_limit',
    'memswap_limit',
@@ -59,6 +73,7 @@ DOCKER_CONFIG_KEYS = [
    'restart',
    'security_opt',
    'stdin_open',
    'stop_signal',
    'tty',
    'user',
    'volume_driver',
@@ -71,8 +86,7 @@ ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
    'build',
    'container_name',
    'dockerfile',
    'expose',
    'external_links',
    'logging',
]

DOCKER_VALID_URL_PREFIXES = (
@@ -86,12 +100,11 @@ DOCKER_VALID_URL_PREFIXES = (
SUPPORTED_FILENAMES = [
    'docker-compose.yml',
    'docker-compose.yaml',
    'fig.yml',
    'fig.yaml',
]

DEFAULT_OVERRIDE_FILENAME = 'docker-compose.override.yml'


log = logging.getLogger(__name__)


@@ -116,6 +129,64 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
    def from_filename(cls, filename):
        return cls(filename, load_yaml(filename))

    @cached_property
    def version(self):
        if 'version' not in self.config:
            return V1

        version = self.config['version']

        if isinstance(version, dict):
            log.warn('Unexpected type for "version" key in "{}". Assuming '
                     '"version" is the name of a service, and defaulting to '
                     'Compose file version 1.'.format(self.filename))
            return V1

        if not isinstance(version, six.string_types):
            raise ConfigurationError(
                'Version in "{}" is invalid - it should be a string.'
                .format(self.filename))

        if version == '1':
            raise ConfigurationError(
                'Version in "{}" is invalid. {}'
                .format(self.filename, VERSION_EXPLANATION))

        if version == '2':
            version = V2_0

        if version != V2_0:
            raise ConfigurationError(
                'Version in "{}" is unsupported. {}'
                .format(self.filename, VERSION_EXPLANATION))

        return version
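The version-detection rules above (no `version` key means v1, a dict named `version` is assumed to be a service and also means v1, `'2'` maps to v2.0, and anything else is rejected) can be sketched standalone. `V1`/`V2_0` here are simplified stand-ins for the constants in `compose.const`:

```python
V1 = 1        # stand-in for compose.const.COMPOSEFILE_V1
V2_0 = '2.0'  # stand-in for compose.const.COMPOSEFILE_V2_0


def detect_version(config):
    """Sketch of ConfigFile.version's decision tree."""
    if 'version' not in config:
        return V1                      # legacy files have no top-level key
    version = config['version']
    if isinstance(version, dict):
        return V1                      # "version" is probably a service name
    if not isinstance(version, str):
        raise ValueError('version should be a string')
    if version == '1':
        raise ValueError('v1 files must be written without a version key')
    if version == '2':
        version = V2_0
    if version != V2_0:
        raise ValueError('unsupported version')
    return version
```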
    def get_service(self, name):
        return self.get_service_dicts()[name]

    def get_service_dicts(self):
        return self.config if self.version == V1 else self.config.get('services', {})

    def get_volumes(self):
        return {} if self.version == V1 else self.config.get('volumes', {})

    def get_networks(self):
        return {} if self.version == V1 else self.config.get('networks', {})


class Config(namedtuple('_Config', 'version services volumes networks')):
    """
    :param version: configuration version
    :type  version: int
    :param services: List of service description dictionaries
    :type  services: :class:`list`
    :param volumes: Dictionary mapping volume names to description dictionaries
    :type  volumes: :class:`dict`
    :param networks: Dictionary mapping network names to description dictionaries
    :type  networks: :class:`dict`
    """


class ServiceConfig(namedtuple('_ServiceConfig', 'working_dir filename name config')):

@@ -148,6 +219,22 @@ def find(base_dir, filenames):
        [ConfigFile.from_filename(f) for f in filenames])


def validate_config_version(config_files):
    main_file = config_files[0]
    validate_top_level_object(main_file)
    for next_file in config_files[1:]:
        validate_top_level_object(next_file)

        if main_file.version != next_file.version:
            raise ConfigurationError(
                "Version mismatch: file {0} specifies version {1} but "
                "extension file {2} uses version {3}".format(
                    main_file.filename,
                    main_file.version,
                    next_file.filename,
                    next_file.version))
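The multi-file check above simply compares every override file's detected version against the first file's. A minimal sketch with a plain namedtuple standing in for `ConfigFile` (and `ValueError` for `ConfigurationError`):

```python
from collections import namedtuple

# Simplified stand-in for compose.config.ConfigFile: version is precomputed.
StubConfigFile = namedtuple('StubConfigFile', 'filename version')


def validate_config_version(config_files):
    main_file = config_files[0]
    for next_file in config_files[1:]:
        if main_file.version != next_file.version:
            raise ValueError(
                "Version mismatch: file {0} specifies version {1} but "
                "extension file {2} uses version {3}".format(
                    main_file.filename, main_file.version,
                    next_file.filename, next_file.version))


# Matching versions pass silently; a mismatch is rejected up front.
validate_config_version([
    StubConfigFile('docker-compose.yml', '2.0'),
    StubConfigFile('docker-compose.override.yml', '2.0'),
])
```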
def get_default_config_files(base_dir):
    (candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir)

@@ -160,15 +247,6 @@ def get_default_config_files(base_dir):
    log.warn("Found multiple config files with supported names: %s", ", ".join(candidates))
    log.warn("Using %s\n", winner)

    if winner == 'docker-compose.yaml':
        log.warn("Please be aware that .yml is the expected extension "
                 "in most cases, and using .yaml can cause compatibility "
                 "issues in future.\n")

    if winner.startswith("fig."):
        log.warn("%s is deprecated and will not be supported in future. "
                 "Please rename your config file to docker-compose.yml\n" % winner)

    return [os.path.join(path, winner)] + get_default_override_file(path)


@@ -203,28 +281,82 @@ def load(config_details):

    Return a fully interpolated, extended and validated configuration.
    """
    validate_config_version(config_details.config_files)

    def build_service(filename, service_name, service_dict):
    processed_files = [
        process_config_file(config_file)
        for config_file in config_details.config_files
    ]
    config_details = config_details._replace(config_files=processed_files)

    main_file = config_details.config_files[0]
    volumes = load_mapping(config_details.config_files, 'get_volumes', 'Volume')
    networks = load_mapping(config_details.config_files, 'get_networks', 'Network')
    service_dicts = load_services(
        config_details.working_dir,
        main_file,
        [file.get_service_dicts() for file in config_details.config_files])

    if main_file.version != V1:
        for service_dict in service_dicts:
            match_named_volumes(service_dict, volumes)

    return Config(main_file.version, service_dicts, volumes, networks)


def load_mapping(config_files, get_func, entity_type):
    mapping = {}

    for config_file in config_files:
        for name, config in getattr(config_file, get_func)().items():
            mapping[name] = config or {}
            if not config:
                continue

            external = config.get('external')
            if external:
                if len(config.keys()) > 1:
                    raise ConfigurationError(
                        '{} {} declared as external but specifies'
                        ' additional attributes ({}). '.format(
                            entity_type,
                            name,
                            ', '.join([k for k in config.keys() if k != 'external'])
                        )
                    )
                if isinstance(external, dict):
                    config['external_name'] = external.get('name')
                else:
                    config['external_name'] = name

            mapping[name] = config

    return mapping
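`load_mapping` above treats the two external forms differently: `external: true` reuses the declared name, `external: {name: ...}` records the given name, and any extra attributes alongside `external` are rejected. A sketch of just that branch for a single volume/network entry:

```python
def resolve_external_name(name, config):
    """Sketch of load_mapping's external handling for one volume/network."""
    external = config.get('external')
    if not external:
        return None
    extra = [k for k in config if k != 'external']
    if extra:
        raise ValueError(
            '{} declared as external but specifies additional attributes '
            '({})'.format(name, ', '.join(extra)))
    if isinstance(external, dict):
        return external.get('name')   # external: {name: actual-name}
    return name                       # external: true -> reuse own name
```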
def load_services(working_dir, config_file, service_configs):
    def build_service(service_name, service_dict, service_names):
        service_config = ServiceConfig.with_abs_paths(
            config_details.working_dir,
            filename,
            working_dir,
            config_file.filename,
            service_name,
            service_dict)
        resolver = ServiceExtendsResolver(service_config)
        resolver = ServiceExtendsResolver(service_config, config_file)
        service_dict = process_service(resolver.run())

        # TODO: move to validate_service()
        validate_against_service_schema(service_dict, service_config.name)
        validate_paths(service_dict)

        service_dict = finalize_service(service_config._replace(config=service_dict))
        service_dict['name'] = service_config.name
        service_config = service_config._replace(config=service_dict)
        validate_service(service_config, service_names, config_file.version)
        service_dict = finalize_service(
            service_config,
            service_names,
            config_file.version)
        return service_dict

    def build_services(config_file):
    def build_services(service_config):
        service_names = service_config.keys()
        return sort_service_dicts([
            build_service(config_file.filename, name, service_dict)
            for name, service_dict in config_file.config.items()
            build_service(name, service_dict, service_names)
            for name, service_dict in service_config.items()
        ])

    def merge_services(base, override):
@@ -232,38 +364,52 @@ def load(config_details):
        return {
            name: merge_service_dicts_from_files(
                base.get(name, {}),
                override.get(name, {}))
                override.get(name, {}),
                config_file.version)
            for name in all_service_names
        }

    config_file = process_config_file(config_details.config_files[0])
    for next_file in config_details.config_files[1:]:
        next_file = process_config_file(next_file)
    service_config = service_configs[0]
    for next_config in service_configs[1:]:
        service_config = merge_services(service_config, next_config)

        config = merge_services(config_file.config, next_file.config)
        config_file = config_file._replace(config=config)

    return build_services(config_file)
    return build_services(service_config)


def process_config_file(config_file, service_name=None):
    validate_top_level_object(config_file)
    processed_config = interpolate_environment_variables(config_file.config)
    validate_against_fields_schema(processed_config, config_file.filename)
    service_dicts = config_file.get_service_dicts()
    validate_top_level_service_objects(config_file.filename, service_dicts)

    if service_name and service_name not in processed_config:
    interpolated_config = interpolate_environment_variables(service_dicts, 'service')

    if config_file.version == V2_0:
        processed_config = dict(config_file.config)
        processed_config['services'] = services = interpolated_config
        processed_config['volumes'] = interpolate_environment_variables(
            config_file.get_volumes(), 'volume')
        processed_config['networks'] = interpolate_environment_variables(
            config_file.get_networks(), 'network')

    if config_file.version == V1:
        processed_config = services = interpolated_config

    config_file = config_file._replace(config=processed_config)
    validate_against_fields_schema(config_file)

    if service_name and service_name not in services:
        raise ConfigurationError(
            "Cannot extend service '{}' in {}: Service not found".format(
                service_name, config_file.filename))

    return config_file._replace(config=processed_config)
    return config_file
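`process_config_file` now interpolates environment variables section by section (`services`, `volumes`, `networks`) before schema validation. A standalone sketch of that substitution step using the standard library's `string.Template` (Compose's `interpolation` module is built around a `Template` subclass); the `env` mapping here is a hypothetical example:

```python
from string import Template


def interpolate(value, env):
    """Recursively substitute $VAR / ${VAR} in strings (sketch)."""
    if isinstance(value, str):
        return Template(value).substitute(env)
    if isinstance(value, dict):
        return {k: interpolate(v, env) for k, v in value.items()}
    if isinstance(value, list):
        return [interpolate(v, env) for v in value]
    return value


config = {'services': {'web': {'image': 'repo/web:${TAG}'}}}
result = interpolate(config, {'TAG': '1.6'})
```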
class ServiceExtendsResolver(object):
    def __init__(self, service_config, already_seen=None):
    def __init__(self, service_config, config_file, already_seen=None):
        self.service_config = service_config
        self.working_dir = service_config.working_dir
        self.already_seen = already_seen or []
        self.config_file = config_file

    @property
    def signature(self):
@@ -290,10 +436,13 @@ class ServiceExtendsResolver(object):
        config_path = self.get_extended_config_path(extends)
        service_name = extends['service']

        extends_file = ConfigFile.from_filename(config_path)
        validate_config_version([self.config_file, extends_file])
        extended_file = process_config_file(
            ConfigFile.from_filename(config_path),
            extends_file,
            service_name=service_name)
        service_config = extended_file.config[service_name]
        service_config = extended_file.get_service(service_name)

        return config_path, service_config, service_name

    def resolve_extends(self, extended_config_path, service_dict, service_name):
@@ -303,6 +452,7 @@ class ServiceExtendsResolver(object):
                extended_config_path,
                service_name,
                service_dict),
            self.config_file,
            already_seen=self.already_seen + [self.signature])

        service_config = resolver.run()
@@ -310,10 +460,12 @@ class ServiceExtendsResolver(object):
        validate_extended_service_dict(
            other_service_dict,
            extended_config_path,
            service_name,
        )
            service_name)

        return merge_service_dicts(other_service_dict, self.service_config.config)
        return merge_service_dicts(
            other_service_dict,
            self.service_config.config,
            self.config_file.version)

    def get_extended_config_path(self, extends_options):
        """Service we are extending either has a value for 'file' set, which we
@@ -342,6 +494,11 @@ def resolve_environment(service_dict):
    return dict(resolve_env_var(k, v) for k, v in six.iteritems(env))


def resolve_build_args(build):
    args = parse_build_arguments(build.get('args'))
    return dict(resolve_env_var(k, v) for k, v in six.iteritems(args))


def validate_extended_service_dict(service_dict, filename, service):
    error_prefix = "Cannot extend service '%s' in %s:" % (service, filename)

@@ -354,21 +511,37 @@ def validate_extended_service_dict(service_dict, filename, service):
            "%s services with 'volumes_from' cannot be extended" % error_prefix)

    if 'net' in service_dict:
        if get_service_name_from_net(service_dict['net']) is not None:
        if get_container_name_from_network_mode(service_dict['net']):
            raise ConfigurationError(
                "%s services with 'net: container' cannot be extended" % error_prefix)

    if 'network_mode' in service_dict:
        if get_service_name_from_network_mode(service_dict['network_mode']):
            raise ConfigurationError(
                "%s services with 'network_mode: service' cannot be extended" % error_prefix)

def validate_ulimits(ulimit_config):
    for limit_name, soft_hard_values in six.iteritems(ulimit_config):
        if isinstance(soft_hard_values, dict):
            if not soft_hard_values['soft'] <= soft_hard_values['hard']:
                raise ConfigurationError(
                    "ulimit_config \"{}\" cannot contain a 'soft' value higher "
                    "than 'hard' value".format(ulimit_config))
    if 'depends_on' in service_dict:
        raise ConfigurationError(
            "%s services with 'depends_on' cannot be extended" % error_prefix)


def validate_service(service_config, service_names, version):
    service_dict, service_name = service_config.config, service_config.name
    validate_against_service_schema(service_dict, service_name, version)
    validate_paths(service_dict)

    validate_ulimits(service_config)
    validate_network_mode(service_config, service_names)
    validate_depends_on(service_config, service_names)

    if not service_dict.get('image') and has_uppercase(service_name):
        raise ConfigurationError(
            "Service '{name}' contains uppercase characters which are not valid "
            "as part of an image name. Either use a lowercase service name or "
            "use the `image` field to set a custom name for the service image."
            .format(name=service_name))


# TODO: rename to normalize_service
def process_service(service_config):
    working_dir = service_config.working_dir
    service_dict = dict(service_config.config)
@@ -379,26 +552,30 @@ def process_service(service_config):
        for path in to_list(service_dict['env_file'])
    ]

    if 'build' in service_dict:
        if isinstance(service_dict['build'], six.string_types):
            service_dict['build'] = resolve_build_path(working_dir, service_dict['build'])
        elif isinstance(service_dict['build'], dict) and 'context' in service_dict['build']:
            path = service_dict['build']['context']
            service_dict['build']['context'] = resolve_build_path(working_dir, path)

    if 'volumes' in service_dict and service_dict.get('volume_driver') is None:
        service_dict['volumes'] = resolve_volume_paths(working_dir, service_dict)

    if 'build' in service_dict:
        service_dict['build'] = resolve_build_path(working_dir, service_dict['build'])

    if 'labels' in service_dict:
        service_dict['labels'] = parse_labels(service_dict['labels'])

    if 'extra_hosts' in service_dict:
        service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts'])

    # TODO: move to a validate_service()
    if 'ulimits' in service_dict:
        validate_ulimits(service_dict['ulimits'])
    for field in ['dns', 'dns_search']:
        if field in service_dict:
            service_dict[field] = to_list(service_dict[field])

    return service_dict
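The ulimits rule enforced above is simple to state on its own: when a limit is given in dict form, its `soft` value may not exceed `hard`; plain integer limits are not checked. A minimal sketch (using `ValueError` in place of `ConfigurationError`):

```python
def check_ulimits(ulimit_config):
    """Reject any {'soft': s, 'hard': h} entry with s > h (sketch)."""
    for limit_name, values in ulimit_config.items():
        if isinstance(values, dict) and not values['soft'] <= values['hard']:
            raise ValueError(
                "ulimit '{}' cannot contain a 'soft' value higher than "
                "'hard' value".format(limit_name))


check_ulimits({'nofile': {'soft': 1024, 'hard': 2048}})  # fine
check_ulimits({'nproc': 65535})                          # plain int: no check
```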
def finalize_service(service_config):
def finalize_service(service_config, service_names, version):
    service_dict = dict(service_config.config)

    if 'environment' in service_dict or 'env_file' in service_dict:
@@ -407,78 +584,169 @@ def finalize_service(service_config):

    if 'volumes_from' in service_dict:
        service_dict['volumes_from'] = [
            VolumeFromSpec.parse(vf) for vf in service_dict['volumes_from']]
            VolumeFromSpec.parse(vf, service_names, version)
            for vf in service_dict['volumes_from']
        ]

    if 'volumes' in service_dict:
        service_dict['volumes'] = [
            VolumeSpec.parse(v) for v in service_dict['volumes']]

    if 'net' in service_dict:
        network_mode = service_dict.pop('net')
        container_name = get_container_name_from_network_mode(network_mode)
        if container_name and container_name in service_names:
            service_dict['network_mode'] = 'service:{}'.format(container_name)
        else:
            service_dict['network_mode'] = network_mode

    if 'restart' in service_dict:
        service_dict['restart'] = parse_restart_spec(service_dict['restart'])

    normalize_build(service_dict, service_config.working_dir)

    service_dict['name'] = service_config.name
    return normalize_v1_service_format(service_dict)


def normalize_v1_service_format(service_dict):
    if 'log_driver' in service_dict or 'log_opt' in service_dict:
        if 'logging' not in service_dict:
            service_dict['logging'] = {}
        if 'log_driver' in service_dict:
            service_dict['logging']['driver'] = service_dict['log_driver']
            del service_dict['log_driver']
        if 'log_opt' in service_dict:
            service_dict['logging']['options'] = service_dict['log_opt']
            del service_dict['log_opt']

    if 'dockerfile' in service_dict:
        service_dict['build'] = service_dict.get('build', {})
        service_dict['build'].update({
            'dockerfile': service_dict.pop('dockerfile')
        })

    return service_dict
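`normalize_v1_service_format` above folds the v1 keys `log_driver`/`log_opt` into a v2-style `logging` mapping and a top-level `dockerfile` into `build`. A condensed, self-contained restatement with a sample transformation (the input dict is an illustrative example, not taken from the source):

```python
def normalize_v1(service_dict):
    """Condensed restatement of normalize_v1_service_format, for demonstration."""
    service_dict = dict(service_dict)
    if 'log_driver' in service_dict or 'log_opt' in service_dict:
        logging_conf = service_dict.setdefault('logging', {})
        if 'log_driver' in service_dict:
            logging_conf['driver'] = service_dict.pop('log_driver')
        if 'log_opt' in service_dict:
            logging_conf['options'] = service_dict.pop('log_opt')
    if 'dockerfile' in service_dict:
        service_dict.setdefault('build', {})['dockerfile'] = \
            service_dict.pop('dockerfile')
    return service_dict


v1_service = {
    'image': 'busybox',
    'log_driver': 'syslog',
    'log_opt': {'tag': 'web'},
    'dockerfile': 'Dockerfile-alt',
}
v2_service = normalize_v1(v1_service)
```

This is part of how Compose 1.6 keeps v1 files working: they are normalized to the v2 internal shape before the rest of the pipeline runs.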
def merge_service_dicts_from_files(base, override):
|
||||
def merge_service_dicts_from_files(base, override, version):
|
||||
"""When merging services from multiple files we need to merge the `extends`
|
||||
field. This is not handled by `merge_service_dicts()` which is used to
|
||||
perform the `extends`.
|
||||
"""
|
||||
new_service = merge_service_dicts(base, override)
|
||||
new_service = merge_service_dicts(base, override, version)
|
||||
if 'extends' in override:
|
||||
new_service['extends'] = override['extends']
|
||||
elif 'extends' in base:
|
||||
new_service['extends'] = base['extends']
|
||||
return new_service


def merge_service_dicts(base, override):
    d = base.copy()

    if 'environment' in base or 'environment' in override:
        d['environment'] = merge_environment(
            base.get('environment'),
            override.get('environment'),
        )

    path_mapping_keys = ['volumes', 'devices']

    for key in path_mapping_keys:
        if key in base or key in override:
            d[key] = merge_path_mappings(
                base.get(key),
                override.get(key),
            )

    if 'labels' in base or 'labels' in override:
        d['labels'] = merge_labels(
            base.get('labels'),
            override.get('labels'),
        )

    if 'image' in override and 'build' in d:
        del d['build']

    if 'build' in override and 'image' in d:
        del d['image']

    list_keys = ['ports', 'expose', 'external_links']

    for key in list_keys:
        if key in base or key in override:
            d[key] = base.get(key, []) + override.get(key, [])

    list_or_string_keys = ['dns', 'dns_search', 'env_file']

    for key in list_or_string_keys:
        if key in base or key in override:
            d[key] = to_list(base.get(key)) + to_list(override.get(key))

    already_merged_keys = ['environment', 'labels'] + path_mapping_keys + list_keys + list_or_string_keys

    for k in set(ALLOWED_KEYS) - set(already_merged_keys):
        if k in override:
            d[k] = override[k]

    return d


class MergeDict(dict):
    """A dict-like object responsible for merging two dicts into one."""

    def __init__(self, base, override):
        self.base = base
        self.override = override

    def needs_merge(self, field):
        return field in self.base or field in self.override

    def merge_field(self, field, merge_func, default=None):
        if not self.needs_merge(field):
            return

        self[field] = merge_func(
            self.base.get(field, default),
            self.override.get(field, default))

    def merge_mapping(self, field, parse_func):
        if not self.needs_merge(field):
            return

        self[field] = parse_func(self.base.get(field))
        self[field].update(parse_func(self.override.get(field)))

    def merge_sequence(self, field, parse_func):
        def parse_sequence_func(seq):
            return to_mapping((parse_func(item) for item in seq), 'merge_field')

        if not self.needs_merge(field):
            return

        merged = parse_sequence_func(self.base.get(field, []))
        merged.update(parse_sequence_func(self.override.get(field, [])))
        self[field] = [item.repr() for item in merged.values()]

    def merge_scalar(self, field):
        if self.needs_merge(field):
            self[field] = self.override.get(field, self.base.get(field))


def merge_service_dicts(base, override, version):
    md = MergeDict(base, override)

    md.merge_mapping('environment', parse_environment)
    md.merge_mapping('labels', parse_labels)
    md.merge_mapping('ulimits', parse_ulimits)
    md.merge_sequence('links', ServiceLink.parse)

    for field in ['volumes', 'devices']:
        md.merge_field(field, merge_path_mappings)

    for field in [
        'depends_on',
        'expose',
        'external_links',
        'ports',
        'volumes_from',
    ]:
        md.merge_field(field, operator.add, default=[])

    for field in ['dns', 'dns_search', 'env_file']:
        md.merge_field(field, merge_list_or_string)

    for field in set(ALLOWED_KEYS) - set(md):
        md.merge_scalar(field)

    if version == V1:
        legacy_v1_merge_image_or_build(md, base, override)
    else:
        merge_build(md, base, override)

    return dict(md)
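The MergeDict pattern above is easiest to see in miniature. Below is a reduced, runnable sketch using only `merge_field` and `merge_scalar`; the sample fields (`image`, `ports`, `user`) are illustrative, not the real ALLOWED_KEYS set:

```python
import operator

class MergeDict(dict):
    """Merge a base dict and an override dict into one result (reduced sketch)."""

    def __init__(self, base, override):
        self.base = base
        self.override = override

    def needs_merge(self, field):
        return field in self.base or field in self.override

    def merge_field(self, field, merge_func, default=None):
        # Combine both sides with merge_func (e.g. list concatenation).
        if self.needs_merge(field):
            self[field] = merge_func(
                self.base.get(field, default),
                self.override.get(field, default))

    def merge_scalar(self, field):
        # The override wins whenever it defines the field at all.
        if self.needs_merge(field):
            self[field] = self.override.get(field, self.base.get(field))

base = {'image': 'busybox', 'ports': ['80'], 'user': 'root'}
override = {'ports': ['443'], 'user': 'web'}

md = MergeDict(base, override)
md.merge_field('ports', operator.add, default=[])
md.merge_scalar('image')
md.merge_scalar('user')
print(dict(md))  # {'ports': ['80', '443'], 'image': 'busybox', 'user': 'web'}
```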


def merge_build(output, base, override):
    build = {}

    if 'build' in base:
        if isinstance(base['build'], six.string_types):
            build['context'] = base['build']
        else:
            build.update(base['build'])

    if 'build' in override:
        if isinstance(override['build'], six.string_types):
            build['context'] = override['build']
        else:
            build.update(override['build'])

    if build:
        output['build'] = build


def legacy_v1_merge_image_or_build(output, base, override):
    output.pop('image', None)
    output.pop('build', None)
    if 'image' in override:
        output['image'] = override['image']
    elif 'build' in override:
        output['build'] = override['build']
    elif 'image' in base:
        output['image'] = base['image']
    elif 'build' in base:
        output['build'] = base['build']
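`merge_build` above can be exercised standalone. This sketch swaps `six.string_types` for a plain `isinstance(..., str)` and folds the two symmetric branches into one loop, but keeps the same semantics: a string build becomes the `context`, and the override's keys win:

```python
def merge_build(output, base, override):
    build = {}
    # Apply base first, then override, so override keys take precedence.
    for source in (base, override):
        if 'build' in source:
            if isinstance(source['build'], str):
                # Plain-string shorthand: the string is the build context.
                build['context'] = source['build']
            else:
                build.update(source['build'])
    if build:
        output['build'] = build

out = {}
merge_build(out, {'build': './app'}, {'build': {'dockerfile': 'Dockerfile.dev'}})
print(out)  # {'build': {'context': './app', 'dockerfile': 'Dockerfile.dev'}}
```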
@ -487,22 +755,6 @@ def merge_environment(base, override):
    return env


def parse_environment(environment):
    if not environment:
        return {}

    if isinstance(environment, list):
        return dict(split_env(e) for e in environment)

    if isinstance(environment, dict):
        return dict(environment)

    raise ConfigurationError(
        "environment \"%s\" must be a list or mapping," %
        environment
    )


def split_env(env):
    if isinstance(env, six.binary_type):
        env = env.decode('utf-8', 'replace')
@ -512,6 +764,42 @@ def split_env(env):
    return env, None


def split_label(label):
    if '=' in label:
        return label.split('=', 1)
    else:
        return label, ''


def parse_dict_or_list(split_func, type_name, arguments):
    if not arguments:
        return {}

    if isinstance(arguments, list):
        return dict(split_func(e) for e in arguments)

    if isinstance(arguments, dict):
        return dict(arguments)

    raise ConfigurationError(
        "%s \"%s\" must be a list or mapping," %
        (type_name, arguments)
    )


parse_build_arguments = functools.partial(parse_dict_or_list, split_env, 'build arguments')
parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment')
parse_labels = functools.partial(parse_dict_or_list, split_label, 'labels')


def parse_ulimits(ulimits):
    if not ulimits:
        return {}

    if isinstance(ulimits, dict):
        return dict(ulimits)


def resolve_env_var(key, val):
    if val is not None:
        return key, val
@ -555,6 +843,21 @@ def resolve_volume_path(working_dir, volume):
    return container_path
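The `parse_dict_or_list` helper above replaces separate list-or-mapping parsers with one function specialized via `functools.partial`. A self-contained sketch of the same idea (`ConfigurationError` is replaced with `ValueError` here to keep it dependency-free):

```python
import functools

def split_env(env):
    # "KEY=VALUE" -> ("KEY", "VALUE"); bare "KEY" -> ("KEY", None)
    if '=' in env:
        return env.split('=', 1)
    return env, None

def split_label(label):
    # Labels default to an empty string instead of None.
    if '=' in label:
        return label.split('=', 1)
    return label, ''

def parse_dict_or_list(split_func, type_name, arguments):
    if not arguments:
        return {}
    if isinstance(arguments, list):
        return dict(split_func(e) for e in arguments)
    if isinstance(arguments, dict):
        return dict(arguments)
    raise ValueError('%s "%s" must be a list or mapping' % (type_name, arguments))

parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment')
parse_labels = functools.partial(parse_dict_or_list, split_label, 'labels')

print(parse_environment(['DEBUG=1', 'TERM']))  # {'DEBUG': '1', 'TERM': None}
print(parse_labels({'com.example.role': 'web'}))
```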
def normalize_build(service_dict, working_dir):

    if 'build' in service_dict:
        build = {}
        # Shortcut where specifying a string is treated as the build context
        if isinstance(service_dict['build'], six.string_types):
            build['context'] = service_dict.pop('build')
        else:
            build.update(service_dict['build'])
            if 'args' in build:
                build['args'] = resolve_build_args(build)

        service_dict['build'] = build


def resolve_build_path(working_dir, build_path):
    if is_url(build_path):
        return build_path
@ -567,7 +870,13 @@ def is_url(build_path):

def validate_paths(service_dict):
    if 'build' in service_dict:
        build_path = service_dict['build']
        build = service_dict.get('build', {})

        if isinstance(build, six.string_types):
            build_path = build
        elif isinstance(build, dict) and 'context' in build:
            build_path = build['context']

        if (
            not is_url(build_path) and
            (not os.path.exists(build_path) or not os.access(build_path, os.R_OK))
@ -622,34 +931,14 @@ def join_path_mapping(pair):
        return ":".join((host, container))
def merge_labels(base, override):
    labels = parse_labels(base)
    labels.update(parse_labels(override))
    return labels


def parse_labels(labels):
    if not labels:
        return {}

    if isinstance(labels, list):
        return dict(split_label(e) for e in labels)

    if isinstance(labels, dict):
        return dict(labels)


def split_label(label):
    if '=' in label:
        return label.split('=', 1)
    else:
        return label, ''


def expand_path(working_dir, path):
    return os.path.abspath(os.path.join(working_dir, os.path.expanduser(path)))


def merge_list_or_string(base, override):
    return to_list(base) + to_list(override)


def to_list(value):
    if value is None:
        return []
@ -659,6 +948,14 @@ def to_list(value):
    return value


def to_mapping(sequence, key_field):
    return {getattr(item, key_field): item for item in sequence}


def has_uppercase(name):
    return any(char in string.ascii_uppercase for char in name)


def load_yaml(filename):
    try:
        with open(filename, 'r') as fh:
@ -1,3 +1,15 @@
from __future__ import absolute_import
from __future__ import unicode_literals


VERSION_EXPLANATION = (
    'Either specify a version of "2" (or "2.0") and place your service '
    'definitions under the `services` key, or omit the `version` key and place '
    'your service definitions at the root of the file to use version 1.\n'
    'For more on the Compose file format versions, see '
    'https://docs.docker.com/compose/compose-file/')


class ConfigurationError(Exception):
    def __init__(self, msg):
        self.msg = msg
13
compose/config/fields_schema_v1.json
Normal file
@ -0,0 +1,13 @@
{
    "$schema": "http://json-schema.org/draft-04/schema#",

    "type": "object",
    "id": "fields_schema_v1.json",

    "patternProperties": {
        "^[a-zA-Z0-9._-]+$": {
            "$ref": "service_schema_v1.json#/definitions/service"
        }
    },
    "additionalProperties": false
}
96
compose/config/fields_schema_v2.0.json
Normal file
@ -0,0 +1,96 @@
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "id": "fields_schema_v2.0.json",

    "properties": {
        "version": {
            "type": "string"
        },
        "services": {
            "id": "#/properties/services",
            "type": "object",
            "patternProperties": {
                "^[a-zA-Z0-9._-]+$": {
                    "$ref": "service_schema_v2.0.json#/definitions/service"
                }
            },
            "additionalProperties": false
        },
        "networks": {
            "id": "#/properties/networks",
            "type": "object",
            "patternProperties": {
                "^[a-zA-Z0-9._-]+$": {
                    "$ref": "#/definitions/network"
                }
            }
        },
        "volumes": {
            "id": "#/properties/volumes",
            "type": "object",
            "patternProperties": {
                "^[a-zA-Z0-9._-]+$": {
                    "$ref": "#/definitions/volume"
                }
            },
            "additionalProperties": false
        }
    },

    "definitions": {
        "network": {
            "id": "#/definitions/network",
            "type": "object",
            "properties": {
                "driver": {"type": "string"},
                "driver_opts": {
                    "type": "object",
                    "patternProperties": {
                        "^.+$": {"type": ["string", "number"]}
                    }
                },
                "ipam": {
                    "type": "object",
                    "properties": {
                        "driver": {"type": "string"},
                        "config": {
                            "type": "array"
                        }
                    },
                    "additionalProperties": false
                },
                "external": {
                    "type": ["boolean", "object"],
                    "properties": {
                        "name": {"type": "string"}
                    },
                    "additionalProperties": false
                }
            },
            "additionalProperties": false
        },
        "volume": {
            "id": "#/definitions/volume",
            "type": ["object", "null"],
            "properties": {
                "driver": {"type": "string"},
                "driver_opts": {
                    "type": "object",
                    "patternProperties": {
                        "^.+$": {"type": ["string", "number"]}
                    }
                },
                "external": {
                    "type": ["boolean", "object"],
                    "properties": {
                        "name": {"type": "string"}
                    }
                },
                "additionalProperties": false
            },
            "additionalProperties": false
        }
    },
    "additionalProperties": false
}
@ -1,3 +1,6 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import logging
import os
from string import Template
@ -8,35 +11,32 @@ from .errors import ConfigurationError
log = logging.getLogger(__name__)


def interpolate_environment_variables(config):
def interpolate_environment_variables(config, section):
    mapping = BlankDefaultDict(os.environ)

    def process_item(name, config_dict):
        return dict(
            (key, interpolate_value(name, key, val, section, mapping))
            for key, val in (config_dict or {}).items()
        )

    return dict(
        (service_name, process_service(service_name, service_dict, mapping))
        for (service_name, service_dict) in config.items()
        (name, process_item(name, config_dict))
        for name, config_dict in config.items()
    )


def process_service(service_name, service_dict, mapping):
    return dict(
        (key, interpolate_value(service_name, key, val, mapping))
        for (key, val) in service_dict.items()
    )


def interpolate_value(service_name, config_key, value, mapping):
def interpolate_value(name, config_key, value, section, mapping):
    try:
        return recursive_interpolate(value, mapping)
    except InvalidInterpolation as e:
        raise ConfigurationError(
            'Invalid interpolation format for "{config_key}" option '
            'in service "{service_name}": "{string}"'
            .format(
            'in {section} "{name}": "{string}"'.format(
                config_key=config_key,
                service_name=service_name,
                string=e.string,
                )
            )
                name=name,
                section=section,
                string=e.string))


def recursive_interpolate(obj, mapping):
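The interpolation machinery above ultimately leans on `string.Template` substitution against `os.environ`, with unset variables resolving to an empty string. A rough sketch of that behavior (this `BlankDefaultDict` is paraphrased; the real one lives in interpolation.py and also logs a warning on missing keys):

```python
from string import Template

class BlankDefaultDict(dict):
    # Missing variables resolve to '' instead of raising KeyError.
    def __missing__(self, key):
        return ''

mapping = BlankDefaultDict({'TAG': 'v1'})
result = Template('repo/app:${TAG}${MISSING}').substitute(mapping)
print(result)  # repo/app:v1
```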
30
compose/config/serialize.py
Normal file
@ -0,0 +1,30 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import six
import yaml

from compose.config import types


def serialize_config_type(dumper, data):
    representer = dumper.represent_str if six.PY3 else dumper.represent_unicode
    return representer(data.repr())


yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type)
yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type)


def serialize_config(config):
    output = {
        'version': config.version,
        'services': {service.pop('name'): service for service in config.services},
        'networks': config.networks,
        'volumes': config.volumes,
    }
    return yaml.safe_dump(
        output,
        default_flow_style=False,
        indent=2,
        width=80)
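One detail of `serialize_config` worth calling out is the services comprehension: each service dict carries its own `name` key, which becomes the mapping key and is popped out of the value. In isolation (the service names here are just examples):

```python
services = [
    {'name': 'web', 'image': 'repo/web'},
    {'name': 'db', 'image': 'postgres'},
]
# pop() both extracts the key and removes 'name' from the serialized value.
by_name = {service.pop('name'): service for service in services}
print(by_name)  # {'web': {'image': 'repo/web'}, 'db': {'image': 'postgres'}}
```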
@ -1,30 +0,0 @@
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "id": "service_schema.json",

    "type": "object",

    "allOf": [
        {"$ref": "fields_schema.json#/definitions/service"},
        {"$ref": "#/definitions/constraints"}
    ],

    "definitions": {
        "constraints": {
            "id": "#/definitions/constraints",
            "anyOf": [
                {
                    "required": ["build"],
                    "not": {"required": ["image"]}
                },
                {
                    "required": ["image"],
                    "not": {"anyOf": [
                        {"required": ["build"]},
                        {"required": ["dockerfile"]}
                    ]}
                }
            ]
        }
    }
}
@ -1,15 +1,13 @@
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "id": "service_schema_v1.json",

    "type": "object",
    "id": "fields_schema.json",

    "patternProperties": {
        "^[a-zA-Z0-9._-]+$": {
            "$ref": "#/definitions/service"
        }
    },
    "additionalProperties": false,
    "allOf": [
        {"$ref": "#/definitions/service"},
        {"$ref": "#/definitions/constraints"}
    ],

    "definitions": {
        "service": {
@ -29,13 +27,19 @@
            },
            "container_name": {"type": "string"},
            "cpu_shares": {"type": ["number", "string"]},
            "cpu_quota": {"type": ["number", "string"]},
            "cpuset": {"type": "string"},
            "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
            "dns": {"$ref": "#/definitions/string_or_list"},
            "dns_search": {"$ref": "#/definitions/string_or_list"},
            "dockerfile": {"type": "string"},
            "domainname": {"type": "string"},
            "entrypoint": {"$ref": "#/definitions/string_or_list"},
            "entrypoint": {
                "oneOf": [
                    {"type": "string"},
                    {"type": "array", "items": {"type": "string"}}
                ]
            },
            "env_file": {"$ref": "#/definitions/string_or_list"},
            "environment": {"$ref": "#/definitions/list_or_dict"},

@ -73,10 +77,8 @@
            "ipc": {"type": "string"},
            "labels": {"$ref": "#/definitions/list_or_dict"},
            "links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},

            "log_driver": {"type": "string"},
            "log_opt": {"type": "object"},

            "mac_address": {"type": "string"},
            "mem_limit": {"type": ["number", "string"]},
            "memswap_limit": {"type": ["number", "string"]},
@ -97,6 +99,7 @@
            "restart": {"type": "string"},
            "security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
            "stdin_open": {"type": "boolean"},
            "stop_signal": {"type": "string"},
            "tty": {"type": "boolean"},
            "ulimits": {
                "type": "object",
@ -157,6 +160,22 @@
            },
            {"type": "array", "items": {"type": "string"}, "uniqueItems": true}
            ]
        },
        "constraints": {
            "id": "#/definitions/constraints",
            "anyOf": [
                {
                    "required": ["build"],
                    "not": {"required": ["image"]}
                },
                {
                    "required": ["image"],
                    "not": {"anyOf": [
                        {"required": ["build"]},
                        {"required": ["dockerfile"]}
                    ]}
                }
            ]
        }
    }
}
201
compose/config/service_schema_v2.0.json
Normal file
@ -0,0 +1,201 @@
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "id": "service_schema_v2.0.json",

    "type": "object",

    "allOf": [
        {"$ref": "#/definitions/service"},
        {"$ref": "#/definitions/constraints"}
    ],

    "definitions": {
        "service": {
            "id": "#/definitions/service",
            "type": "object",

            "properties": {
                "build": {
                    "oneOf": [
                        {"type": "string"},
                        {
                            "type": "object",
                            "properties": {
                                "context": {"type": "string"},
                                "dockerfile": {"type": "string"},
                                "args": {"$ref": "#/definitions/list_or_dict"}
                            },
                            "additionalProperties": false
                        }
                    ]
                },
                "cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "cgroup_parent": {"type": "string"},
                "command": {
                    "oneOf": [
                        {"type": "string"},
                        {"type": "array", "items": {"type": "string"}}
                    ]
                },
                "container_name": {"type": "string"},
                "cpu_shares": {"type": ["number", "string"]},
                "cpu_quota": {"type": ["number", "string"]},
                "cpuset": {"type": "string"},
                "depends_on": {"$ref": "#/definitions/list_of_strings"},
                "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "dns": {"$ref": "#/definitions/string_or_list"},
                "dns_search": {"$ref": "#/definitions/string_or_list"},
                "domainname": {"type": "string"},
                "entrypoint": {
                    "oneOf": [
                        {"type": "string"},
                        {"type": "array", "items": {"type": "string"}}
                    ]
                },
                "env_file": {"$ref": "#/definitions/string_or_list"},
                "environment": {"$ref": "#/definitions/list_or_dict"},

                "expose": {
                    "type": "array",
                    "items": {
                        "type": ["string", "number"],
                        "format": "expose"
                    },
                    "uniqueItems": true
                },

                "extends": {
                    "oneOf": [
                        {
                            "type": "string"
                        },
                        {
                            "type": "object",

                            "properties": {
                                "service": {"type": "string"},
                                "file": {"type": "string"}
                            },
                            "required": ["service"],
                            "additionalProperties": false
                        }
                    ]
                },

                "external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "extra_hosts": {"$ref": "#/definitions/list_or_dict"},
                "hostname": {"type": "string"},
                "image": {"type": "string"},
                "ipc": {"type": "string"},
                "labels": {"$ref": "#/definitions/list_or_dict"},
                "links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},

                "logging": {
                    "type": "object",

                    "properties": {
                        "driver": {"type": "string"},
                        "options": {"type": "object"}
                    },
                    "additionalProperties": false
                },

                "mac_address": {"type": "string"},
                "mem_limit": {"type": ["number", "string"]},
                "memswap_limit": {"type": ["number", "string"]},
                "network_mode": {"type": "string"},

                "networks": {
                    "type": "array",
                    "items": {"type": "string"},
                    "uniqueItems": true
                },

                "pid": {"type": ["string", "null"]},

                "ports": {
                    "type": "array",
                    "items": {
                        "type": ["string", "number"],
                        "format": "ports"
                    },
                    "uniqueItems": true
                },

                "privileged": {"type": "boolean"},
                "read_only": {"type": "boolean"},
                "restart": {"type": "string"},
                "security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "stdin_open": {"type": "boolean"},
                "stop_signal": {"type": "string"},
                "tty": {"type": "boolean"},
                "ulimits": {
                    "type": "object",
                    "patternProperties": {
                        "^[a-z]+$": {
                            "oneOf": [
                                {"type": "integer"},
                                {
                                    "type":"object",
                                    "properties": {
                                        "hard": {"type": "integer"},
                                        "soft": {"type": "integer"}
                                    },
                                    "required": ["soft", "hard"],
                                    "additionalProperties": false
                                }
                            ]
                        }
                    }
                },
                "user": {"type": "string"},
                "volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "volume_driver": {"type": "string"},
                "volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "working_dir": {"type": "string"}
            },

            "dependencies": {
                "memswap_limit": ["mem_limit"]
            },
            "additionalProperties": false
        },

        "string_or_list": {
            "oneOf": [
                {"type": "string"},
                {"$ref": "#/definitions/list_of_strings"}
            ]
        },

        "list_of_strings": {
            "type": "array",
            "items": {"type": "string"},
            "uniqueItems": true
        },

        "list_or_dict": {
            "oneOf": [
                {
                    "type": "object",
                    "patternProperties": {
                        ".+": {
                            "type": ["string", "number", "boolean", "null"],
                            "format": "bool-value-in-mapping"
                        }
                    },
                    "additionalProperties": false
                },
                {"type": "array", "items": {"type": "string"}, "uniqueItems": true}
            ]
        },
        "constraints": {
            "id": "#/definitions/constraints",
            "anyOf": [
                {"required": ["build"]},
                {"required": ["image"]}
            ]
        }
    }
}
@ -1,14 +1,25 @@
from __future__ import absolute_import
from __future__ import unicode_literals

from compose.config.errors import DependencyError


def get_service_name_from_net(net_config):
    if not net_config:
def get_service_name_from_network_mode(network_mode):
    return get_source_name_from_network_mode(network_mode, 'service')


def get_container_name_from_network_mode(network_mode):
    return get_source_name_from_network_mode(network_mode, 'container')


def get_source_name_from_network_mode(network_mode, source_type):
    if not network_mode:
        return

    if not net_config.startswith('container:'):
    if not network_mode.startswith(source_type+':'):
        return

    _, net_name = net_config.split(':', 1)
    _, net_name = network_mode.split(':', 1)
    return net_name


@ -30,7 +41,8 @@ def sort_service_dicts(services):
        service for service in services
        if (name in get_service_names(service.get('links', [])) or
            name in get_service_names_from_volumes_from(service.get('volumes_from', [])) or
            name == get_service_name_from_net(service.get('net')))
            name == get_service_name_from_network_mode(service.get('network_mode')) or
            name in service.get('depends_on', []))
    ]

    def visit(n):
@ -39,8 +51,10 @@ def sort_service_dicts(services):
            raise DependencyError('A service can not link to itself: %s' % n['name'])
        if n['name'] in n.get('volumes_from', []):
            raise DependencyError('A service can not mount itself as volume: %s' % n['name'])
        else:
            raise DependencyError('Circular import between %s' % ' and '.join(temporary_marked))
        if n['name'] in n.get('depends_on', []):
            raise DependencyError('A service can not depend on itself: %s' % n['name'])
        raise DependencyError('Circular dependency between %s' % ' and '.join(temporary_marked))

        if n in unmarked:
            temporary_marked.add(n['name'])
            for m in get_service_dependents(n, services):
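The generalized `get_source_name_from_network_mode` helper above can be tried on its own; `web` and `db` are just sample names:

```python
def get_source_name_from_network_mode(network_mode, source_type):
    # Extract the dependency name from "service:web" or "container:db";
    # anything else (e.g. "bridge", "host") yields None.
    if not network_mode:
        return None
    if not network_mode.startswith(source_type + ':'):
        return None
    _, name = network_mode.split(':', 1)
    return name

print(get_source_name_from_network_mode('service:web', 'service'))     # web
print(get_source_name_from_network_mode('container:db', 'container'))  # db
print(get_source_name_from_network_mode('bridge', 'service'))          # None
```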
@ -7,14 +7,21 @@ from __future__ import unicode_literals
import os
from collections import namedtuple

from compose.config.config import V1
from compose.config.errors import ConfigurationError
from compose.const import IS_WINDOWS_PLATFORM


class VolumeFromSpec(namedtuple('_VolumeFromSpec', 'source mode')):
class VolumeFromSpec(namedtuple('_VolumeFromSpec', 'source mode type')):

    # TODO: drop service_names arg when v1 is removed
    @classmethod
    def parse(cls, volume_from_config, service_names, version):
        func = cls.parse_v1 if version == V1 else cls.parse_v2
        return func(service_names, volume_from_config)

    @classmethod
    def parse(cls, volume_from_config):
    def parse_v1(cls, service_names, volume_from_config):
        parts = volume_from_config.split(':')
        if len(parts) > 2:
            raise ConfigurationError(
@ -27,7 +34,42 @@ class VolumeFromSpec(namedtuple('_VolumeFromSpec', 'source mode')):
        else:
            source, mode = parts

        return cls(source, mode)
        type = 'service' if source in service_names else 'container'
        return cls(source, mode, type)

    @classmethod
    def parse_v2(cls, service_names, volume_from_config):
        parts = volume_from_config.split(':')
        if len(parts) > 3:
            raise ConfigurationError(
                "volume_from {} has incorrect format, should be one of "
                "'<service name>[:<mode>]' or "
                "'container:<container name>[:<mode>]'".format(volume_from_config))

        if len(parts) == 1:
            source = parts[0]
            return cls(source, 'rw', 'service')

        if len(parts) == 2:
            if parts[0] == 'container':
                type, source = parts
                return cls(source, 'rw', type)

            source, mode = parts
            return cls(source, mode, 'service')

        if len(parts) == 3:
            type, source, mode = parts
            if type not in ('service', 'container'):
                raise ConfigurationError(
                    "Unknown volumes_from type '{}' in '{}'".format(
                        type,
                        volume_from_config))

        return cls(source, mode, type)

    def repr(self):
        return '{v.type}:{v.source}:{v.mode}'.format(v=self)


def parse_restart_spec(restart_config):
@ -58,7 +100,7 @@ def parse_extra_hosts(extra_hosts_config):
    extra_hosts_dict = {}
    for extra_hosts_line in extra_hosts_config:
        # TODO: validate string contains ':' ?
        host, ip = extra_hosts_line.split(':')
        host, ip = extra_hosts_line.split(':', 1)
        extra_hosts_dict[host.strip()] = ip.strip()
    return extra_hosts_dict

@ -118,3 +160,30 @@ class VolumeSpec(namedtuple('_VolumeSpec', 'external internal mode')):
            mode = parts[2]

        return cls(external, internal, mode)

    def repr(self):
        external = self.external + ':' if self.external else ''
        return '{ext}{v.internal}:{v.mode}'.format(ext=external, v=self)

    @property
    def is_named_volume(self):
        return self.external and not self.external.startswith(('.', '/', '~'))


class ServiceLink(namedtuple('_ServiceLink', 'target alias')):

    @classmethod
    def parse(cls, link_spec):
        target, _, alias = link_spec.partition(':')
        if not alias:
            alias = target
        return cls(target, alias)

    def repr(self):
        if self.target == self.alias:
            return self.target
        return '{s.target}:{s.alias}'.format(s=self)

    @property
    def merge_field(self):
        return self.alias
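`ServiceLink` above shows the namedtuple-with-parse pattern used throughout types.py. Extracted on its own (minus the `merge_field` property), it runs standalone:

```python
from collections import namedtuple

class ServiceLink(namedtuple('_ServiceLink', 'target alias')):

    @classmethod
    def parse(cls, link_spec):
        # "db:database" -> target "db", alias "database"; bare "db" aliases itself.
        target, _, alias = link_spec.partition(':')
        if not alias:
            alias = target
        return cls(target, alias)

    def repr(self):
        # Collapse back to the short form when the alias adds nothing.
        if self.target == self.alias:
            return self.target
        return '{s.target}:{s.alias}'.format(s=self)

print(ServiceLink.parse('db:database').repr())  # db:database
print(ServiceLink.parse('db').repr())           # db
```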
@ -1,3 +1,6 @@
|
||||
from __future__ import absolute_import
|
||||
from __future__ import unicode_literals
|
||||
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
@ -12,6 +15,8 @@ from jsonschema import RefResolver
|
||||
from jsonschema import ValidationError
|
||||
|
||||
from .errors import ConfigurationError
|
||||
from .errors import VERSION_EXPLANATION
|
||||
from .sort_services import get_service_name_from_network_mode
|
||||
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
@ -35,7 +40,7 @@ DOCKER_CONFIG_HINTS = {
|
||||
|
||||
|
||||
VALID_NAME_CHARS = '[a-zA-Z0-9\._\-]'
|
||||
VALID_EXPOSE_FORMAT = r'^\d+(\/[a-zA-Z]+)?$'
|
||||
VALID_EXPOSE_FORMAT = r'^\d+(\-\d+)?(\/[a-zA-Z]+)?$'
|
||||
|
||||
|
||||
@FormatChecker.cls_checks(format="ports", raises=ValidationError)
|
||||
@ -74,18 +79,30 @@ def format_boolean_in_environment(instance):
|
||||
return True
|
||||
|
||||
|
||||
def validate_top_level_service_objects(config_file):
|
||||
def match_named_volumes(service_dict, project_volumes):
|
||||
service_volumes = service_dict.get('volumes', [])
|
||||
for volume_spec in service_volumes:
|
||||
if volume_spec.is_named_volume and volume_spec.external not in project_volumes:
|
||||
raise ConfigurationError(
|
||||
'Named volume "{0}" is used in service "{1}" but no'
|
||||
' declaration was found in the volumes section.'.format(
|
||||
volume_spec.repr(), service_dict.get('name')
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def validate_top_level_service_objects(filename, service_dicts):
|
||||
"""Perform some high level validation of the service name and value.
|
||||
|
||||
This validation must happen before interpolation, which must happen
|
||||
before the rest of validation, which is why it's separate from the
|
||||
rest of the service validation.
|
||||
"""
|
||||
for service_name, service_dict in config_file.config.items():
|
||||
for service_name, service_dict in service_dicts.items():
|
||||
if not isinstance(service_name, six.string_types):
|
||||
raise ConfigurationError(
|
||||
"In file '{}' service name: {} needs to be a string, eg '{}'".format(
|
||||
config_file.filename,
|
||||
filename,
|
||||
service_name,
|
||||
service_name))
|
||||
|
||||
@ -94,18 +111,29 @@ def validate_top_level_service_objects(config_file):
|
||||
"In file '{}' service '{}' doesn\'t have any configuration options. "
|
||||
"All top level keys in your docker-compose.yml must map "
|
||||
"to a dictionary of configuration options.".format(
|
||||
config_file.filename,
|
||||
service_name))
|
||||
filename, service_name
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def validate_top_level_object(config_file):
|
||||
if not isinstance(config_file.config, dict):
|
||||
raise ConfigurationError(
|
||||
"Top level object in '{}' needs to be an object not '{}'. Check "
|
||||
"that you have defined a service at the top level.".format(
|
||||
"Top level object in '{}' needs to be an object not '{}'.".format(
|
||||
config_file.filename,
|
||||
type(config_file.config)))
|
||||
validate_top_level_service_objects(config_file)
|
||||
|
||||
|


def validate_ulimits(service_config):
    ulimit_config = service_config.config.get('ulimits', {})
    for limit_name, soft_hard_values in six.iteritems(ulimit_config):
        if isinstance(soft_hard_values, dict):
            if not soft_hard_values['soft'] <= soft_hard_values['hard']:
                raise ConfigurationError(
                    "Service '{s.name}' has invalid ulimit '{ulimit}'. "
                    "'soft' value can not be greater than 'hard' value ".format(
                        s=service_config,
                        ulimit=ulimit_config))
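Stripped of the surrounding config plumbing, the soft/hard ulimit rule enforced above boils down to a single comparison. A minimal standalone sketch (the helper name is ours, not Compose's):

```python
def check_ulimit(limit_name, value):
    """Return an error string for an invalid soft/hard ulimit pair, else None.

    Mirrors the check above: when a ulimit is given in mapping form,
    its 'soft' value must not exceed its 'hard' value. A plain integer
    (single-value form) is always accepted.
    """
    if isinstance(value, dict):
        if not value['soft'] <= value['hard']:
            return ("invalid ulimit '{}': 'soft' value can not be greater "
                    "than 'hard' value".format(limit_name))
    return None
```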


def validate_extends_file_path(service_name, extends_options, filename):
@ -121,8 +149,34 @@ def validate_extends_file_path(service_name, extends_options, filename):
    )


def validate_network_mode(service_config, service_names):
    network_mode = service_config.config.get('network_mode')
    if not network_mode:
        return

    if 'networks' in service_config.config:
        raise ConfigurationError("'network_mode' and 'networks' cannot be combined")

    dependency = get_service_name_from_network_mode(network_mode)
    if not dependency:
        return

    if dependency not in service_names:
        raise ConfigurationError(
            "Service '{s.name}' uses the network stack of service '{dep}' which "
            "is undefined.".format(s=service_config, dep=dependency))


def validate_depends_on(service_config, service_names):
    for dependency in service_config.config.get('depends_on', []):
        if dependency not in service_names:
            raise ConfigurationError(
                "Service '{s.name}' depends on service '{dep}' which is "
                "undefined.".format(s=service_config, dep=dependency))
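The `depends_on` check above is a simple membership test against the set of declared service names. A self-contained sketch of the same logic, using a plain dict in place of the service config object (the helper name is hypothetical):

```python
def undefined_dependencies(service_dict, service_names):
    """Collect 'depends_on' entries that do not name a declared service.

    Compose raises a ConfigurationError for each such entry; here we just
    return them so the check is easy to exercise in isolation.
    """
    return [
        dep for dep in service_dict.get('depends_on', [])
        if dep not in service_names
    ]
```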


def get_unsupported_config_msg(path, error_key):
    msg = "Unsupported config option for {}: '{}'".format(path_string(path), error_key)
    if error_key in DOCKER_CONFIG_HINTS:
        msg += " (did you mean '{}'?)".format(DOCKER_CONFIG_HINTS[error_key])
    return msg

@ -134,73 +188,95 @@ def anglicize_validator(validator):
    return 'a ' + validator


def is_service_dict_schema(schema_id):
    return schema_id == 'fields_schema_v1.json' or schema_id == '#/properties/services'


def handle_error_for_schema_with_id(error, path):
    schema_id = error.schema['id']

    if is_service_dict_schema(schema_id) and error.validator == 'additionalProperties':
        return "Invalid service name '{}' - only {} characters are allowed".format(
            # The service_name is the key to the json object
            list(error.instance)[0],
            VALID_NAME_CHARS)

    if schema_id == '#/definitions/constraints':
        # Build context could be in 'build' or 'build.context' and dockerfile
        # could be in 'dockerfile' or 'build.dockerfile'
        context = False
        dockerfile = 'dockerfile' in error.instance
        if 'build' in error.instance:
            if isinstance(error.instance['build'], six.string_types):
                context = True
            else:
                context = 'context' in error.instance['build']
                dockerfile = dockerfile or 'dockerfile' in error.instance['build']

        # TODO: only applies to v1
        if 'image' in error.instance and context:
            return (
                "{} has both an image and build path specified. "
                "A service can either be built to image or use an existing "
                "image, not both.".format(path_string(path)))
        if 'image' not in error.instance and not context:
            return (
                "{} has neither an image nor a build path specified. "
                "At least one must be provided.".format(path_string(path)))
        # TODO: only applies to v1
        if 'image' in error.instance and dockerfile:
            return (
                "{} has both an image and alternate Dockerfile. "
                "A service can either be built to image or use an existing "
                "image, not both.".format(path_string(path)))

    if error.validator == 'additionalProperties':
        if schema_id == '#/definitions/service':
            invalid_config_key = parse_key_from_error_msg(error)
            return get_unsupported_config_msg(path, invalid_config_key)

        if not error.path:
            return '{}\n{}'.format(error.message, VERSION_EXPLANATION)


def handle_generic_service_error(error, path):
    msg_format = None
    error_msg = error.message

    if error.validator == 'oneOf':
        msg_format = "{path} {msg}"
        config_key, error_msg = _parse_oneof_validator(error)
        if config_key:
            path.append(config_key)

    elif error.validator == 'type':
        msg_format = "{path} contains an invalid type, it should be {msg}"
        error_msg = _parse_valid_types_from_validator(error.validator_value)

    # TODO: no test case for this branch, there are no config options
    # which exercise this branch
    elif error.validator == 'required':
        msg_format = "{path} is invalid, {msg}"

    elif error.validator == 'dependencies':
        config_key = list(error.validator_value.keys())[0]
        required_keys = ",".join(error.validator_value[config_key])

        msg_format = "{path} is invalid: {msg}"
        path.append(config_key)
        error_msg = "when defining '{}' you must set '{}' as well".format(
            config_key,
            required_keys)

    elif error.cause:
        error_msg = six.text_type(error.cause)
        msg_format = "{path} is invalid: {msg}"

    elif error.path:
        msg_format = "{path} value {msg}"

    if msg_format:
        return msg_format.format(path=path_string(path), msg=error_msg)

    return error.message

@ -209,6 +285,10 @@ def parse_key_from_error_msg(error):
    return error.message.split("'")[1]


def path_string(path):
    return ".".join(c for c in path if isinstance(c, six.string_types))
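`path_string` turns a jsonschema error path (which mixes service/option names with integer list indices) into a dotted key path, dropping the indices. A Python 3 sketch of the same behavior, with `str` standing in for `six.string_types`:

```python
def path_string(path):
    """Join the string components of an error path, skipping list indices.

    e.g. ['web', 'ports', 0] describes the first entry of web's ports list,
    and renders as 'web.ports' in error messages.
    """
    return ".".join(c for c in path if isinstance(c, str))
```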


def _parse_valid_types_from_validator(validator):
    """A validator value can be either an array of valid types or a string of
    a valid type. Parse the valid types and prefix with the correct article.
@ -234,74 +314,76 @@ def _parse_oneof_validator(error):
    for context in error.context:

        if context.validator == 'required':
            return (None, context.message)

        if context.validator == 'additionalProperties':
            invalid_config_key = parse_key_from_error_msg(context)
            return (None, "contains unsupported option: '{}'".format(invalid_config_key))

        if context.path:
            return (
                path_string(context.path),
                "contains {}, which is an invalid type, it should be {}".format(
                    json.dumps(context.instance),
                    _parse_valid_types_from_validator(context.validator_value)),
            )

        if context.validator == 'uniqueItems':
            return (
                None,
                "contains non unique items, please remove duplicates from {}".format(
                    context.instance),
            )

        if context.validator == 'type':
            types.append(context.validator_value)

    valid_types = _parse_valid_types_from_validator(types)
    return (None, "contains an invalid type, it should be {}".format(valid_types))


def process_errors(errors, path_prefix=None):
    """jsonschema gives us an error tree full of information to explain what has
    gone wrong. Process each error and pull out relevant information and re-write
    helpful error messages that are relevant.
    """
    path_prefix = path_prefix or []

    def format_error_message(error):
        path = path_prefix + list(error.path)

        if 'id' in error.schema:
            error_msg = handle_error_for_schema_with_id(error, path)
            if error_msg:
                return error_msg

        return handle_generic_service_error(error, path)

    return '\n'.join(format_error_message(error) for error in errors)


def validate_against_fields_schema(config_file):
    schema_filename = "fields_schema_v{0}.json".format(config_file.version)
    _validate_against_schema(
        config_file.config,
        schema_filename,
        format_checker=["ports", "expose", "bool-value-in-mapping"],
        filename=config_file.filename)


def validate_against_service_schema(config, service_name, version):
    _validate_against_schema(
        config,
        "service_schema_v{0}.json".format(version),
        format_checker=["ports"],
        path_prefix=[service_name])


def _validate_against_schema(
        config,
        schema_filename,
        format_checker=(),
        path_prefix=None,
        filename=None):
    config_source_dir = os.path.dirname(os.path.abspath(__file__))

@ -327,7 +409,7 @@ def _validate_against_schema(
    if not errors:
        return

    error_msg = process_errors(errors, path_prefix=path_prefix)
    file_msg = " in file '{}'".format(filename) if filename else ''
    raise ConfigurationError("Validation failed{}, reason(s):\n{}".format(
        file_msg,

@ -1,8 +1,12 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import os
import sys

DEFAULT_TIMEOUT = 10
HTTP_TIMEOUT = int(os.environ.get('COMPOSE_HTTP_TIMEOUT', os.environ.get('DOCKER_CLIENT_TIMEOUT', 60)))
IMAGE_EVENTS = ['delete', 'import', 'pull', 'push', 'tag', 'untag']
IS_WINDOWS_PLATFORM = (sys.platform == "win32")
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
@ -10,3 +14,11 @@ LABEL_PROJECT = 'com.docker.compose.project'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_VERSION = 'com.docker.compose.version'
LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'

COMPOSEFILE_V1 = '1'
COMPOSEFILE_V2_0 = '2.0'

API_VERSIONS = {
    COMPOSEFILE_V1: '1.21',
    COMPOSEFILE_V2_0: '1.22',
}
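The new `API_VERSIONS` table pins each Compose file format to the minimum Docker API version it requires. A tiny sketch of how such a lookup would be used (the lookup helper is ours; the constants are copied from the diff above):

```python
COMPOSEFILE_V1 = '1'
COMPOSEFILE_V2_0 = '2.0'

API_VERSIONS = {
    COMPOSEFILE_V1: '1.21',
    COMPOSEFILE_V2_0: '1.22',
}


def api_version_for(compose_file_version):
    # Resolve the Docker API version needed for a given file format.
    return API_VERSIONS[compose_file_version]
```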
@ -107,6 +107,10 @@ class Container(object):
    def labels(self):
        return self.get('Config.Labels') or {}

    @property
    def stop_signal(self):
        return self.get('Config.StopSignal')

    @property
    def log_config(self):
        return self.get('HostConfig.LogConfig') or None
@ -115,6 +119,8 @@ class Container(object):
    def human_readable_state(self):
        if self.is_paused:
            return 'Paused'
        if self.is_restarting:
            return 'Restarting'
        if self.is_running:
            return 'Ghost' if self.get('State.Ghost') else 'Up'
        else:
@ -130,10 +136,18 @@ class Container(object):
    def environment(self):
        return dict(var.split("=", 1) for var in self.get('Config.Env') or [])

    @property
    def exit_code(self):
        return self.get('State.ExitCode')

    @property
    def is_running(self):
        return self.get('State.Running')

    @property
    def is_restarting(self):
        return self.get('State.Restarting')

    @property
    def is_paused(self):
        return self.get('State.Paused')
@ -171,6 +185,12 @@ class Container(object):
        port = self.ports.get("%s/%s" % (port, protocol))
        return "{HostIp}:{HostPort}".format(**port[0]) if port else None

    def get_mount(self, mount_dest):
        for mount in self.get('Mounts'):
            if mount['Destination'] == mount_dest:
                return mount
        return None

    def start(self, **options):
        return self.client.start(self.id, **options)

@ -216,16 +236,6 @@ class Container(object):
        self.has_been_inspected = True
        return self.dictionary

    # TODO: only used by tests, move to test module
    def links(self):
        links = []
        for container in self.client.containers():
            for name in container['Names']:
                bits = name.split('/')
                if len(bits) > 2 and bits[1] == self.name:
                    links.append(bits[2])
        return links

    def attach(self, *args, **kwargs):
        return self.client.attach(self.id, *args, **kwargs)
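`Container.environment` above converts docker's `Config.Env` list (entries of the form `NAME=value`) into a dict. The key detail is splitting on the *first* `=` only, so values that themselves contain `=` survive. A standalone sketch:

```python
def parse_environment(env_list):
    """Turn a Config.Env-style list into a dict, as Container.environment does.

    split("=", 1) keeps everything after the first '=' intact, so
    'OPTS=a=b' parses as {'OPTS': 'a=b'}. A missing list yields {}.
    """
    return dict(var.split("=", 1) for var in env_list or [])
```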
@ -1,182 +0,0 @@
import logging
import re

from .const import LABEL_VERSION
from .container import Container
from .container import get_container_name


log = logging.getLogger(__name__)


# TODO: remove this section when migrate_project_to_labels is removed
NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')

ERROR_MESSAGE_FORMAT = """
Compose found the following containers without labels:

{names_list}

As of Compose 1.3.0, containers are identified with labels instead of naming
convention. If you want to continue using these containers, run:

    $ docker-compose migrate-to-labels

Alternatively, remove them:

    $ docker rm -f {rm_args}
"""

ONE_OFF_ADDENDUM_FORMAT = """
You should also remove your one-off containers:

    $ docker rm -f {rm_args}
"""

ONE_OFF_ERROR_MESSAGE_FORMAT = """
Compose found the following containers without labels:

{names_list}

As of Compose 1.3.0, containers are identified with labels instead of naming convention.

Remove them before continuing:

    $ docker rm -f {rm_args}
"""


def check_for_legacy_containers(
        client,
        project,
        services,
        allow_one_off=True):
    """Check if there are containers named using the old naming convention
    and warn the user that those containers may need to be migrated to
    using labels, so that compose can find them.
    """
    containers = get_legacy_containers(client, project, services, one_off=False)

    if containers:
        one_off_containers = get_legacy_containers(client, project, services, one_off=True)

        raise LegacyContainersError(
            [c.name for c in containers],
            [c.name for c in one_off_containers],
        )

    if not allow_one_off:
        one_off_containers = get_legacy_containers(client, project, services, one_off=True)

        if one_off_containers:
            raise LegacyOneOffContainersError(
                [c.name for c in one_off_containers],
            )


class LegacyError(Exception):
    def __unicode__(self):
        return self.msg

    __str__ = __unicode__


class LegacyContainersError(LegacyError):
    def __init__(self, names, one_off_names):
        self.names = names
        self.one_off_names = one_off_names

        self.msg = ERROR_MESSAGE_FORMAT.format(
            names_list="\n".join("    {}".format(name) for name in names),
            rm_args=" ".join(names),
        )

        if one_off_names:
            self.msg += ONE_OFF_ADDENDUM_FORMAT.format(rm_args=" ".join(one_off_names))


class LegacyOneOffContainersError(LegacyError):
    def __init__(self, one_off_names):
        self.one_off_names = one_off_names

        self.msg = ONE_OFF_ERROR_MESSAGE_FORMAT.format(
            names_list="\n".join("    {}".format(name) for name in one_off_names),
            rm_args=" ".join(one_off_names),
        )


def add_labels(project, container):
    project_name, service_name, one_off, number = NAME_RE.match(container.name).groups()
    if project_name != project.name or service_name not in project.service_names:
        return
    service = project.get_service(service_name)
    service.recreate_container(container)


def migrate_project_to_labels(project):
    log.info("Running migration to labels for project %s", project.name)

    containers = get_legacy_containers(
        project.client,
        project.name,
        project.service_names,
        one_off=False,
    )

    for container in containers:
        add_labels(project, container)


def get_legacy_containers(
        client,
        project,
        services,
        one_off=False):

    return list(_get_legacy_containers_iter(
        client,
        project,
        services,
        one_off=one_off,
    ))


def _get_legacy_containers_iter(
        client,
        project,
        services,
        one_off=False):

    containers = client.containers(all=True)

    for service in services:
        for container in containers:
            if LABEL_VERSION in (container.get('Labels') or {}):
                continue

            name = get_container_name(container)
            if has_container(project, service, name, one_off=one_off):
                yield Container.from_ps(client, container)


def has_container(project, service, name, one_off=False):
    if not name or not is_valid_name(name, one_off):
        return False
    container_project, container_service, _container_number = parse_name(name)
    return container_project == project and container_service == service


def is_valid_name(name, one_off=False):
    match = NAME_RE.match(name)
    if match is None:
        return False
    if one_off:
        return match.group(3) == 'run_'
    else:
        return match.group(3) is None


def parse_name(name):
    match = NAME_RE.match(name)
    (project, service_name, _, suffix) = match.groups()
    return (project, service_name, int(suffix))
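The deleted `legacy.py` identified pre-1.3 containers purely by name, via `NAME_RE`: `<project>_<service>_[run_]<number>`, where `run_` marks a one-off container. The regex and its parsing are easy to verify in isolation (the tuple-returning helper is our own wrapper around the pattern from the removed module):

```python
import re

# Pre-1.3.0 container naming convention: <project>_<service>_[run_]<number>
NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')


def parse_legacy_name(name):
    """Split a legacy container name into (project, service, one_off, number)."""
    match = NAME_RE.match(name)
    project, service, one_off, number = match.groups()
    return (project, service, one_off == 'run_', int(number))
```

Note the `[^_]+` groups mean project and service names containing underscores never match, which is one reason the naming convention was replaced by labels.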
179
compose/network.py
Normal file
@ -0,0 +1,179 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import logging

from docker.errors import NotFound
from docker.utils import create_ipam_config
from docker.utils import create_ipam_pool

from .config import ConfigurationError


log = logging.getLogger(__name__)


class Network(object):
    def __init__(self, client, project, name, driver=None, driver_opts=None,
                 ipam=None, external_name=None):
        self.client = client
        self.project = project
        self.name = name
        self.driver = driver
        self.driver_opts = driver_opts
        self.ipam = create_ipam_config_from_dict(ipam)
        self.external_name = external_name

    def ensure(self):
        if self.external_name:
            try:
                self.inspect()
                log.debug(
                    'Network {0} declared as external. No new '
                    'network will be created.'.format(self.name)
                )
            except NotFound:
                raise ConfigurationError(
                    'Network {name} declared as external, but could'
                    ' not be found. Please create the network manually'
                    ' using `{command} {name}` and try again.'.format(
                        name=self.external_name,
                        command='docker network create'
                    )
                )
            return

        try:
            data = self.inspect()
            if self.driver and data['Driver'] != self.driver:
                raise ConfigurationError(
                    'Network "{}" needs to be recreated - driver has changed'
                    .format(self.full_name))
            if data['Options'] != (self.driver_opts or {}):
                raise ConfigurationError(
                    'Network "{}" needs to be recreated - options have changed'
                    .format(self.full_name))
        except NotFound:
            driver_name = 'the default driver'
            if self.driver:
                driver_name = 'driver "{}"'.format(self.driver)

            log.info(
                'Creating network "{}" with {}'
                .format(self.full_name, driver_name)
            )

            self.client.create_network(
                name=self.full_name,
                driver=self.driver,
                options=self.driver_opts,
                ipam=self.ipam,
            )

    def remove(self):
        if self.external_name:
            log.info("Network %s is external, skipping", self.full_name)
            return

        log.info("Removing network {}".format(self.full_name))
        self.client.remove_network(self.full_name)

    def inspect(self):
        return self.client.inspect_network(self.full_name)

    @property
    def full_name(self):
        if self.external_name:
            return self.external_name
        return '{0}_{1}'.format(self.project, self.name)


def create_ipam_config_from_dict(ipam_dict):
    if not ipam_dict:
        return None

    return create_ipam_config(
        driver=ipam_dict.get('driver'),
        pool_configs=[
            create_ipam_pool(
                subnet=config.get('subnet'),
                iprange=config.get('ip_range'),
                gateway=config.get('gateway'),
                aux_addresses=config.get('aux_addresses'),
            )
            for config in ipam_dict.get('config', [])
        ],
    )


def build_networks(name, config_data, client):
    network_config = config_data.networks or {}
    networks = {
        network_name: Network(
            client=client, project=name, name=network_name,
            driver=data.get('driver'),
            driver_opts=data.get('driver_opts'),
            ipam=data.get('ipam'),
            external_name=data.get('external_name'),
        )
        for network_name, data in network_config.items()
    }

    if 'default' not in networks:
        networks['default'] = Network(client, name, 'default')

    return networks


class ProjectNetworks(object):

    def __init__(self, networks, use_networking):
        self.networks = networks or {}
        self.use_networking = use_networking

    @classmethod
    def from_services(cls, services, networks, use_networking):
        service_networks = {
            network: networks.get(network)
            for service in services
            for network in get_network_names_for_service(service)
        }
        unused = set(networks) - set(service_networks) - {'default'}
        if unused:
            log.warn(
                "Some networks were defined but are not used by any service: "
                "{}".format(", ".join(unused)))
        return cls(service_networks, use_networking)

    def remove(self):
        if not self.use_networking:
            return
        for network in self.networks.values():
            network.remove()

    def initialize(self):
        if not self.use_networking:
            return

        for network in self.networks.values():
            network.ensure()


def get_network_names_for_service(service_dict):
    if 'network_mode' in service_dict:
        return []
    return service_dict.get('networks', ['default'])


def get_networks(service_dict, network_definitions):
    networks = []
    for name in get_network_names_for_service(service_dict):
        network = network_definitions.get(name)
        if network:
            networks.append(network.full_name)
        else:
            raise ConfigurationError(
                'Service "{}" uses an undefined network "{}"'
                .format(service_dict['name'], name))

    return networks
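The network-resolution rule in the new `network.py` is worth restating: a service with `network_mode` attaches to no Compose-managed network, a service with an explicit `networks` list uses exactly those, and everything else falls back to `default`. A self-contained sketch of that helper, operating on plain dicts:

```python
def get_network_names_for_service(service_dict):
    """Resolve which Compose-managed networks a service should join.

    'network_mode' (e.g. 'host' or 'service:web') is mutually exclusive
    with Compose-managed networks, so it yields an empty list; otherwise
    the explicit 'networks' list is used, defaulting to ['default'].
    """
    if 'network_mode' in service_dict:
        return []
    return service_dict.get('networks', ['default'])
```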
135
compose/parallel.py
Normal file
@ -0,0 +1,135 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import operator
import sys
from threading import Thread

from docker.errors import APIError
from six.moves.queue import Empty
from six.moves.queue import Queue

from compose.utils import get_output_stream


def perform_operation(func, arg, callback, index):
    try:
        callback((index, func(arg)))
    except Exception as e:
        callback((index, e))


def parallel_execute(objects, func, index_func, msg):
    """For a given list of objects, call the callable passing in the first
    object we give it.
    """
    objects = list(objects)
    stream = get_output_stream(sys.stderr)
    writer = ParallelStreamWriter(stream, msg)

    for obj in objects:
        writer.initialize(index_func(obj))

    q = Queue()

    # TODO: limit the number of threads #1828
    for obj in objects:
        t = Thread(
            target=perform_operation,
            args=(func, obj, q.put, index_func(obj)))
        t.daemon = True
        t.start()

    done = 0
    errors = {}

    while done < len(objects):
        try:
            msg_index, result = q.get(timeout=1)
        except Empty:
            continue

        if isinstance(result, APIError):
            errors[msg_index] = "error", result.explanation
            writer.write(msg_index, 'error')
        elif isinstance(result, Exception):
            errors[msg_index] = "unexpected_exception", result
        else:
            writer.write(msg_index, 'done')
        done += 1

    if not errors:
        return

    stream.write("\n")
    for msg_index, (result, error) in errors.items():
        stream.write("ERROR: for {}  {} \n".format(msg_index, error))
        if result == 'unexpected_exception':
            raise error


class ParallelStreamWriter(object):
    """Write out messages for operations happening in parallel.

    Each operation has its own line, and ANSI code characters are used
    to jump to the correct line, and write over the line.
    """

    def __init__(self, stream, msg):
        self.stream = stream
        self.msg = msg
        self.lines = []

    def initialize(self, obj_index):
        self.lines.append(obj_index)
        self.stream.write("{} {} ... \r\n".format(self.msg, obj_index))
        self.stream.flush()

    def write(self, obj_index, status):
        position = self.lines.index(obj_index)
        diff = len(self.lines) - position
        # move up
        self.stream.write("%c[%dA" % (27, diff))
        # erase
        self.stream.write("%c[2K\r" % 27)
        self.stream.write("{} {} ... {}\r".format(self.msg, obj_index, status))
        # move back down
        self.stream.write("%c[%dB" % (27, diff))
        self.stream.flush()


def parallel_operation(containers, operation, options, message):
    parallel_execute(
        containers,
        operator.methodcaller(operation, **options),
        operator.attrgetter('name'),
        message)


def parallel_remove(containers, options):
    stopped_containers = [c for c in containers if not c.is_running]
    parallel_operation(stopped_containers, 'remove', options, 'Removing')


def parallel_stop(containers, options):
    parallel_operation(containers, 'stop', options, 'Stopping')


def parallel_start(containers, options):
    parallel_operation(containers, 'start', options, 'Starting')


def parallel_pause(containers, options):
    parallel_operation(containers, 'pause', options, 'Pausing')


def parallel_unpause(containers, options):
    parallel_operation(containers, 'unpause', options, 'Unpausing')


def parallel_kill(containers, options):
    parallel_operation(containers, 'kill', options, 'Killing')


def parallel_restart(containers, options):
    parallel_operation(containers, 'restart', options, 'Restarting')
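The core of the new `parallel.py` is the thread-per-object pattern: spawn a daemon thread for each item, funnel `(index, result-or-exception)` tuples through a queue, and drain until every item has reported. A minimal sketch of that pattern without the stream-writer plumbing (function name and return shape are ours, not the module's):

```python
from queue import Queue  # six.moves.queue in the original
from threading import Thread


def run_in_parallel(objects, func, index_func):
    """Run func over objects concurrently; return {index: result or exception}.

    As in perform_operation above, exceptions are captured and put on the
    queue instead of killing the worker thread.
    """
    q = Queue()

    def worker(obj):
        try:
            q.put((index_func(obj), func(obj)))
        except Exception as e:  # surfaced to the caller for reporting
            q.put((index_func(obj), e))

    for obj in objects:
        t = Thread(target=worker, args=(obj,))
        t.daemon = True
        t.start()

    results = {}
    while len(results) < len(objects):
        index, result = q.get()
        results[index] = result
    return results
```

The real module pairs this with `operator.methodcaller(operation, **options)` and `operator.attrgetter('name')` so the same loop can drive `stop`, `kill`, `remove`, etc. across a container list.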
@ -1,3 +1,6 @@
from __future__ import absolute_import
from __future__ import unicode_literals

from compose import utils
@@ -1,26 +1,33 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import datetime
import logging
from functools import reduce

from docker.errors import APIError
from docker.errors import NotFound

from . import parallel
from .config import ConfigurationError
from .config.sort_services import get_service_name_from_net
from .config.config import V1
from .config.sort_services import get_container_name_from_network_mode
from .config.sort_services import get_service_name_from_network_mode
from .const import DEFAULT_TIMEOUT
from .const import IMAGE_EVENTS
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .container import Container
from .legacy import check_for_legacy_containers
from .service import ContainerNet
from .network import build_networks
from .network import get_networks
from .network import ProjectNetworks
from .service import ContainerNetworkMode
from .service import ConvergenceStrategy
from .service import Net
from .service import NetworkMode
from .service import Service
from .service import ServiceNet
from .utils import parallel_execute
from .service import ServiceNetworkMode
from .utils import microseconds_from_time_nano
from .volume import ProjectVolumes


log = logging.getLogger(__name__)
@@ -30,12 +37,12 @@ class Project(object):
    """
    A collection of services.
    """
    def __init__(self, name, services, client, use_networking=False, network_driver=None):
    def __init__(self, name, services, client, networks=None, volumes=None):
        self.name = name
        self.services = services
        self.client = client
        self.use_networking = use_networking
        self.network_driver = network_driver
        self.volumes = volumes or ProjectVolumes({})
        self.networks = networks or ProjectNetworks({}, False)

    def labels(self, one_off=False):
        return [
@@ -44,29 +51,50 @@
        ]

    @classmethod
    def from_dicts(cls, name, service_dicts, client, use_networking=False, network_driver=None):
    def from_config(cls, name, config_data, client):
        """
        Construct a ServiceCollection from a list of dicts representing services.
        Construct a Project from a config.Config object.
        """
        project = cls(name, [], client, use_networking=use_networking, network_driver=network_driver)
        use_networking = (config_data.version and config_data.version != V1)
        networks = build_networks(name, config_data, client)
        project_networks = ProjectNetworks.from_services(
            config_data.services,
            networks,
            use_networking)
        volumes = ProjectVolumes.from_config(name, config_data, client)
        project = cls(name, [], client, project_networks, volumes)

        if use_networking:
            remove_links(service_dicts)
        for service_dict in config_data.services:
            service_dict = dict(service_dict)
            if use_networking:
                service_networks = get_networks(service_dict, networks)
            else:
                service_networks = []

        for service_dict in service_dicts:
            service_dict.pop('networks', None)
            links = project.get_links(service_dict)
            volumes_from = project.get_volumes_from(service_dict)
            net = project.get_net(service_dict)
            network_mode = project.get_network_mode(service_dict, service_networks)
            volumes_from = get_volumes_from(project, service_dict)

            if config_data.version != V1:
                service_dict['volumes'] = [
                    volumes.namespace_spec(volume_spec)
                    for volume_spec in service_dict.get('volumes', [])
                ]

            project.services.append(
                Service(
                    service_dict.pop('name'),
                    client=client,
                    project=name,
                    use_networking=use_networking,
                    networks=service_networks,
                    links=links,
                    net=net,
                    network_mode=network_mode,
                    volumes_from=volumes_from,
                    **service_dict))
                    **service_dict)
            )

        return project

    @property
@@ -109,20 +137,24 @@
        Raises NoSuchService if any of the named services do not exist.
        """
        if service_names is None or len(service_names) == 0:
            return self.get_services(
                service_names=self.service_names,
                include_deps=include_deps
            )
        else:
            unsorted = [self.get_service(name) for name in service_names]
            services = [s for s in self.services if s in unsorted]
            service_names = self.service_names

            if include_deps:
                services = reduce(self._inject_deps, services, [])
        unsorted = [self.get_service(name) for name in service_names]
        services = [s for s in self.services if s in unsorted]

            uniques = []
            [uniques.append(s) for s in services if s not in uniques]
            return uniques
        if include_deps:
            services = reduce(self._inject_deps, services, [])

        uniques = []
        [uniques.append(s) for s in services if s not in uniques]

        return uniques

    def get_services_without_duplicate(self, service_names=None, include_deps=False):
        services = self.get_services(service_names, include_deps)
        for service in services:
            service.remove_duplicate_containers()
        return services

    def get_links(self, service_dict):
        links = []
@@ -141,93 +173,72 @@
            del service_dict['links']
        return links

    def get_volumes_from(self, service_dict):
        volumes_from = []
        if 'volumes_from' in service_dict:
            for volume_from_spec in service_dict.get('volumes_from', []):
                # Get service
                try:
                    service = self.get_service(volume_from_spec.source)
                    volume_from_spec = volume_from_spec._replace(source=service)
                except NoSuchService:
                    try:
                        container = Container.from_id(self.client, volume_from_spec.source)
                        volume_from_spec = volume_from_spec._replace(source=container)
                    except APIError:
                        raise ConfigurationError(
                            'Service "%s" mounts volumes from "%s", which is '
                            'not the name of a service or container.' % (
                                service_dict['name'],
                                volume_from_spec.source))
                volumes_from.append(volume_from_spec)
            del service_dict['volumes_from']
        return volumes_from
    def get_network_mode(self, service_dict, networks):
        network_mode = service_dict.pop('network_mode', None)
        if not network_mode:
            if self.networks.use_networking:
                return NetworkMode(networks[0]) if networks else NetworkMode('none')
            return NetworkMode(None)

    def get_net(self, service_dict):
        net = service_dict.pop('net', None)
        if not net:
            if self.use_networking:
                return Net(self.name)
            return Net(None)
        service_name = get_service_name_from_network_mode(network_mode)
        if service_name:
            return ServiceNetworkMode(self.get_service(service_name))

        net_name = get_service_name_from_net(net)
        if not net_name:
            return Net(net)
        container_name = get_container_name_from_network_mode(network_mode)
        if container_name:
            try:
                return ContainerNetworkMode(Container.from_id(self.client, container_name))
            except APIError:
                raise ConfigurationError(
                    "Service '{name}' uses the network stack of container '{dep}' which "
                    "does not exist.".format(name=service_dict['name'], dep=container_name))

        try:
            return ServiceNet(self.get_service(net_name))
        except NoSuchService:
            pass
        try:
            return ContainerNet(Container.from_id(self.client, net_name))
        except APIError:
            raise ConfigurationError(
                'Service "%s" is trying to use the network of "%s", '
                'which is not the name of a service or container.' % (
                    service_dict['name'],
                    net_name))
        return NetworkMode(network_mode)

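The new `Project.get_network_mode` above resolves a service's `network_mode` string in a fixed order: unset falls back to the first declared network (or `none`), then `service:<name>`, then `container:<name>`, and finally a literal mode such as `host`. A simplified standalone sketch of that precedence (plain string parsing stands in for the `sort_services` helpers; return values are illustrative tags, not the real NetworkMode objects):

```python
def resolve_network_mode(mode, service_names, networks, use_networking):
    # Unset: default to the first attached network, or Docker's default.
    if not mode:
        if use_networking:
            return ('network', networks[0] if networks else 'none')
        return ('default', None)
    # service:<name> borrows another service's network stack.
    if mode.startswith('service:'):
        name = mode.split(':', 1)[1]
        if name not in service_names:
            raise ValueError('no such service: {}'.format(name))
        return ('service', name)
    # container:<id> borrows an existing container's stack.
    if mode.startswith('container:'):
        return ('container', mode.split(':', 1)[1])
    # Anything else is passed straight through (host, bridge, none, ...).
    return ('raw', mode)
```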
    def start(self, service_names=None, **options):
        containers = []
        for service in self.get_services(service_names):
            service.start(**options)
            service_containers = service.start(**options)
            containers.extend(service_containers)
        return containers

    def stop(self, service_names=None, **options):
        parallel_execute(
            objects=self.containers(service_names),
            obj_callable=lambda c: c.stop(**options),
            msg_index=lambda c: c.name,
            msg="Stopping"
        )
        parallel.parallel_stop(self.containers(service_names), options)

    def pause(self, service_names=None, **options):
        for service in reversed(self.get_services(service_names)):
            service.pause(**options)
        containers = self.containers(service_names)
        parallel.parallel_pause(reversed(containers), options)
        return containers

    def unpause(self, service_names=None, **options):
        for service in self.get_services(service_names):
            service.unpause(**options)
        containers = self.containers(service_names)
        parallel.parallel_unpause(containers, options)
        return containers

    def kill(self, service_names=None, **options):
        parallel_execute(
            objects=self.containers(service_names),
            obj_callable=lambda c: c.kill(**options),
            msg_index=lambda c: c.name,
            msg="Killing"
        )
        parallel.parallel_kill(self.containers(service_names), options)

    def remove_stopped(self, service_names=None, **options):
        all_containers = self.containers(service_names, stopped=True)
        stopped_containers = [c for c in all_containers if not c.is_running]
        parallel_execute(
            objects=stopped_containers,
            obj_callable=lambda c: c.remove(**options),
            msg_index=lambda c: c.name,
            msg="Removing"
        )
        parallel.parallel_remove(self.containers(service_names, stopped=True), options)

    def down(self, remove_image_type, include_volumes):
        self.stop()
        self.remove_stopped(v=include_volumes)
        self.networks.remove()

        if include_volumes:
            self.volumes.remove()

        self.remove_images(remove_image_type)

    def remove_images(self, remove_image_type):
        for service in self.get_services():
            service.remove_image(remove_image_type)

    def restart(self, service_names=None, **options):
        for service in self.get_services(service_names):
            service.restart(**options)
        containers = self.containers(service_names, stopped=True)
        parallel.parallel_restart(containers, options)
        return containers

    def build(self, service_names=None, no_cache=False, pull=False, force_rm=False):
        for service in self.get_services(service_names):
@@ -236,6 +247,51 @@
            else:
                log.info('%s uses an image, skipping' % service.name)

    def create(self, service_names=None, strategy=ConvergenceStrategy.changed, do_build=True):
        services = self.get_services_without_duplicate(service_names, include_deps=True)

        plans = self._get_convergence_plans(services, strategy)

        for service in services:
            service.execute_convergence_plan(
                plans[service.name],
                do_build,
                detached=True,
                start=False)

    def events(self):
        def build_container_event(event, container):
            time = datetime.datetime.fromtimestamp(event['time'])
            time = time.replace(
                microsecond=microseconds_from_time_nano(event['timeNano']))
            return {
                'time': time,
                'type': 'container',
                'action': event['status'],
                'id': container.id,
                'service': container.service,
                'attributes': {
                    'name': container.name,
                    'image': event['from'],
                }
            }

        service_names = set(self.service_names)
        for event in self.client.events(
            filters={'label': self.labels()},
            decode=True
        ):
            if event['status'] in IMAGE_EVENTS:
                # We don't receive any image events because labels aren't applied
                # to images
                continue

            # TODO: get labels from the API v1.22 , see github issue 2618
            container = Container.from_id(self.client, event['id'])
            if container.service not in service_names:
                continue
            yield build_container_event(event, container)

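`build_container_event` above restores microsecond precision by combining the second-resolution `time` field with the nanosecond `timeNano` field from the Docker event. A sketch of that conversion; the real `microseconds_from_time_nano` is imported from `.utils`, and the one-line implementation shown here is an assumption:

```python
import datetime


def microseconds_from_time_nano(time_nano):
    # Assumed implementation: keep the sub-second part of a nanosecond
    # timestamp, truncated to whole microseconds.
    return int(time_nano % 1000000000 / 1000)


def event_time(event):
    # Second-resolution timestamp, then graft the microseconds back on.
    time = datetime.datetime.fromtimestamp(event['time'])
    return time.replace(
        microsecond=microseconds_from_time_nano(event['timeNano']))
```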
    def up(self,
           service_names=None,
           start_deps=True,
@@ -244,16 +300,12 @@
           timeout=DEFAULT_TIMEOUT,
           detached=False):

        services = self.get_services(service_names, include_deps=start_deps)

        for service in services:
            service.remove_duplicate_containers()
        self.initialize()
        services = self.get_services_without_duplicate(
            service_names,
            include_deps=start_deps)

        plans = self._get_convergence_plans(services, strategy)

        if self.use_networking and self.uses_default_network():
            self.ensure_network_exists()

        return [
            container
            for service in services
@@ -265,6 +317,10 @@
            )
        ]

    def initialize(self):
        self.networks.initialize()
        self.volumes.initialize()

    def _get_convergence_plans(self, services, strategy):
        plans = {}

@@ -272,8 +328,8 @@
            updated_dependencies = [
                name
                for name in service.get_dependency_names()
                if name in plans
                and plans[name].action in ('recreate', 'create')
                if name in plans and
                plans[name].action in ('recreate', 'create')
            ]

            if updated_dependencies and strategy.allows_recreate:
@@ -307,38 +363,8 @@
        def matches_service_names(container):
            return container.labels.get(LABEL_SERVICE) in service_names

        if not containers:
            check_for_legacy_containers(
                self.client,
                self.name,
                self.service_names,
            )

        return [c for c in containers if matches_service_names(c)]

    def get_network(self):
        try:
            return self.client.inspect_network(self.name)
        except NotFound:
            return None

    def ensure_network_exists(self):
        # TODO: recreate network if driver has changed?
        if self.get_network() is None:
            log.info(
                'Creating network "{}" with driver "{}"'
                .format(self.name, self.network_driver)
            )
            self.client.create_network(self.name, driver=self.network_driver)

    def remove_network(self):
        network = self.get_network()
        if network:
            self.client.remove_network(network['Id'])

    def uses_default_network(self):
        return any(service.net.mode == self.name for service in self.services)

    def _inject_deps(self, acc, service):
        dep_names = service.get_dependency_names()

@@ -354,24 +380,32 @@
        return acc + dep_services


def remove_links(service_dicts):
    services_with_links = [s for s in service_dicts if 'links' in s]
    if not services_with_links:
        return
def get_volumes_from(project, service_dict):
    volumes_from = service_dict.pop('volumes_from', None)
    if not volumes_from:
        return []

    if len(services_with_links) == 1:
        prefix = '"{}" defines'.format(services_with_links[0]['name'])
    else:
        prefix = 'Some services ({}) define'.format(
            ", ".join('"{}"'.format(s['name']) for s in services_with_links))
    def build_volume_from(spec):
        if spec.type == 'service':
            try:
                return spec._replace(source=project.get_service(spec.source))
            except NoSuchService:
                pass

    log.warn(
        '\n{} links, which are not compatible with Docker networking and will be ignored.\n'
        'Future versions of Docker will not support links - you should remove them for '
        'forwards-compatibility.\n'.format(prefix))
        if spec.type == 'container':
            try:
                container = Container.from_id(project.client, spec.source)
                return spec._replace(source=container)
            except APIError:
                pass

    for s in services_with_links:
        del s['links']
        raise ConfigurationError(
            "Service \"{}\" mounts volumes from \"{}\", which is not the name "
            "of a service or container.".format(
                service_dict['name'],
                spec.source))

    return [build_volume_from(vf) for vf in volumes_from]


class NoSuchService(Exception):
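The module-level `get_volumes_from` above swaps each spec's source string for the `Service` or `Container` object it names, using `namedtuple._replace`. A self-contained illustration of that resolution pattern; the spec shape used here is an assumption for the example (the real `VolumeFromSpec` lives in Compose's config types):

```python
from collections import namedtuple

# Hypothetical spec shape for illustration only.
VolumeFromSpec = namedtuple('VolumeFromSpec', 'source mode type')


def resolve_volume_from(spec, services):
    """Replace the source name with the service object it names."""
    if spec.type == 'service' and spec.source in services:
        # _replace returns a copy with only the given field changed.
        return spec._replace(source=services[spec.source])
    raise ValueError(
        'mounts volumes from "{}", which is not the name '
        'of a service or container.'.format(spec.source))
```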
@@ -26,11 +26,11 @@ from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .const import LABEL_VERSION
from .container import Container
from .legacy import check_for_legacy_containers
from .parallel import parallel_execute
from .parallel import parallel_start
from .progress_stream import stream_output
from .progress_stream import StreamOutputError
from .utils import json_hash
from .utils import parallel_execute


log = logging.getLogger(__name__)
@@ -47,7 +47,6 @@ DOCKER_START_KEYS = [
    'extra_hosts',
    'ipc',
    'read_only',
    'net',
    'log_driver',
    'log_opt',
    'mem_limit',
@@ -57,6 +56,7 @@ DOCKER_START_KEYS = [
    'restart',
    'volumes_from',
    'security_opt',
    'cpu_quota',
]


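The `ImageType` enum introduced in the next hunk drives `Service.remove_image` later in the diff. A condensed sketch of that guard logic, with the enum values copied from the diff (the `should_remove_image` helper name is invented for the example):

```python
import enum


@enum.unique
class ImageType(enum.Enum):
    """Enumeration for the types of images known to compose."""
    none = 0
    local = 1
    all = 2


def should_remove_image(image_type, has_custom_image):
    # Never remove for `none`; for `local`, skip images that were
    # pulled rather than built (i.e. an explicit `image:` is set).
    if not image_type or image_type == ImageType.none:
        return False
    if image_type == ImageType.local and has_custom_image:
        return False
    return True
```

This is the decision `docker-compose down --rmi local|all` relies on.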
@@ -95,6 +95,14 @@ class ConvergenceStrategy(enum.Enum):
        return self is not type(self).never


@enum.unique
class ImageType(enum.Enum):
    """Enumeration for the types of images known to compose."""
    none = 0
    local = 1
    all = 2


class Service(object):
    def __init__(
        self,
@@ -104,7 +112,8 @@ class Service(object):
        use_networking=False,
        links=None,
        volumes_from=None,
        net=None,
        network_mode=None,
        networks=None,
        **options
    ):
        self.name = name
@@ -113,27 +122,19 @@ class Service(object):
        self.use_networking = use_networking
        self.links = links or []
        self.volumes_from = volumes_from or []
        self.net = net or Net(None)
        self.network_mode = network_mode or NetworkMode(None)
        self.networks = networks or []
        self.options = options

    def containers(self, stopped=False, one_off=False, filters={}):
        filters.update({'label': self.labels(one_off=one_off)})

        containers = list(filter(None, [
        return list(filter(None, [
            Container.from_ps(self.client, container)
            for container in self.client.containers(
                all=stopped,
                filters=filters)]))

        if not containers:
            check_for_legacy_containers(
                self.client,
                self.project,
                [self.name],
            )

        return containers

    def get_container(self, number=1):
        """Return a :class:`compose.container.Container` for this service. The
        container must be active, and match `number`.
@@ -145,36 +146,10 @@
        raise ValueError("No container found for %s_%s" % (self.name, number))

    def start(self, **options):
        for c in self.containers(stopped=True):
        containers = self.containers(stopped=True)
        for c in containers:
            self.start_container_if_stopped(c, **options)

    # TODO: remove these functions, project takes care of starting/stopping,
    def stop(self, **options):
        for c in self.containers():
            log.info("Stopping %s" % c.name)
            c.stop(**options)

    def pause(self, **options):
        for c in self.containers(filters={'status': 'running'}):
            log.info("Pausing %s" % c.name)
            c.pause(**options)

    def unpause(self, **options):
        for c in self.containers(filters={'status': 'paused'}):
            log.info("Unpausing %s" % c.name)
            c.unpause()

    def kill(self, **options):
        for c in self.containers():
            log.info("Killing %s" % c.name)
            c.kill(**options)

    def restart(self, **options):
        for c in self.containers(stopped=True):
            log.info("Restarting %s" % c.name)
            c.restart(**options)

    # end TODO
        return containers

    def scale(self, desired_num, timeout=DEFAULT_TIMEOUT):
        """
@@ -199,9 +174,13 @@

        def create_and_start(service, number):
            container = service.create_container(number=number, quiet=True)
            container.start()
            service.start_container(container)
            return container

        def stop_and_remove(container):
            container.stop(timeout=timeout)
            container.remove()

        running_containers = self.containers(stopped=False)
        num_running = len(running_containers)

@@ -226,12 +205,7 @@
        else:
            containers_to_start = stopped_containers

        parallel_execute(
            objects=containers_to_start,
            obj_callable=lambda c: c.start(),
            msg_index=lambda c: c.name,
            msg="Starting"
        )
        parallel_start(containers_to_start, {})

        num_running += len(containers_to_start)

@@ -244,36 +218,26 @@
            ]

            parallel_execute(
                objects=container_numbers,
                obj_callable=lambda n: create_and_start(service=self, number=n),
                msg_index=lambda n: n,
                msg="Creating and starting"
                container_numbers,
                lambda n: create_and_start(service=self, number=n),
                lambda n: n,
                "Creating and starting"
            )

        if desired_num < num_running:
            num_to_stop = num_running - desired_num
            sorted_running_containers = sorted(running_containers, key=attrgetter('number'))
            containers_to_stop = sorted_running_containers[-num_to_stop:]

            sorted_running_containers = sorted(
                running_containers,
                key=attrgetter('number'))

            parallel_execute(
                objects=containers_to_stop,
                obj_callable=lambda c: c.stop(timeout=timeout),
                msg_index=lambda c: c.name,
                msg="Stopping"
                sorted_running_containers[-num_to_stop:],
                stop_and_remove,
                lambda c: c.name,
                "Stopping and removing",
            )

        self.remove_stopped()

    def remove_stopped(self, **options):
        containers = [c for c in self.containers(stopped=True) if not c.is_running]

        parallel_execute(
            objects=containers,
            obj_callable=lambda c: c.remove(**options),
            msg_index=lambda c: c.name,
            msg="Removing"
        )

    def create_container(self,
                         one_off=False,
                         do_build=True,
@@ -325,10 +289,7 @@

    @property
    def image_name(self):
        if self.can_be_built():
            return self.full_name
        else:
            return self.options['image']
        return self.options.get('image', '{s.project}_{s.name}'.format(s=self))

    def convergence_plan(self, strategy=ConvergenceStrategy.changed):
        containers = self.containers(stopped=True)
@@ -381,7 +342,8 @@
            plan,
            do_build=True,
            timeout=DEFAULT_TIMEOUT,
            detached=False):
            detached=False,
            start=True):
        (action, containers) = plan
        should_attach_logs = not detached

@@ -391,7 +353,8 @@
            if should_attach_logs:
                container.attach_log_stream()

            container.start()
            if start:
                self.start_container(container)

            return [container]

@@ -401,14 +364,16 @@
                    container,
                    do_build=do_build,
                    timeout=timeout,
                    attach_logs=should_attach_logs
                    attach_logs=should_attach_logs,
                    start_new_container=start
                )
                for container in containers
            ]

        elif action == 'start':
            for container in containers:
                self.start_container_if_stopped(container, attach_logs=should_attach_logs)
            if start:
                for container in containers:
                    self.start_container_if_stopped(container, attach_logs=should_attach_logs)

            return containers

@@ -426,7 +391,8 @@
            container,
            do_build=False,
            timeout=DEFAULT_TIMEOUT,
            attach_logs=False):
            attach_logs=False,
            start_new_container=True):
        """Recreate a container.

        The original container is renamed to a temporary name so that data
@@ -445,7 +411,8 @@
        )
        if attach_logs:
            new_container.attach_log_stream()
        new_container.start()
        if start_new_container:
            self.start_container(new_container)
        container.remove()
        return new_container

@@ -454,9 +421,27 @@
        log.info("Starting %s" % container.name)
        if attach_logs:
            container.attach_log_stream()
        container.start()
        return self.start_container(container)

    def start_container(self, container):
        self.connect_container_to_networks(container)
        container.start()
        return container

    def connect_container_to_networks(self, container):
        connected_networks = container.get('NetworkSettings.Networks')

        for network in self.networks:
            if network in connected_networks:
                self.client.disconnect_container_from_network(
                    container.id, network)

            self.client.connect_container_to_network(
                container.id, network,
                aliases=self._get_aliases(container),
                links=self._get_links(False),
            )

    def remove_duplicate_containers(self, timeout=DEFAULT_TIMEOUT):
        for c in self.duplicate_containers():
            log.info('Removing %s' % c.name)
@@ -486,17 +471,20 @@
            'options': self.options,
            'image_id': self.image()['Id'],
            'links': self.get_link_names(),
            'net': self.net.id,
            'net': self.network_mode.id,
            'networks': self.networks,
            'volumes_from': [
                (v.source.name, v.mode) for v in self.volumes_from if isinstance(v.source, Service)
                (v.source.name, v.mode)
                for v in self.volumes_from if isinstance(v.source, Service)
            ],
        }

    def get_dependency_names(self):
        net_name = self.net.service_name
        net_name = self.network_mode.service_name
        return (self.get_linked_service_names() +
                self.get_volumes_from_names() +
                ([net_name] if net_name else []))
                ([net_name] if net_name else []) +
                self.options.get('depends_on', []))

    def get_linked_service_names(self):
        return [service.name for (service, _) in self.links]
@@ -523,36 +511,41 @@
        numbers = [c.number for c in containers]
        return 1 if not numbers else max(numbers) + 1

    def _get_links(self, link_to_self):
        if self.use_networking:
    def _get_aliases(self, container):
        if container.labels.get(LABEL_ONE_OFF) == "True":
            return []

        links = []
        return [self.name, container.short_id]

    def _get_links(self, link_to_self):
        links = {}

        for service, link_name in self.links:
            for container in service.containers():
                links.append((container.name, link_name or service.name))
                links.append((container.name, container.name))
                links.append((container.name, container.name_without_project))
                links[link_name or service.name] = container.name
                links[container.name] = container.name
                links[container.name_without_project] = container.name

        if link_to_self:
            for container in self.containers():
                links.append((container.name, self.name))
                links.append((container.name, container.name))
                links.append((container.name, container.name_without_project))
                links[self.name] = container.name
                links[container.name] = container.name
                links[container.name_without_project] = container.name

        for external_link in self.options.get('external_links') or []:
            if ':' not in external_link:
                link_name = external_link
            else:
                external_link, link_name = external_link.split(':')
            links.append((external_link, link_name))
        return links
            links[link_name] = external_link

        return [
            (alias, container_name)
            for (container_name, alias) in links.items()
        ]

    def _get_volumes_from(self):
        volumes_from = []
        for volume_from_spec in self.volumes_from:
            volumes = build_volume_from(volume_from_spec)
            volumes_from.extend(volumes)

        return volumes_from
        return [build_volume_from(spec) for spec in self.volumes_from]

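The rewritten `_get_links` above replaces a list of `(container, alias)` tuples with an alias-keyed dict, so each alias resolves to exactly one container. A standalone sketch of that dedup step (container names invented for illustration):

```python
def dedupe_links(pairs):
    # pairs: (container_name, alias) tuples, possibly repeating an alias.
    # Keying by alias means later entries win, so each alias links once.
    links = {}
    for container_name, alias in pairs:
        links[alias] = container_name
    # docker-py expects (container_name, alias) pairs; sort for stability.
    return sorted((name, alias) for alias, name in links.items())
```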
    def _get_container_create_options(
        self,
@@ -579,9 +572,9 @@
        # unqualified hostname and a domainname unless domainname
        # was also given explicitly. This matches the behavior of
        # the official Docker CLI in that scenario.
        if ('hostname' in container_options
                and 'domainname' not in container_options
                and '.' in container_options['hostname']):
        if ('hostname' in container_options and
                'domainname' not in container_options and
                '.' in container_options['hostname']):
            parts = container_options['hostname'].partition('.')
            container_options['hostname'] = parts[0]
            container_options['domainname'] = parts[2]
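The condition above splits a dotted hostname into a hostname plus a domainname when no explicit domainname was given, matching the Docker CLI. The same logic, extracted as a standalone function for illustration:

```python
def split_hostname(container_options):
    # A dotted hostname with no explicit domainname is split at the
    # first dot: "web.example.com" -> hostname "web", domain "example.com".
    if ('hostname' in container_options and
            'domainname' not in container_options and
            '.' in container_options['hostname']):
        parts = container_options['hostname'].partition('.')
        container_options['hostname'] = parts[0]
        container_options['domainname'] = parts[2]
    return container_options
```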
@@ -634,17 +627,16 @@
    def _get_container_host_config(self, override_options, one_off=False):
        options = dict(self.options, **override_options)

        log_config = LogConfig(
            type=options.get('log_driver', ""),
            config=options.get('log_opt', None)
        )
        logging_dict = options.get('logging', None)
        log_config = get_log_config(logging_dict)

        return self.client.create_host_config(
            links=self._get_links(link_to_self=one_off),
            port_bindings=build_port_bindings(options.get('ports') or []),
            binds=options.get('binds'),
            volumes_from=self._get_volumes_from(),
            privileged=options.get('privileged', False),
            network_mode=self.net.mode,
            network_mode=self.network_mode.mode,
            devices=options.get('devices'),
            dns=options.get('dns'),
            dns_search=options.get('dns_search'),
@@ -661,12 +653,14 @@
            security_opt=options.get('security_opt'),
            ipc_mode=options.get('ipc'),
            cgroup_parent=options.get('cgroup_parent'),
            cpu_quota=options.get('cpu_quota'),
        )

    def build(self, no_cache=False, pull=False, force_rm=False):
        log.info('Building %s' % self.name)

        path = self.options['build']
        build_opts = self.options.get('build', {})
        path = build_opts.get('context')
        # python2 os.path() doesn't support unicode, so we need to encode it to
        # a byte string
        if not six.PY3:
@@ -680,7 +674,8 @@
            forcerm=force_rm,
            pull=pull,
            nocache=no_cache,
            dockerfile=self.options.get('dockerfile', None),
            dockerfile=build_opts.get('dockerfile', None),
            buildargs=build_opts.get('args', None),
        )

        try:
@@ -709,13 +704,6 @@
    def can_be_built(self):
        return 'build' in self.options

    @property
    def full_name(self):
        """
        The tag to give to images built for this service.
        """
        return '%s_%s' % (self.project, self.name)

    def labels(self, one_off=False):
        return [
            '{0}={1}'.format(LABEL_PROJECT, self.project),
@@ -726,6 +714,20 @@
    def custom_container_name(self):
        return self.options.get('container_name')

    def remove_image(self, image_type):
        if not image_type or image_type == ImageType.none:
            return False
        if image_type == ImageType.local and self.options.get('image'):
            return False

        log.info("Removing image %s", self.image_name)
        try:
            self.client.remove_image(self.image_name)
            return True
        except APIError as e:
            log.error("Failed to remove image for service %s: %s", self.name, e)
            return False

    def specifies_host_port(self):
        def has_host_port(binding):
            _, external_bindings = split_port(binding)
@ -772,22 +774,22 @@ class Service(object):
|
||||
log.error(six.text_type(e))
|
||||
|
||||
|
||||
class Net(object):
|
||||
class NetworkMode(object):
|
||||
"""A `standard` network mode (ex: host, bridge)"""
|
||||
|
||||
service_name = None
|
||||
|
||||
def __init__(self, net):
|
||||
self.net = net
|
||||
def __init__(self, network_mode):
|
||||
self.network_mode = network_mode
|
||||
|
||||
@property
|
||||
def id(self):
|
||||
return self.net
|
||||
return self.network_mode
|
||||
|
||||
mode = id
|
||||
|
||||
|
||||
class ContainerNet(object):
|
||||
class ContainerNetworkMode(object):
|
||||
"""A network mode that uses a container's network stack."""
|
||||
|
||||
service_name = None
|
||||
@ -804,7 +806,7 @@ class ContainerNet(object):
|
||||
return 'container:' + self.container.id
|
||||
|
||||
|
||||
class ServiceNet(object):
|
||||
class ServiceNetworkMode(object):
|
||||
"""A network mode that uses a service's network stack."""
|
||||
|
||||
def __init__(self, service):
|
||||
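The renamed `NetworkMode` class in the hunk above can be sketched standalone (the class body is taken from the diff; the usage lines are illustrative):

```python
class NetworkMode(object):
    """A `standard` network mode (ex: host, bridge)."""

    service_name = None

    def __init__(self, network_mode):
        self.network_mode = network_mode

    @property
    def id(self):
        return self.network_mode

    # `mode` aliases `id`, so every *NetworkMode class exposes a uniform
    # `.mode` attribute regardless of how the mode string is derived.
    mode = id


print(NetworkMode('host').mode)  # host
```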
@@ -892,7 +894,13 @@ def get_container_data_volumes(container, volumes_option):
    a mapping of volume bindings for those volumes.
    """
    volumes = []
    container_volumes = container.get('Volumes') or {}
    volumes_option = volumes_option or []

    container_mounts = dict(
        (mount['Destination'], mount)
        for mount in container.get('Mounts') or {}
    )

    image_volumes = [
        VolumeSpec.parse(volume)
        for volume in
@@ -904,13 +912,18 @@ def get_container_data_volumes(container, volumes_option):
        if volume.external:
            continue

        volume_path = container_volumes.get(volume.internal)
        mount = container_mounts.get(volume.internal)

        # New volume, doesn't exist in the old container
        if not volume_path:
        if not mount:
            continue

        # Volume was previously a host volume, now it's a container volume
        if not mount.get('Name'):
            continue

        # Copy existing volume from old container
        volume = volume._replace(external=volume_path)
        volume = volume._replace(external=mount['Source'])
        volumes.append(volume)

    return volumes
@@ -923,6 +936,7 @@ def warn_on_masked_volume(volumes_option, container_volumes, service):

    for volume in volumes_option:
        if (
            volume.external and
            volume.internal in container_volumes and
            container_volumes.get(volume.internal) != volume.external
        ):
@@ -938,7 +952,7 @@ def warn_on_masked_volume(volumes_option, container_volumes, service):


def build_volume_binding(volume_spec):
    return volume_spec.internal, "{}:{}:{}".format(*volume_spec)
    return volume_spec.internal, volume_spec.repr()


def build_volume_from(volume_from_spec):
@@ -949,12 +963,14 @@ def build_volume_from(volume_from_spec):
    if isinstance(volume_from_spec.source, Service):
        containers = volume_from_spec.source.containers(stopped=True)
        if not containers:
            return ["{}:{}".format(volume_from_spec.source.create_container().id, volume_from_spec.mode)]
            return "{}:{}".format(
                volume_from_spec.source.create_container().id,
                volume_from_spec.mode)

        container = containers[0]
        return ["{}:{}".format(container.id, volume_from_spec.mode)]
        return "{}:{}".format(container.id, volume_from_spec.mode)
    elif isinstance(volume_from_spec.source, Container):
        return ["{}:{}".format(volume_from_spec.source.id, volume_from_spec.mode)]
        return "{}:{}".format(volume_from_spec.source.id, volume_from_spec.mode)


# Labels
@@ -989,3 +1005,12 @@ def build_ulimits(ulimit_config):
        ulimits.append(ulimit_dict)

    return ulimits


def get_log_config(logging_dict):
    log_driver = logging_dict.get('driver', "") if logging_dict else ""
    log_options = logging_dict.get('options', None) if logging_dict else None
    return LogConfig(
        type=log_driver,
        config=log_options
    )

compose/utils.py
@@ -1,85 +1,17 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import codecs
import hashlib
import json
import json.decoder
import logging
import sys
from threading import Thread

import six
from docker.errors import APIError
from six.moves.queue import Empty
from six.moves.queue import Queue


log = logging.getLogger(__name__)

json_decoder = json.JSONDecoder()


def parallel_execute(objects, obj_callable, msg_index, msg):
    """
    For a given list of objects, call the callable passing in the first
    object we give it.
    """
    stream = get_output_stream(sys.stdout)
    lines = []

    for obj in objects:
        write_out_msg(stream, lines, msg_index(obj), msg)

    q = Queue()

    def inner_execute_function(an_callable, parameter, msg_index):
        error = None
        try:
            result = an_callable(parameter)
        except APIError as e:
            error = e.explanation
            result = "error"
        except Exception as e:
            error = e
            result = 'unexpected_exception'

        q.put((msg_index, result, error))

    for an_object in objects:
        t = Thread(
            target=inner_execute_function,
            args=(obj_callable, an_object, msg_index(an_object)),
        )
        t.daemon = True
        t.start()

    done = 0
    errors = {}
    total_to_execute = len(objects)

    while done < total_to_execute:
        try:
            msg_index, result, error = q.get(timeout=1)

            if result == 'unexpected_exception':
                errors[msg_index] = result, error
            if result == 'error':
                errors[msg_index] = result, error
                write_out_msg(stream, lines, msg_index, msg, status='error')
            else:
                write_out_msg(stream, lines, msg_index, msg)
            done += 1
        except Empty:
            pass

    if not errors:
        return

    stream.write("\n")
    for msg_index, (result, error) in errors.items():
        stream.write("ERROR: for {} {} \n".format(msg_index, error))
        if result == 'unexpected_exception':
            raise error


def get_output_stream(stream):
    if six.PY3:
        return stream
@@ -151,32 +83,12 @@ def json_stream(stream):
    return split_buffer(stream, json_splitter, json_decoder.decode)


def write_out_msg(stream, lines, msg_index, msg, status="done"):
    """
    Using special ANSI code characters we can write out the msg over the top of
    a previous status message, if it exists.
    """
    obj_index = msg_index
    if msg_index in lines:
        position = lines.index(obj_index)
        diff = len(lines) - position
        # move up
        stream.write("%c[%dA" % (27, diff))
        # erase
        stream.write("%c[2K\r" % 27)
        stream.write("{} {} ... {}\r".format(msg, obj_index, status))
        # move back down
        stream.write("%c[%dB" % (27, diff))
    else:
        diff = 0
        lines.append(obj_index)
        stream.write("{} {} ... \r\n".format(msg, obj_index))

    stream.flush()


def json_hash(obj):
    dump = json.dumps(obj, sort_keys=True, separators=(',', ':'))
    h = hashlib.sha256()
    h.update(dump.encode('utf8'))
    return h.hexdigest()


def microseconds_from_time_nano(time_nano):
    return int(time_nano % 1000000000 / 1000)
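The `json_hash` helper above relies on a canonical JSON dump. A small sketch of why the `sort_keys`/`separators` arguments matter (the function body is copied from the diff; the sample dicts are illustrative):

```python
import hashlib
import json


def json_hash(obj):
    # Sorted keys plus compact separators give a canonical serialization,
    # so two equal dicts hash identically regardless of key insertion order.
    dump = json.dumps(obj, sort_keys=True, separators=(',', ':'))
    h = hashlib.sha256()
    h.update(dump.encode('utf8'))
    return h.hexdigest()


print(json_hash({'image': 'busybox', 'command': 'top'}))
```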
compose/volume.py (new file)
@@ -0,0 +1,122 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import logging

from docker.errors import APIError
from docker.errors import NotFound

from .config import ConfigurationError

log = logging.getLogger(__name__)


class Volume(object):
    def __init__(self, client, project, name, driver=None, driver_opts=None,
                 external_name=None):
        self.client = client
        self.project = project
        self.name = name
        self.driver = driver
        self.driver_opts = driver_opts
        self.external_name = external_name

    def create(self):
        return self.client.create_volume(
            self.full_name, self.driver, self.driver_opts
        )

    def remove(self):
        if self.external:
            log.info("Volume %s is external, skipping", self.full_name)
            return
        log.info("Removing volume %s", self.full_name)
        return self.client.remove_volume(self.full_name)

    def inspect(self):
        return self.client.inspect_volume(self.full_name)

    def exists(self):
        try:
            self.inspect()
        except NotFound:
            return False
        return True

    @property
    def external(self):
        return bool(self.external_name)

    @property
    def full_name(self):
        if self.external_name:
            return self.external_name
        return '{0}_{1}'.format(self.project, self.name)


class ProjectVolumes(object):

    def __init__(self, volumes):
        self.volumes = volumes

    @classmethod
    def from_config(cls, name, config_data, client):
        config_volumes = config_data.volumes or {}
        volumes = {
            vol_name: Volume(
                client=client,
                project=name,
                name=vol_name,
                driver=data.get('driver'),
                driver_opts=data.get('driver_opts'),
                external_name=data.get('external_name'))
            for vol_name, data in config_volumes.items()
        }
        return cls(volumes)

    def remove(self):
        for volume in self.volumes.values():
            volume.remove()

    def initialize(self):
        try:
            for volume in self.volumes.values():
                if volume.external:
                    log.debug(
                        'Volume {0} declared as external. No new '
                        'volume will be created.'.format(volume.name)
                    )
                    if not volume.exists():
                        raise ConfigurationError(
                            'Volume {name} declared as external, but could'
                            ' not be found. Please create the volume manually'
                            ' using `{command}{name}` and try again.'.format(
                                name=volume.full_name,
                                command='docker volume create --name='
                            )
                        )
                    continue
                volume.create()
        except NotFound:
            raise ConfigurationError(
                'Volume %s specifies nonexistent driver %s' % (volume.name, volume.driver)
            )
        except APIError as e:
            if 'Choose a different volume name' in str(e):
                raise ConfigurationError(
                    'Configuration for volume {0} specifies driver {1}, but '
                    'a volume with the same name uses a different driver '
                    '({3}). If you wish to use the new configuration, please '
                    'remove the existing volume "{2}" first:\n'
                    '$ docker volume rm {2}'.format(
                        volume.name, volume.driver, volume.full_name,
                        volume.inspect()['Driver']
                    )
                )

    def namespace_spec(self, volume_spec):
        if not volume_spec.is_named_volume:
            return volume_spec

        volume = self.volumes[volume_spec.external]
        return volume_spec._replace(external=volume.full_name)
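The naming rule in `Volume.full_name` above can be sketched as a standalone function (the function name and sample values are illustrative; the behaviour mirrors the property in the diff):

```python
def volume_full_name(project, name, external_name=None):
    # External volumes keep the user-supplied name verbatim; volumes the
    # project manages are namespaced as "<project>_<name>".
    if external_name:
        return external_name
    return '{0}_{1}'.format(project, name)


print(volume_full_name('myapp', 'data'))            # myapp_data
print(volume_full_name('myapp', 'data', 'shared'))  # shared
```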
contrib/completion/bash/docker-compose
@@ -17,12 +17,22 @@
# . ~/.docker-compose-completion.sh


__docker_compose_q() {
    docker-compose 2>/dev/null ${compose_file:+-f $compose_file} ${compose_project:+-p $compose_project} "$@"
}

# suppress trailing whitespace
__docker_compose_nospace() {
    # compopt is not available in ancient bash versions
    type compopt &>/dev/null && compopt -o nospace
}

# For compatibility reasons, Compose and therefore its completion supports several
# stack compositon files as listed here, in descending priority.
# Support for these filenames might be dropped in some future version.
__docker_compose_compose_file() {
    local file
    for file in docker-compose.y{,a}ml fig.y{,a}ml ; do
    for file in docker-compose.y{,a}ml ; do
        [ -e $file ] && {
            echo $file
            return
@@ -33,7 +43,7 @@ __docker_compose_compose_file() {

# Extracts all service names from the compose file.
___docker_compose_all_services_in_compose_file() {
    awk -F: '/^[a-zA-Z0-9]/{print $1}' "${compose_file:-$(__docker_compose_compose_file)}" 2>/dev/null
    __docker_compose_q config --services
}

# All services, even those without an existing container
@@ -43,8 +53,12 @@ __docker_compose_services_all() {

# All services that have an entry with the given key in their compose_file section
___docker_compose_services_with_key() {
    # flatten sections to one line, then filter lines containing the key and return section name.
    awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' "${compose_file:-$(__docker_compose_compose_file)}" 2>/dev/null | awk -F: -v key=": +$1:" '$0 ~ key {print $1}'
    # flatten sections under "services" to one line, then filter lines containing the key and return section name
    __docker_compose_q config \
        | sed -n -e '/^services:/,/^[^ ]/p' \
        | sed -n 's/^ //p' \
        | awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' \
        | awk -F: -v key=": +$1:" '$0 ~ key {print $1}'
}

# All services that are defined by a Dockerfile reference
@@ -61,11 +75,9 @@ __docker_compose_services_from_image() {
# by a boolean expression passed in as argument.
__docker_compose_services_with() {
    local containers names
    containers="$(docker-compose 2>/dev/null ${compose_file:+-f $compose_file} ${compose_project:+-p $compose_project} ps -q)"
    names=( $(docker 2>/dev/null inspect --format "{{if ${1:-true}}} {{ .Name }} {{end}}" $containers) )
    names=( ${names[@]%_*} ) # strip trailing numbers
    names=( ${names[@]#*_} ) # strip project name
    COMPREPLY=( $(compgen -W "${names[*]}" -- "$cur") )
    containers="$(__docker_compose_q ps -q)"
    names=$(docker 2>/dev/null inspect -f "{{if ${1:-true}}}{{range \$k, \$v := .Config.Labels}}{{if eq \$k \"com.docker.compose.service\"}}{{\$v}}{{end}}{{end}}{{end}}" $containers)
    COMPREPLY=( $(compgen -W "$names" -- "$cur") )
}

# The services for which at least one paused container exists
@@ -96,6 +108,23 @@ _docker_compose_build() {
}


_docker_compose_config() {
    COMPREPLY=( $( compgen -W "--help --quiet -q --services" -- "$cur" ) )
}


_docker_compose_create() {
    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--force-recreate --help --no-build --no-recreate" -- "$cur" ) )
            ;;
        *)
            __docker_compose_services_all
            ;;
    esac
}


_docker_compose_docker_compose() {
    case "$prev" in
        --file|-f)
@@ -105,18 +134,48 @@ _docker_compose_docker_compose() {
        --project-name|-p)
            return
            ;;
        --x-network-driver)
            COMPREPLY=( $( compgen -W "bridge host none overlay" -- "$cur" ) )
    esac

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--file -f --help -h --project-name -p --verbose --version -v" -- "$cur" ) )
            ;;
        *)
            COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) )
            ;;
    esac
}


_docker_compose_down() {
    case "$prev" in
        --rmi)
            COMPREPLY=( $( compgen -W "all local" -- "$cur" ) )
            return
            ;;
    esac

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--file -f --help -h --project-name -p --verbose --version -v --x-networking --x-network-driver" -- "$cur" ) )
            COMPREPLY=( $( compgen -W "--help --rmi --volumes -v" -- "$cur" ) )
            ;;
    esac
}


_docker_compose_events() {
    case "$prev" in
        --json)
            return
            ;;
    esac

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--help --json" -- "$cur" ) )
            ;;
        *)
            COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) )
            __docker_compose_services_all
            ;;
    esac
}
@@ -158,15 +217,6 @@ _docker_compose_logs() {
}


_docker_compose_migrate_to_labels() {
    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
            ;;
    esac
}


_docker_compose_pause() {
    case "$cur" in
        -*)
@@ -259,7 +309,7 @@ _docker_compose_run() {
    case "$prev" in
        -e)
            COMPREPLY=( $( compgen -e -- "$cur" ) )
            compopt -o nospace
            __docker_compose_nospace
            return
            ;;
        --entrypoint|--name|--user|-u)
@@ -295,7 +345,7 @@ _docker_compose_scale() {
            ;;
        *)
            COMPREPLY=( $(compgen -S "=" -W "$(___docker_compose_all_services_in_compose_file)" -- "$cur") )
            compopt -o nospace
            __docker_compose_nospace
            ;;
    esac
}
@@ -352,7 +402,7 @@ _docker_compose_up() {

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "-d --help --no-build --no-color --no-deps --no-recreate --force-recreate --timeout -t" -- "$cur" ) )
            COMPREPLY=( $( compgen -W "--abort-on-container-exit -d --force-recreate --help --no-build --no-color --no-deps --no-recreate --timeout -t" -- "$cur" ) )
            ;;
        *)
            __docker_compose_services_all
@@ -376,10 +426,13 @@ _docker_compose() {

    local commands=(
        build
        config
        create
        down
        events
        help
        kill
        logs
        migrate-to-labels
        pause
        port
        ps
@@ -414,9 +467,6 @@ _docker_compose() {
            (( counter++ ))
            compose_project="${words[$counter]}"
            ;;
        --x-network-driver)
            (( counter++ ))
            ;;
        -*)
            ;;
        *)

contrib/completion/zsh/_docker-compose
@@ -24,7 +24,7 @@
# Support for these filenames might be dropped in some future version.
__docker-compose_compose_file() {
    local file
    for file in docker-compose.y{,a}ml fig.y{,a}ml ; do
    for file in docker-compose.y{,a}ml ; do
        [ -e $file ] && {
            echo $file
            return
@@ -179,7 +179,7 @@ __docker-compose_commands() {
        local -a lines
        lines=(${(f)"$(_call_program commands docker-compose 2>&1)"})
        _docker_compose_subcommands=(${${${lines[$((${lines[(i)Commands:]} + 1)),${lines[(I) *]}]}## #}/ ##/:})
        _store_cache docker_compose_subcommands _docker_compose_subcommands
        (( $#_docker_compose_subcommands > 0 )) && _store_cache docker_compose_subcommands _docker_compose_subcommands
    fi
    _describe -t docker-compose-commands "docker-compose command" _docker_compose_subcommands
}
@@ -197,6 +197,32 @@ __docker-compose_subcommand() {
                '--pull[Always attempt to pull a newer version of the image.]' \
                '*:services:__docker-compose_services_from_build' && ret=0
            ;;
        (config)
            _arguments \
                $opts_help \
                '(--quiet -q)'{--quiet,-q}"[Only validate the configuration, don't print anything.]" \
                '--services[Print the service names, one per line.]' && ret=0
            ;;
        (create)
            _arguments \
                $opts_help \
                "(--no-recreate --no-build)--force-recreate[Recreate containers even if their configuration and image haven't changed. Incompatible with --no-recreate.]" \
                "(--force-recreate)--no-build[If containers already exist, don't recreate them. Incompatible with --force-recreate.]" \
                "(--force-recreate)--no-recreate[Don't build an image, even if it's missing]" \
                '*:services:__docker-compose_services_all' && ret=0
            ;;
        (down)
            _arguments \
                $opts_help \
                "--rmi[Remove images, type may be one of: 'all' to remove all images, or 'local' to remove only images that don't have an custom name set by the 'image' field]:type:(all local)" \
                '(-v --volumes)'{-v,--volumes}"[Remove data volumes]" && ret=0
            ;;
        (events)
            _arguments \
                $opts_help \
                '--json[Output events as a stream of json objects.]' \
                '*:services:__docker-compose_services_all' && ret=0
            ;;
        (help)
            _arguments ':subcommand:__docker-compose_commands' && ret=0
            ;;
@@ -212,11 +238,6 @@ __docker-compose_subcommand() {
                '--no-color[Produce monochrome output.]' \
                '*:services:__docker-compose_services_all' && ret=0
            ;;
        (migrate-to-labels)
            _arguments -A '-*' \
                $opts_help \
                '(-):Recreate containers to add labels' && ret=0
            ;;
        (pause)
            _arguments \
                $opts_help \
@@ -291,12 +312,13 @@ __docker-compose_subcommand() {
        (up)
            _arguments \
                $opts_help \
                '-d[Detached mode: Run containers in the background, print new container names.]' \
                '(--abort-on-container-exit)-d[Detached mode: Run containers in the background, print new container names.]' \
                '--no-color[Produce monochrome output.]' \
                "--no-deps[Don't start linked services.]" \
                "--force-recreate[Recreate containers even if their configuration and image haven't changed. Incompatible with --no-recreate.]" \
                "--no-recreate[If containers already exist, don't recreate them.]" \
                "--no-build[Don't build an image, even if it's missing]" \
                "(-d)--abort-on-container-exit[Stops all containers if any container was stopped. Incompatible with -d.]" \
                '(-t --timeout)'{-t,--timeout}"[Specify a shutdown timeout in seconds. (default: 10)]:seconds: " \
                '*:services:__docker-compose_services_all' && ret=0
            ;;
@@ -331,8 +353,6 @@ _docker-compose() {
        '(- :)'{-v,--version}'[Print version and exit]' \
        '(-f --file)'{-f,--file}'[Specify an alternate docker-compose file (default: docker-compose.yml)]:file:_files -g "*.yml"' \
        '(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \
        '--x-networking[(EXPERIMENTAL) Use new Docker networking functionality. Requires Docker 1.9 or later.]' \
        '--x-network-driver[(EXPERIMENTAL) Specify a network driver (default: "bridge"). Requires Docker 1.9 or later.]:Network Driver:(bridge host none overlay)' \
        '(-): :->command' \
        '(-)*:: :->option-or-argument' && ret=0

contrib/migration/migrate-compose-file-v1-to-v2.py (new executable file)
@@ -0,0 +1,173 @@
#!/usr/bin/env python
"""
Migrate a Compose file from the V1 format in Compose 1.5 to the V2 format
supported by Compose 1.6+
"""
from __future__ import absolute_import
from __future__ import unicode_literals

import argparse
import logging
import sys

import ruamel.yaml

from compose.config.types import VolumeSpec


log = logging.getLogger('migrate')


def migrate(content):
    data = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)

    service_names = data.keys()

    for name, service in data.items():
        warn_for_links(name, service)
        warn_for_external_links(name, service)
        rewrite_net(service, service_names)
        rewrite_build(service)
        rewrite_logging(service)
        rewrite_volumes_from(service, service_names)

    services = {name: data.pop(name) for name in data.keys()}

    data['version'] = 2
    data['services'] = services
    create_volumes_section(data)

    return data


def warn_for_links(name, service):
    links = service.get('links')
    if links:
        example_service = links[0].partition(':')[0]
        log.warn(
            "Service {name} has links, which no longer create environment "
            "variables such as {example_service_upper}_PORT. "
            "If you are using those in your application code, you should "
            "instead connect directly to the hostname, e.g. "
            "'{example_service}'."
            .format(name=name, example_service=example_service,
                    example_service_upper=example_service.upper()))


def warn_for_external_links(name, service):
    external_links = service.get('external_links')
    if external_links:
        log.warn(
            "Service {name} has external_links: {ext}, which now work "
            "slightly differently. In particular, two containers must be "
            "connected to at least one network in common in order to "
            "communicate, even if explicitly linked together.\n\n"
            "Either connect the external container to your app's default "
            "network, or connect both the external container and your "
            "service's containers to a pre-existing network. See "
            "https://docs.docker.com/compose/networking/ "
            "for more on how to do this."
            .format(name=name, ext=external_links))


def rewrite_net(service, service_names):
    if 'net' in service:
        network_mode = service.pop('net')

        # "container:<service name>" is now "service:<service name>"
        if network_mode.startswith('container:'):
            name = network_mode.partition(':')[2]
            if name in service_names:
                network_mode = 'service:{}'.format(name)

        service['network_mode'] = network_mode


def rewrite_build(service):
    if 'dockerfile' in service:
        service['build'] = {
            'context': service.pop('build'),
            'dockerfile': service.pop('dockerfile'),
        }


def rewrite_logging(service):
    if 'log_driver' in service:
        service['logging'] = {'driver': service.pop('log_driver')}
        if 'log_opt' in service:
            service['logging']['options'] = service.pop('log_opt')


def rewrite_volumes_from(service, service_names):
    for idx, volume_from in enumerate(service.get('volumes_from', [])):
        if volume_from.split(':', 1)[0] not in service_names:
            service['volumes_from'][idx] = 'container:%s' % volume_from


def create_volumes_section(data):
    named_volumes = get_named_volumes(data['services'])
    if named_volumes:
        log.warn(
            "Named volumes ({names}) must be explicitly declared. Creating a "
            "'volumes' section with declarations.\n\n"
            "For backwards-compatibility, they've been declared as external. "
            "If you don't mind the volume names being prefixed with the "
            "project name, you can remove the 'external' option from each one."
            .format(names=', '.join(list(named_volumes))))

        data['volumes'] = named_volumes


def get_named_volumes(services):
    volume_specs = [
        VolumeSpec.parse(volume)
        for service in services.values()
        for volume in service.get('volumes', [])
    ]
    names = {
        spec.external
        for spec in volume_specs
        if spec.is_named_volume
    }
    return {name: {'external': True} for name in names}


def write(stream, new_format, indent, width):
    ruamel.yaml.dump(
        new_format,
        stream,
        Dumper=ruamel.yaml.RoundTripDumper,
        indent=indent,
        width=width)


def parse_opts(args):
    parser = argparse.ArgumentParser()
    parser.add_argument("filename", help="Compose file filename.")
    parser.add_argument("-i", "--in-place", action='store_true')
    parser.add_argument(
        "--indent", type=int, default=2,
        help="Number of spaces used to indent the output yaml.")
    parser.add_argument(
        "--width", type=int, default=80,
        help="Number of spaces used as the output width.")
    return parser.parse_args()


def main(args):
    logging.basicConfig(format='\033[33m%(levelname)s:\033[37m %(message)s\n')

    opts = parse_opts(args)

    with open(opts.filename, 'r') as fh:
        new_format = migrate(fh.read())

    if opts.in_place:
        output = open(opts.filename, 'w')
    else:
        output = sys.stdout
    write(output, new_format, opts.indent, opts.width)


if __name__ == "__main__":
    main(sys.argv)
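To illustrate the `rewrite_net` step of the migration script above (the function body is copied from the diff; the sample service dicts are illustrative):

```python
def rewrite_net(service, service_names):
    # v1 `net:` becomes v2 `network_mode:`; a "container:" reference to a
    # sibling service is rewritten to the new "service:" form.
    if 'net' in service:
        network_mode = service.pop('net')

        # "container:<service name>" is now "service:<service name>"
        if network_mode.startswith('container:'):
            name = network_mode.partition(':')[2]
            if name in service_names:
                network_mode = 'service:{}'.format(name)

        service['network_mode'] = network_mode


service = {'image': 'busybox', 'net': 'container:web'}
rewrite_net(service, ['web', 'db'])
print(service)  # {'image': 'busybox', 'network_mode': 'service:web'}
```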
docker-compose.spec
@@ -18,13 +18,23 @@ exe = EXE(pyz,
          a.datas,
          [
              (
                  'compose/config/fields_schema.json',
                  'compose/config/fields_schema.json',
                  'compose/config/fields_schema_v1.json',
                  'compose/config/fields_schema_v1.json',
                  'DATA'
              ),
              (
                  'compose/config/service_schema.json',
                  'compose/config/service_schema.json',
                  'compose/config/fields_schema_v2.0.json',
                  'compose/config/fields_schema_v2.0.json',
                  'DATA'
              ),
              (
                  'compose/config/service_schema_v1.json',
                  'compose/config/service_schema_v1.json',
                  'DATA'
              ),
              (
                  'compose/config/service_schema_v2.0.json',
                  'compose/config/service_schema_v2.0.json',
                  'DATA'
              ),
              (
@@ -33,6 +43,7 @@ exe = EXE(pyz,
                  'DATA'
              )
          ],

          name='docker-compose',
          debug=False,
          strip=None,
docs/Dockerfile
@@ -1,4 +1,4 @@
FROM docs/base:hugo-github-linking
FROM docs/base:latest
MAINTAINER Mary Anthony <mary@docker.com> (@moxiegirl)

RUN svn checkout https://github.com/docker/docker/trunk/docs /docs/content/engine
@@ -9,7 +9,8 @@ RUN svn checkout https://github.com/kitematic/kitematic/trunk/docs /docs/content
RUN svn checkout https://github.com/docker/tutorials/trunk/docs /docs/content/tutorials
RUN svn checkout https://github.com/docker/opensource/trunk/docs /docs/content

ENV PROJECT=compose
# To get the git info for this repo
COPY . /src

COPY . /docs/content/compose/
COPY . /docs/content/$PROJECT/
docs/README.md
@@ -1,3 +1,12 @@
<!--[metadata]>
+++
draft = true
title = "Compose README"
description = "Compose README"
keywords = ["Docker, documentation, manual, guide, reference, api"]
+++
<![end-metadata]-->

# Contributing to the Docker Compose documentation

The documentation in this directory is part of the [https://docs.docker.com](https://docs.docker.com) website. Docker uses [the Hugo static generator](http://gohugo.io/overview/introduction/) to convert project Markdown files to a static HTML site.
@@ -49,7 +58,7 @@ The top of each Docker Compose documentation file contains TOML metadata. The me
description = "How to use Docker Compose's extends keyword to share configuration between files and projects"
keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"]
[menu.main]
parent="smn_workw_compose"
parent="workw_compose"
weight=2
+++
<![end-metadata]-->
@@ -61,7 +70,7 @@ The metadata alone has this structure:
description = "How to use Docker Compose's extends keyword to share configuration between files and projects"
keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"]
[menu.main]
parent="smn_workw_compose"
parent="workw_compose"
weight=2
+++
@ -4,8 +4,8 @@ title = "Command-line Completion"
description = "Compose CLI reference"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
[menu.main]
parent="smn_workw_compose"
weight=10
parent="workw_compose"
weight=88
+++
<![end-metadata]-->

@ -23,7 +23,7 @@ On a Mac, install with `brew install bash-completion`

Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), using e.g.

    curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
    curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose

Completion will be available upon next login.

@ -32,7 +32,7 @@ Completion will be available upon next login.

Place the completion script in your `/path/to/zsh/completion`, using e.g. `~/.zsh/completion/`

    mkdir -p ~/.zsh/completion
    curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose
    curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose

Include the directory in your `$fpath`, e.g. by adding in `~/.zshrc`
@ -1,36 +1,76 @@
<!--[metadata]>
+++
title = "Compose file reference"
title = "Compose File Reference"
description = "Compose file reference"
keywords = ["fig, composition, compose, docker"]
aliases = ["/compose/yml"]
[menu.main]
parent="smn_compose_ref"
parent="workw_compose"
weight=70
+++
<![end-metadata]-->

# Compose file reference

The compose file is a [YAML](http://yaml.org/) file where all the top level
keys are the name of a service, and the values are the service definition.
The default path for a compose file is `./docker-compose.yml`.
The Compose file is a [YAML](http://yaml.org/) file defining
[services](#service-configuration-reference),
[networks](#network-configuration-reference) and
[volumes](#volume-configuration-reference).
The default path for a Compose file is `./docker-compose.yml`.

Each service defined in `docker-compose.yml` must specify exactly one of
`image` or `build`. Other keys are optional, and are analogous to their
`docker run` command-line counterparts.
A service definition contains configuration which will be applied to each
container started for that service, much like passing command-line parameters to
`docker run`. Likewise, network and volume definitions are analogous to
`docker network create` and `docker volume create`.

As with `docker run`, options specified in the Dockerfile (e.g., `CMD`,
`EXPOSE`, `VOLUME`, `ENV`) are respected by default - you don't need to
specify them again in `docker-compose.yml`.

You can use environment variables in configuration values with a Bash-like
`${VARIABLE}` syntax - see [variable substitution](#variable-substitution) for
full details.
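
The paragraph above can be illustrated with a short sketch; the `TAG` variable and the `webapp` image are placeholder names, not part of this reference:

```yaml
# Picks the image tag from a TAG environment variable set on the host,
# e.g. run with: TAG=v1.5 docker-compose up
web:
  image: "webapp:${TAG}"
```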

## Service configuration reference

> **Note:** There are two versions of the Compose file format – version 1 (the
> legacy format, which does not support volumes or networks) and version 2 (the
> most up-to-date). For more information, see the [Versioning](#versioning)
> section.

This section contains a list of all configuration options supported by a service
definition.

### build

Configuration options that are applied at build time.

`build` can be specified either as a string containing a path to the build
context, or an object with the path specified under [context](#context) and
optionally [dockerfile](#dockerfile) and [args](#args).

    build: ./dir

    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1

> **Note**: In the [version 1 file format](#version-1), `build` is different in
> two ways:
>
> - Only the string form (`build: .`) is allowed - not the object form.
> - Using `build` together with `image` is not allowed. Attempting to do so
>   results in an error.

#### context

> [Version 2 file format](#version-2) only. In version 1, just use
> [build](#build).

Either a path to a directory containing a Dockerfile, or a URL to a git repository.

When the value supplied is a relative path, it is interpreted as relative to the
@ -39,10 +79,50 @@ sent to the Docker daemon.

Compose will build and tag it with a generated name, and use that image thereafter.

    build: /path/to/build/dir
    build:
      context: ./dir

Using `build` together with `image` is not allowed. Attempting to do so results in
an error.

#### dockerfile

Alternate Dockerfile.

Compose will use an alternate file to build with. A build path must also be
specified.

    build:
      context: .
      dockerfile: Dockerfile-alternate

> **Note**: In the [version 1 file format](#version-1), `dockerfile` is
> different in two ways:
>
> - It appears alongside `build`, not as a sub-option:
>
>       build: .
>       dockerfile: Dockerfile-alternate
> - Using `dockerfile` together with `image` is not allowed. Attempting to do
>   so results in an error.

#### args

> [Version 2 file format](#version-2) only.

Add build arguments. You can use either an array or a dictionary. Any
boolean values (true, false, yes, no) need to be enclosed in quotes to ensure
they are not converted to True or False by the YAML parser.

Build arguments with only a key are resolved to their environment value on the
machine Compose is running on.

    build:
      args:
        buildno: 1
        user: someuser

    build:
      args:
        - buildno=1
        - user=someuser

### cap_add, cap_drop

@ -62,6 +142,10 @@ Override the default command.

    command: bundle exec thin -p 3000

The command can also be a list, in a manner similar to [dockerfile](https://docs.docker.com/engine/reference/builder/#cmd):

    command: [bundle, exec, thin, -p, 3000]

### cgroup_parent

Specify an optional parent cgroup for the container.
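
A minimal sketch of the option; the cgroup name below is a placeholder:

```yaml
# Place the service's containers under an existing parent cgroup
cgroup_parent: m-executor-abcd
```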

@ -86,6 +170,31 @@ client create option.

    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"

### depends_on

Express dependency between services, which has two effects:

- `docker-compose up` will start services in dependency order. In the following
  example, `db` and `redis` will be started before `web`.

- `docker-compose up SERVICE` will automatically include `SERVICE`'s
  dependencies. In the following example, `docker-compose up web` will also
  create and start `db` and `redis`.

Simple example:

    version: '2'
    services:
      web:
        build: .
        depends_on:
          - db
          - redis
      redis:
        image: redis
      db:
        image: postgres

### dns

Custom DNS servers. Can be a single value or a list.
@ -104,17 +213,22 @@ Custom DNS search domains. Can be a single value or a list.

    - dc1.example.com
    - dc2.example.com

### dockerfile
### entrypoint

Alternate Dockerfile.
Override the default entrypoint.

Compose will use an alternate file to build with. A build path must also be
specified using the `build` key.

    entrypoint: /code/entrypoint.sh

    build: /path/to/build/dir
    dockerfile: Dockerfile-alternate

The entrypoint can also be a list, in a manner similar to [dockerfile](https://docs.docker.com/engine/reference/builder/#entrypoint):

    entrypoint:
      - php
      - -d
      - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so
      - -d
      - memory_limit=-1
      - vendor/bin/phpunit

Using `dockerfile` together with `image` is not allowed. Attempting to do so results in an error.

### env_file
@ -207,6 +321,10 @@ container name and the link alias (`CONTAINER:ALIAS`).

    - project_db_1:mysql
    - project_db_1:postgresql

> **Note:** If you're using the [version 2 file format](#version-2), the
> externally-created containers must be connected to at least one of the same
> networks as the service which is linking to them.

### extra_hosts

Add hostname mappings. Use the same values as the docker client `--add-host` parameter.
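
For illustration, a short sketch (the hostnames and IP addresses are placeholders):

```yaml
# Adds entries for these hosts to the container's /etc/hosts
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"
```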

@ -231,7 +349,7 @@ pull if it doesn't exist locally.

### labels

Add metadata to containers using [Docker labels](http://docs.docker.com/userguide/labels-custom-metadata/). You can use either an array or a dictionary.
Add metadata to containers using [Docker labels](https://docs.docker.com/engine/userguide/labels-custom-metadata/). You can use either an array or a dictionary.

It's recommended that you use reverse-DNS notation to prevent your labels from conflicting with those used by other software.
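
As a sketch of the dictionary form (the label names and values below are placeholders following the reverse-DNS convention):

```yaml
labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
```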

@ -248,57 +366,114 @@ It's recommended that you use reverse-DNS notation to prevent your labels from c

### links

Link to containers in another service. Either specify both the service name and
the link alias (`SERVICE:ALIAS`), or just the service name (which will also be
used for the alias).
a link alias (`SERVICE:ALIAS`), or just the service name.

    links:
      - db
      - db:database
      - redis

    web:
      links:
        - db
        - db:database
        - redis

An entry with the alias' name will be created in `/etc/hosts` inside containers
for this service, e.g:
Containers for the linked service will be reachable at a hostname identical to
the alias, or the service name if no alias was specified.

    172.17.2.186  db
    172.17.2.186  database
    172.17.2.187  redis

Links also express dependency between services in the same way as
[depends_on](#depends-on), so they determine the order of service startup.

Environment variables will also be created - see the [environment variable
reference](env.md) for details.

> **Note:** If you define both links and [networks](#networks), services with
> links between them must share at least one network in common in order to
> communicate.

### log_driver
### logging

Specify a logging driver for the service's containers, as with the ``--log-driver``
option for docker run ([documented here](https://docs.docker.com/reference/logging/overview/)).

> [Version 2 file format](#version-2) only. In version 1, use
> [log_driver](#log_driver) and [log_opt](#log_opt).

Logging configuration for the service.

    logging:
      driver: syslog
      options:
        syslog-address: "tcp://192.168.0.42:123"

The `driver` name specifies a logging driver for the service's
containers, as with the ``--log-driver`` option for docker run
([documented here](https://docs.docker.com/engine/reference/logging/overview/)).

The default value is json-file.

    log_driver: "json-file"
    log_driver: "syslog"
    log_driver: "none"

    driver: "json-file"
    driver: "syslog"
    driver: "none"

> **Note:** Only the `json-file` driver makes the logs available directly from
> `docker-compose up` and `docker-compose logs`. Using any other driver will not
> print any logs.

Specify logging options for the logging driver with the ``options`` key, as with the ``--log-opt`` option for `docker run`.

Logging options are key-value pairs. An example of `syslog` options:

    driver: "syslog"
    options:
      syslog-address: "tcp://192.168.0.42:123"

### log_driver

> [Version 1 file format](#version-1) only. In version 2, use
> [logging](#logging).

Specify a log driver. The default is `json-file`.

    log_driver: syslog

### log_opt

> [Version 1 file format](#version-1) only. In version 2, use
> [logging](#logging).

Specify logging options with `log_opt` for the logging driver, as with the ``--log-opt`` option for `docker run`.

Logging options are key value pairs. An example of `syslog` options:
Specify logging options as key-value pairs. An example of `syslog` options:

    log_driver: "syslog"
    log_opt:
      syslog-address: "tcp://192.168.0.42:123"
### net

Networking mode. Use the same values as the docker client `--net` parameter.

> [Version 1 file format](#version-1) only. In version 2, use
> [network_mode](#network_mode).

Network mode. Use the same values as the docker client `--net` parameter.
The `container:...` form can take a service name instead of a container name or
id.

    net: "bridge"
    net: "none"
    net: "container:[name or id]"
    net: "host"
    net: "none"
    net: "container:[service name or container name/id]"

### network_mode

> [Version 2 file format](#version-2) only. In version 1, use [net](#net).

Network mode. Use the same values as the docker client `--net` parameter, plus
the special form `service:[service name]`.

    network_mode: "bridge"
    network_mode: "host"
    network_mode: "none"
    network_mode: "service:[service name]"
    network_mode: "container:[container name/id]"

### networks

> [Version 2 file format](#version-2) only. In version 1, use [net](#net).

Networks to join, referencing entries under the
[top-level `networks` key](#network-configuration-reference).

    networks:
      - some-network
      - other-network

### pid
@ -332,9 +507,17 @@ port (a random host port will be chosen).

Override the default labeling scheme for each container.

    security_opt:
      - label:user:USER
      - label:role:ROLE

### stop_signal

Sets an alternative signal to stop the container. By default `stop` uses
SIGTERM. Setting an alternative signal using `stop_signal` will cause
`stop` to send that signal instead.

    stop_signal: SIGUSR1

### ulimits

@ -342,37 +525,50 @@ Override the default ulimits for a container. You can either specify a single
limit as an integer or soft/hard limits as a mapping.

    ulimits:
      nproc: 65535
      nofile:
        soft: 20000
        hard: 40000

### volumes, volume\_driver

Mount paths as volumes, optionally specifying a path on the host machine
(`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`).

    volumes:
      - /var/lib/mysql
      - ./cache:/tmp/cache
      - ~/configs:/etc/configs/:ro

Mount paths or named volumes, optionally specifying a path on the host machine
(`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`). Named volumes can
be specified with the
[top-level `volumes` key](#volume-configuration-reference), but this is
optional - the Docker Engine will create the volume if it doesn't exist.

You can mount a relative path on the host, which will expand relative to
the directory of the Compose configuration file being used. Relative paths
should always begin with `.` or `..`.

    volumes:
      # Just specify a path and let the Engine create a volume
      - /var/lib/mysql

      # Specify an absolute path mapping
      - /opt/data:/var/lib/mysql

      # Path on the host, relative to the Compose file
      - ./cache:/tmp/cache

      # User-relative path
      - ~/configs:/etc/configs/:ro

      # Named volume
      - datavolume:/var/lib/mysql

If you use a volume name (instead of a volume path), you may also specify
a `volume_driver`.

    volume_driver: mydriver

> Note: No path expansion will be done if you have also specified a
> `volume_driver`.

See [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/) and
[Volume Plugins](https://docs.docker.com/extend/plugins_volume/) for more
See [Docker Volumes](https://docs.docker.com/engine/userguide/dockervolumes/) and
[Volume Plugins](https://docs.docker.com/engine/extend/plugins_volume/) for more
information.

### volumes_from

@ -382,18 +578,28 @@ specifying read-only access(``ro``) or read-write(``rw``).

    volumes_from:
      - service_name
      - container_name
      - service_name:rw
      - service_name:ro
      - container:container_name
      - container:container_name:rw

### cpu\_shares, cpuset, domainname, entrypoint, hostname, ipc, mac\_address, mem\_limit, memswap\_limit, privileged, read\_only, restart, stdin\_open, tty, user, working\_dir

> **Note:** The `container:...` formats are only supported in the
> [version 2 file format](#version-2). In [version 1](#version-1), you can use
> container names without marking them as such:
>
> - service_name
> - service_name:ro
> - container_name
> - container_name:rw

### cpu\_shares, cpu\_quota, cpuset, domainname, hostname, ipc, mac\_address, mem\_limit, memswap\_limit, privileged, read\_only, restart, stdin\_open, tty, user, working\_dir

Each of these is a single value, analogous to its
[docker run](https://docs.docker.com/reference/run/) counterpart.
[docker run](https://docs.docker.com/engine/reference/run/) counterpart.

    cpu_shares: 73
    cpu_quota: 50000
    cpuset: 0,1

    entrypoint: /code/entrypoint.sh
    user: postgresql
    working_dir: /code
@ -412,6 +618,355 @@ Each of these is a single value, analogous to its

    stdin_open: true
    tty: true

## Volume configuration reference

While it is possible to declare volumes on the fly as part of the service
declaration, this section allows you to create named volumes that can be
reused across multiple services (without relying on `volumes_from`), and are
easily retrieved and inspected using the docker command line or API.
See the [docker volume](http://docs.docker.com/reference/commandline/volume/)
subcommand documentation for more information.

### driver

Specify which volume driver should be used for this volume. Defaults to
`local`. The Docker Engine will return an error if the driver is not available.

    driver: foobar

### driver_opts

Specify a list of options as key-value pairs to pass to the driver for this
volume. Those options are driver-dependent - consult the driver's
documentation for more information. Optional.

    driver_opts:
      foo: "bar"
      baz: 1

### external

If set to `true`, specifies that this volume has been created outside of
Compose. `docker-compose up` will not attempt to create it, and will raise
an error if it doesn't exist.

`external` cannot be used in conjunction with other volume configuration keys
(`driver`, `driver_opts`).

In the example below, instead of attempting to create a volume called
`[projectname]_data`, Compose will look for an existing volume simply
called `data` and mount it into the `db` service's containers.

    version: '2'

    services:
      db:
        image: postgres
        volumes:
          - data:/var/lib/postgres/data

    volumes:
      data:
        external: true

You can also specify the name of the volume separately from the name used to
refer to it within the Compose file:

    volumes:
      data:
        external:
          name: actual-name-of-volume

## Network configuration reference

The top-level `networks` key lets you specify networks to be created. For a full
explanation of Compose's use of Docker networking features, see the
[Networking guide](networking.md).

### driver

Specify which driver should be used for this network.

The default driver depends on how the Docker Engine you're using is configured,
but in most instances it will be `bridge` on a single host and `overlay` on a
Swarm.

The Docker Engine will return an error if the driver is not available.

    driver: overlay

### driver_opts

Specify a list of options as key-value pairs to pass to the driver for this
network. Those options are driver-dependent - consult the driver's
documentation for more information. Optional.

    driver_opts:
      foo: "bar"
      baz: 1

### ipam

Specify custom IPAM config. This is an object with several properties, each of
which is optional:

- `driver`: Custom IPAM driver, instead of the default.
- `config`: A list with zero or more config blocks, each containing any of
  the following keys:
    - `subnet`: Subnet in CIDR format that represents a network segment
    - `ip_range`: Range of IPs from which to allocate container IPs
    - `gateway`: IPv4 or IPv6 gateway for the master subnet
    - `aux_addresses`: Auxiliary IPv4 or IPv6 addresses used by Network driver,
      as a mapping from hostname to IP

A full example:

    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
          ip_range: 172.28.5.0/24
          gateway: 172.28.5.254
          aux_addresses:
            host1: 172.28.1.5
            host2: 172.28.1.6
            host3: 172.28.1.7

### external

If set to `true`, specifies that this network has been created outside of
Compose. `docker-compose up` will not attempt to create it, and will raise
an error if it doesn't exist.

`external` cannot be used in conjunction with other network configuration keys
(`driver`, `driver_opts`, `ipam`).

In the example below, `proxy` is the gateway to the outside world. Instead of
attempting to create a network called `[projectname]_outside`, Compose will
look for an existing network simply called `outside` and connect the `proxy`
service's containers to it.

    version: '2'

    services:
      proxy:
        build: ./proxy
        networks:
          - outside
          - default
      app:
        build: ./app
        networks:
          - default

    networks:
      outside:
        external: true

You can also specify the name of the network separately from the name used to
refer to it within the Compose file:

    networks:
      outside:
        external:
          name: actual-name-of-network
## Versioning

There are two versions of the Compose file format:

- Version 1, the legacy format. This is specified by omitting a `version` key at
  the root of the YAML.
- Version 2, the recommended format. This is specified with a `version: '2'` entry
  at the root of the YAML.

To move your project from version 1 to 2, see the [Upgrading](#upgrading)
section.

> **Note:** If you're using
> [multiple Compose files](extends.md#different-environments) or
> [extending services](extends.md#extending-services), each file must be of the
> same version - you cannot mix version 1 and 2 in a single project.

Several things differ depending on which version you use:

- The structure and permitted configuration keys
- The minimum Docker Engine version you must be running
- Compose's behaviour with regards to networking

These differences are explained below.

### Version 1

Compose files that do not declare a version are considered "version 1". In
those files, all the [services](#service-configuration-reference) are declared
at the root of the document.

Version 1 is supported by **Compose up to 1.6.x**. It will be deprecated in a
future Compose release.

Version 1 files cannot declare named
[volumes](#volume-configuration-reference), [networks](networking.md) or
[build arguments](#args).

Example:

    web:
      build: .
      ports:
        - "5000:5000"
      volumes:
        - .:/code
      links:
        - redis
    redis:
      image: redis
### Version 2

Compose files using the version 2 syntax must indicate the version number at
the root of the document. All [services](#service-configuration-reference)
must be declared under the `services` key.

Version 2 files are supported by **Compose 1.6.0+** and require a Docker Engine
of version **1.10.0+**.

Named [volumes](#volume-configuration-reference) can be declared under the
`volumes` key, and [networks](#network-configuration-reference) can be declared
under the `networks` key.

Simple example:

    version: '2'
    services:
      web:
        build: .
        ports:
          - "5000:5000"
        volumes:
          - .:/code
      redis:
        image: redis

A more extended example, defining volumes and networks:

    version: '2'
    services:
      web:
        build: .
        ports:
          - "5000:5000"
        volumes:
          - .:/code
        networks:
          - front-tier
          - back-tier
      redis:
        image: redis
        volumes:
          - redis-data:/var/lib/redis
        networks:
          - back-tier
    volumes:
      redis-data:
        driver: local
    networks:
      front-tier:
        driver: bridge
      back-tier:
        driver: bridge

### Upgrading

In the majority of cases, moving from version 1 to 2 is a very simple process:

1. Indent the whole file by one level and put a `services:` key at the top.
2. Add a `version: '2'` line at the top of the file.
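
Those two steps can be sketched as a before/after pair (the `web` service is illustrative):

```yaml
# Version 1
web:
  build: .
  ports:
    - "5000:5000"

# Version 2: the same service, indented under `services:` with a version key
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
```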

It's more complicated if you're using particular configuration features:

- `dockerfile`: This now lives under the `build` key:

        build:
          context: .
          dockerfile: Dockerfile-alternate

- `log_driver`, `log_opt`: These now live under the `logging` key:

        logging:
          driver: syslog
          options:
            syslog-address: "tcp://192.168.0.42:123"

- `links` with environment variables: As documented in the
  [environment variables reference](link-env-deprecated.md), environment
  variables created by links have been deprecated for some time. In the new
  Docker network system, they have been removed. You should either connect
  directly to the appropriate hostname or set the relevant environment variable
  yourself, using the link hostname:

        web:
          links:
            - db
          environment:
            - DB_PORT=tcp://db:5432

- `external_links`: Compose uses Docker networks when running version 2
  projects, so links behave slightly differently. In particular, two
  containers must be connected to at least one network in common in order to
  communicate, even if explicitly linked together.

  Either connect the external container to your app's
  [default network](networking.md), or connect both the external container and
  your service's containers to an
  [external network](networking.md#using-a-pre-existing-network).

- `net`: This is now replaced by [network_mode](#network_mode):

        net: host    ->  network_mode: host
        net: bridge  ->  network_mode: bridge
        net: none    ->  network_mode: none

  If you're using `net: "container:[service name]"`, you must now use
  `network_mode: "service:[service name]"` instead.

        net: "container:web"  ->  network_mode: "service:web"

  If you're using `net: "container:[container name/id]"`, the value does not
  need to change.

        net: "container:cont-name"  ->  network_mode: "container:cont-name"
        net: "container:abc12345"   ->  network_mode: "container:abc12345"

- `volumes` with named volumes: these must now be explicitly declared in a
  top-level `volumes` section of your Compose file. If a service mounts a
  named volume called `data`, you must declare a `data` volume in your
  top-level `volumes` section. The whole file might look like this:

        version: '2'
        services:
          db:
            image: postgres
            volumes:
              - data:/var/lib/postgresql/data
        volumes:
          data: {}

  By default, Compose creates a volume whose name is prefixed with your
  project name. If you want it to just be called `data`, declare it as
  external:

        volumes:
          data:
            external: true

## Variable substitution

Your configuration options can contain environment variables. Compose uses the
|
@ -1,16 +1,16 @@
<!--[metadata]>
+++
title = "Quickstart Guide: Compose and Django"
title = "Quickstart: Compose and Django"
description = "Getting started with Docker Compose and Django"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
parent="workw_compose"
weight=4
+++
<![end-metadata]-->


# Quickstart Guide: Compose and Django
# Quickstart: Compose and Django

This quick-start guide demonstrates how to use Compose to set up and run a
simple Django/PostgreSQL app. Before starting, you'll need to have
@ -30,8 +30,8 @@ and a `docker-compose.yml` file.
The Dockerfile defines an application's image content via one or more build
commands that configure that image. Once built, you can run the image in a
container. For more information on `Dockerfiles`, see the [Docker user
guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile)
and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
guide](https://docs.docker.com/engine/userguide/dockerimages/#building-an-image-from-a-dockerfile)
and the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).

3. Add the following content to the `Dockerfile`.

@ -144,7 +144,7 @@ In this section, you set up the database connection for Django.
    }

These settings are determined by the
[postgres](https://registry.hub.docker.com/_/postgres/) Docker image
[postgres](https://hub.docker.com/_/postgres/) Docker image
specified in `docker-compose.yml`.

3. Save and close the file.
@ -171,7 +171,7 @@ In this section, you set up the database connection for Django.

## More Compose documentation

- [User guide](../index.md)
- [User guide](index.md)
- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Rails](rails.md)
@ -1,11 +1,11 @@
<!--[metadata]>
+++
title = "Extending services in Compose"
title = "Extending Services in Compose"
description = "How to use Docker Compose's extends keyword to share configuration between files and projects"
keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"]
[menu.main]
parent="smn_workw_compose"
weight=2
parent="workw_compose"
weight=20
+++
<![end-metadata]-->

@ -32,17 +32,14 @@ contains your base configuration. The override file, as its name implies, can
contain configuration overrides for existing services or entirely new
services.

If a service is defined in both files, Compose merges the configurations using
the same rules as the `extends` field (see [Adding and overriding
configuration](#adding-and-overriding-configuration)), with one exception. If a
service contains `links` or `volumes_from` those fields are copied over and
replace any values in the original service, in the same way single-valued fields
are copied.
If a service is defined in both files, Compose merges the configurations using
the rules described in [Adding and overriding
configuration](#adding-and-overriding-configuration).

To use multiple override files, or an override file with a different name, you
can use the `-f` option to specify the list of files. Compose merges files in
the order they're specified on the command line. See the [`docker-compose`
command reference](./reference/docker-compose.md) for more information about
command reference](./reference/overview.md) for more information about
using `-f`.

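To illustrate the merge, here is a minimal sketch of a base file and an override file (the service name, image, and values are invented for the example):

```yaml
# docker-compose.yml
web:
  image: example/web
  ports:
    - "80:80"
```

```yaml
# docker-compose.override.yml
web:
  environment:
    - DEBUG=1
```

Running `docker-compose up` with both files present starts `web` with the image and port mapping from the base file plus the `DEBUG` environment variable from the override.
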
When you use multiple configuration files, you must make sure all paths in the

@ -176,10 +173,12 @@ is useful if you have several services that reuse a common set of configuration
options. Using `extends` you can define a common set of service options in one
place and refer to it from anywhere.

> **Note:** `links` and `volumes_from` are never shared between services using
> `extends`. See
> [Adding and overriding configuration](#adding-and-overriding-configuration)
> for more information.
> **Note:** `links`, `volumes_from`, and `depends_on` are never shared between
> services using `extends`. These exceptions exist to avoid
> implicit dependencies—you always define `links` and `volumes_from`
> locally. This ensures dependencies between services are clearly visible when
> reading the current file. Defining these locally also ensures changes to the
> referenced file don't result in breakage.

### Understand the extends configuration

|
||||
|
||||
## Adding and overriding configuration
|
||||
|
||||
Compose copies configurations from the original service over to the local one,
|
||||
**except** for `links` and `volumes_from`. These exceptions exist to avoid
|
||||
implicit dependencies—you always define `links` and `volumes_from`
|
||||
locally. This ensures dependencies between services are clearly visible when
|
||||
reading the current file. Defining these locally also ensures changes to the
|
||||
referenced file don't result in breakage.
|
||||
|
||||
Compose copies configurations from the original service over to the local one.
|
||||
If a configuration option is defined in both the original service the local
|
||||
service, the local value *replaces* or *extends* the original value.
|
||||
|
||||
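As a sketch of these rules (the file, service names, and values are invented for the example), a local service extending a common one might look like this:

```yaml
# common.yml defines:
#
#   webapp:
#     image: example/webapp
#     environment:
#       - DEBUG=false
#
# docker-compose.yml extends it:
web:
  extends:
    file: common.yml
    service: webapp
  environment:
    - DEBUG=true    # a keyed entry defined in both places: the local value replaces the original
  ports:
    - "8000:8000"   # an option only defined locally is simply added
```
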
@ -365,7 +358,7 @@ In the case of `environment`, `labels`, `volumes` and `devices`, Compose

## Compose documentation

- [User guide](/)
- [User guide](index.md)
- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
@ -4,8 +4,9 @@ title = "Frequently Asked Questions"
description = "Docker Compose FAQ"
keywords = "documentation, docs, docker, compose, faq"
[menu.main]
parent="smn_workw_compose"
weight=9
identifier="faq.compose"
parent="workw_compose"
weight=90
+++
<![end-metadata]-->

@ -50,8 +51,8 @@ handling `SIGTERM` properly.
Compose uses the project name to create unique identifiers for all of a
project's containers and other resources. To run multiple copies of a project,
set a custom project name using the [`-p` command line
option](./reference/docker-compose.md) or the [`COMPOSE_PROJECT_NAME`
environment variable](./reference/overview.md#compose-project-name).
option](./reference/overview.md) or the [`COMPOSE_PROJECT_NAME`
environment variable](./reference/envvars.md#compose-project-name).

## What's the difference between `up`, `run`, and `start`?

@ -4,8 +4,8 @@ title = "Getting Started"
description = "Getting started with Docker Compose"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
weight=3
parent="workw_compose"
weight=-85
+++
<![end-metadata]-->

@ -77,7 +77,7 @@ dependencies the Python application requires, including Python itself.
* Install the Python dependencies.
* Set the default command for the container to `python app.py`

For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/engine/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).

2. Build the image.

171	docs/index.md
@ -1,60 +1,21 @@
<!--[metadata]>
+++
title = "Overview of Docker Compose"
title = "Docker Compose"
description = "Introduction and Overview of Compose"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
identifier="workw_compose"
weight=-70
+++
<![end-metadata]-->


# Overview of Docker Compose
# Docker Compose

Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services
from your configuration. To learn more about all the features of Compose
see [the list of features](#features).
Compose is a tool for defining and running multi-container Docker applications. To learn more about Compose refer to the following documentation:

Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](#common-use-cases).

Using Compose is basically a three-step process.

1. Define your app's environment with a `Dockerfile` so it can be
   reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so
   they can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.

A `docker-compose.yml` looks like this:

    web:
      build: .
      ports:
        - "5000:5000"
      volumes:
        - .:/code
      links:
        - redis
    redis:
      image: redis

For more information about the Compose file, see the
[Compose file reference](compose-file.md)

Compose has commands for managing the whole lifecycle of your application:

* Start, stop and rebuild services
* View the status of running services
* Stream the log output of running services
* Run a one-off command on a service

## Compose documentation

- [Installing Compose](install.md)
- [Compose Overview](overview.md)
- [Install Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
@ -63,124 +24,6 @@ Compose has commands for managing the whole lifecycle of your application:
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

## Features

The features of Compose that make it effective are:

* [Multiple isolated environments on a single host](#Multiple-isolated-environments-on-a-single-host)
* [Preserve volume data when containers are created](#preserve-volume-data-when-containers-are-created)
* [Only recreate containers that have changed](#only-recreate-containers-that-have-changed)
* [Variables and moving a composition between environments](#variables-and-moving-a-composition-between-environments)

#### Multiple isolated environments on a single host

Compose uses a project name to isolate environments from each other. You can use
this project name to:

* on a dev host, to create multiple copies of a single environment (ex: you want
  to run a stable copy for each feature branch of a project)
* on a CI server, to keep builds from interfering with each other, you can set
  the project name to a unique build number
* on a shared host or dev host, to prevent different projects which may use the
  same service names, from interfering with each other

The default project name is the basename of the project directory. You can set
a custom project name by using the
[`-p` command line option](./reference/docker-compose.md) or the
[`COMPOSE_PROJECT_NAME` environment variable](./reference/overview.md#compose-project-name).

#### Preserve volume data when containers are created

Compose preserves all volumes used by your services. When `docker-compose up`
runs, if it finds any containers from previous runs, it copies the volumes from
the old container to the new container. This process ensures that any data
you've created in volumes isn't lost.


#### Only recreate containers that have changed

Compose caches the configuration used to create a container. When you
restart a service that has not changed, Compose re-uses the existing
containers. Re-using containers means that you can make changes to your
environment very quickly.


#### Variables and moving a composition between environments

Compose supports variables in the Compose file. You can use these variables
to customize your composition for different environments, or different users.
See [Variable substitution](compose-file.md#variable-substitution) for more
details.

You can extend a Compose file using the `extends` field or by creating multiple
Compose files. See [extends](extends.md) for more details.


## Common Use Cases

Compose can be used in many different ways. Some common use cases are outlined
below.

### Development environments

When you're developing software, the ability to run an application in an
isolated environment and interact with it is crucial. The Compose command
line tool can be used to create the environment and interact with it.

The [Compose file](compose-file.md) provides a way to document and configure
all of the application's service dependencies (databases, queues, caches,
web service APIs, etc). Using the Compose command line tool you can create
and start one or more containers for each dependency with a single command
(`docker-compose up`).

Together, these features provide a convenient way for developers to get
started on a project. Compose can reduce a multi-page "developer getting
started guide" to a single machine readable Compose file and a few commands.

### Automated testing environments

An important part of any Continuous Deployment or Continuous Integration process
is the automated test suite. Automated end-to-end testing requires an
environment in which to run tests. Compose provides a convenient way to create
and destroy isolated testing environments for your test suite. By defining the full
environment in a [Compose file](compose-file.md) you can create and destroy these
environments in just a few commands:

    $ docker-compose up -d
    $ ./run_tests
    $ docker-compose stop
    $ docker-compose rm -f

### Single host deployments

Compose has traditionally been focused on development and testing workflows,
but with each release we're making progress on more production-oriented features.
You can use Compose to deploy to a remote Docker Engine. The Docker Engine may
be a single instance provisioned with
[Docker Machine](https://docs.docker.com/machine/) or an entire
[Docker Swarm](https://docs.docker.com/swarm/) cluster.

For details on using production-oriented features, see
[compose in production](production.md) in this documentation.


## Release Notes

To see a detailed list of changes for past and current releases of Docker
Compose, please refer to the
[CHANGELOG](https://github.com/docker/compose/blob/master/CHANGELOG.md).

## Getting help

Docker Compose is under active development. If you need help, would like to
contribute, or simply want to talk about the project with like-minded
individuals, we have a number of open channels for communication.

* To report bugs or file feature requests: please use the [issue tracker on Github](https://github.com/docker/compose/issues).

* To talk about the project with people in real time: please join the
  `#docker-compose` channel on freenode IRC.

* To contribute code or documentation changes: please submit a [pull request on Github](https://github.com/docker/compose/pulls).

For more information and resources, please visit the [Getting Help project page](https://docs.docker.com/project/get-help/).

@ -1,11 +1,11 @@
<!--[metadata]>
+++
title = "Docker Compose"
title = "Install Compose"
description = "How to install Docker Compose"
keywords = ["compose, orchestration, install, installation, docker, documentation"]
[menu.main]
parent="mn_install"
weight=4
parent="workw_compose"
weight=-90
+++
<![end-metadata]-->

@ -20,11 +20,11 @@ To install Compose, do the following:

1. Install Docker Engine version 1.7.1 or greater:

    * <a href="https://docs.docker.com/installation/mac/" target="_blank">Mac OS X installation</a> (Toolbox installation includes both Engine and Compose)
    * <a href="https://docs.docker.com/engine/installation/mac/" target="_blank">Mac OS X installation</a> (Toolbox installation includes both Engine and Compose)

    * <a href="https://docs.docker.com/installation/ubuntulinux/" target="_blank">Ubuntu installation</a>
    * <a href="https://docs.docker.com/engine/installation/ubuntulinux/" target="_blank">Ubuntu installation</a>

    * <a href="https://docs.docker.com/installation/" target="_blank">other system installations</a>
    * <a href="https://docs.docker.com/engine/installation/" target="_blank">other system installations</a>

2. Mac OS X users are done installing. Others should continue to the next step.

@ -39,7 +39,7 @@ which the release page specifies, in your terminal.

The following is an example command illustrating the format:

    curl -L https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    curl -L https://github.com/docker/compose/releases/download/1.6.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose

If you have problems installing with `curl`, see
[Alternative Install Options](#alternative-install-options).

@ -54,7 +54,7 @@ which the release page specifies, in your terminal.
7. Test the installation.

    $ docker-compose --version
    docker-compose version: 1.5.2
    docker-compose version: 1.6.0


## Alternative install options

@ -77,7 +77,7 @@ to get started.
Compose can also be run inside a container, from a small bash script wrapper.
To install compose as a container run:

    $ curl -L https://github.com/docker/compose/releases/download/1.5.2/run.sh > /usr/local/bin/docker-compose
    $ curl -L https://github.com/docker/compose/releases/download/1.6.0/run.sh > /usr/local/bin/docker-compose
    $ chmod +x /usr/local/bin/docker-compose

## Master builds

@ -98,7 +98,7 @@ be recreated with labels added.
If Compose detects containers that were created without labels, it will refuse
to run so that you don't end up with two sets of them. If you want to keep using
your existing containers (for example, because they have data volumes you want
to preserve) you can migrate them with the following command:
to preserve) you can use Compose 1.5.x to migrate them with the following command:

    $ docker-compose migrate-to-labels

@ -127,7 +127,7 @@ To uninstall Docker Compose if you installed using `pip`:

## Where to go next

- [User guide](/)
- [User guide](index.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
@ -1,17 +1,20 @@
<!--[metadata]>
+++
title = "Compose environment variables reference"
title = "Link Environment Variables"
description = "Compose CLI reference"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
aliases = ["/compose/env"]
[menu.main]
parent="smn_compose_ref"
weight=3
parent="workw_compose"
weight=89
+++
<![end-metadata]-->

# Compose environment variables reference
# Link environment variables reference

**Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](compose-file.md#links) for details.
> **Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](compose-file.md#links) for details.
>
> Environment variables will only be populated if you're using the [legacy version 1 Compose file format](compose-file.md#versioning).

Compose uses [Docker links] to expose services' containers to one another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container.

@ -35,7 +38,7 @@ Protocol (tcp or udp), e.g. `DB_PORT_5432_TCP_PROTO=tcp`
<b><i>name</i>\_NAME</b><br>
Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1`
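For instance, a hypothetical `web` service linked to a `db` service running Postgres might see variables along these lines (the IP address and container names are invented for illustration):

    DB_PORT=tcp://172.17.0.5:5432
    DB_PORT_5432_TCP=tcp://172.17.0.5:5432
    DB_PORT_5432_TCP_ADDR=172.17.0.5
    DB_PORT_5432_TCP_PORT=5432
    DB_PORT_5432_TCP_PROTO=tcp
    DB_1_NAME=/myapp_web_1/myapp_db_1
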
[Docker links]: http://docs.docker.com/userguide/dockerlinks/
[Docker links]: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/

## Related Information

@ -4,89 +4,147 @@ title = "Networking in Compose"
|
||||
description = "How Compose sets up networking between containers"
|
||||
keywords = ["documentation, docs, docker, compose, orchestration, containers, networking"]
|
||||
[menu.main]
|
||||
parent="smn_workw_compose"
|
||||
weight=6
|
||||
parent="workw_compose"
|
||||
weight=21
|
||||
+++
|
||||
<![end-metadata]-->
|
||||
|
||||
|
||||
# Networking in Compose
|
||||
|
||||
> **Note:** Compose's networking support is experimental, and must be explicitly enabled with the `docker-compose --x-networking` flag.
|
||||
> **Note:** This document only applies if you're using [version 2 of the Compose file format](compose-file.md#versioning). Networking features are not supported for version 1 (legacy) Compose files.
|
||||
|
||||
Compose sets up a single default
|
||||
By default Compose sets up a single
|
||||
[network](/engine/reference/commandline/network_create.md) for your app. Each
|
||||
container for a service joins the default network and is both *reachable* by
|
||||
other containers on that network, and *discoverable* by them at a hostname
|
||||
identical to the container name.
|
||||
|
||||
> **Note:** Your app's network is given the same name as the "project name", which is based on the name of the directory it lives in. See the [Command line overview](reference/docker-compose.md) for how to override it.
|
||||
> **Note:** Your app's network is given a name based on the "project name",
|
||||
> which is based on the name of the directory it lives in. You can override the
|
||||
> project name with either the [`--project-name`
|
||||
> flag](reference/overview.md) or the [`COMPOSE_PROJECT_NAME` environment
|
||||
> variable](reference/envvars.md#compose-project-name).
|
||||
|
||||
For example, suppose your app is in a directory called `myapp`, and your `docker-compose.yml` looks like this:
|
||||
|
||||
web:
|
||||
build: .
|
||||
ports:
|
||||
- "8000:8000"
|
||||
db:
|
||||
image: postgres
|
||||
version: '2'
|
||||
|
||||
When you run `docker-compose --x-networking up`, the following happens:
|
||||
services:
|
||||
web:
|
||||
build: .
|
||||
ports:
|
||||
- "8000:8000"
|
||||
db:
|
||||
image: postgres
|
||||
|
||||
1. A network called `myapp` is created.
|
||||
2. A container is created using `web`'s configuration. It joins the network
|
||||
`myapp` under the name `myapp_web_1`.
|
||||
3. A container is created using `db`'s configuration. It joins the network
|
||||
`myapp` under the name `myapp_db_1`.
|
||||
When you run `docker-compose up`, the following happens:
|
||||
|
||||
Each container can now look up the hostname `myapp_web_1` or `myapp_db_1` and
|
||||
1. A network called `myapp_default` is created.
|
||||
2. A container is created using `web`'s configuration. It joins the network
|
||||
`myapp_default` under the name `web`.
|
||||
3. A container is created using `db`'s configuration. It joins the network
|
||||
`myapp_default` under the name `db`.
|
||||
|
||||
Each container can now look up the hostname `web` or `db` and
|
||||
get back the appropriate container's IP address. For example, `web`'s
|
||||
application code could connect to the URL `postgres://myapp_db_1:5432` and start
|
||||
application code could connect to the URL `postgres://db:5432` and start
|
||||
using the Postgres database.
|
||||
|
||||
Because `web` explicitly maps a port, it's also accessible from the outside world via port 8000 on your Docker host's network interface.
|
||||
|
||||
> **Note:** in the next release there will be additional aliases for the
|
||||
> container, including a short name without the project name and container
|
||||
> index. The full container name will remain as one of the alias for backwards
|
||||
> compatibility.
|
||||
|
||||
## Updating containers
|
||||
|
||||
If you make a configuration change to a service and run `docker-compose up` to update it, the old container will be removed and the new one will join the network under a different IP address but the same name. Running containers will be able to look up that name and connect to the new address, but the old address will stop working.
|
||||
|
||||
If any containers have connections open to the old container, they will be closed. It is a container's responsibility to detect this condition, look up the name again and reconnect.
|
||||
|
||||
## Configure how services are published
|
||||
|
||||
By default, containers for each service are published on the network with the
|
||||
container name. If you want to change the name, or stop containers from being
|
||||
discoverable at all, you can use the `container_name` option:
|
||||
|
||||
web:
|
||||
build: .
|
||||
container_name: "my-web-application"
|
||||
|
||||
## Links
|
||||
|
||||
Docker links are a one-way, single-host communication system. They should now be considered deprecated, and you should update your app to use networking instead. In the majority of cases, this will simply involve removing the `links` sections from your `docker-compose.yml`.
|
||||
Links allow you to define extra aliases by which a service is reachable from another service. They are not required to enable services to communicate - by default, any service can reach any other service at that service's name. In the following example, `db` is reachable from `web` at the hostnames `db` and `database`:
|
||||
|
||||
## Specifying the network driver
|
||||
version: '2'
|
||||
services:
|
||||
web:
|
||||
build: .
|
||||
links:
|
||||
- "db:database"
|
||||
db:
|
||||
image: postgres
|
||||
|
||||
By default, Compose uses the `bridge` driver when creating the app’s network. The Docker Engine provides one other driver out-of-the-box: `overlay`, which implements secure communication between containers on different hosts (see the next section for how to set up and use the `overlay` driver). Docker also allows you to install [custom network drivers](/engine/extend/plugins_network.md).
|
||||
|
||||
You can specify which one to use with the `--x-network-driver` flag:
|
||||
|
||||
$ docker-compose --x-networking --x-network-driver=overlay up
|
||||
See the [links reference](compose-file.md#links) for more information.
|
||||
|
||||
## Multi-host networking
|
||||
|
||||
(TODO: talk about Swarm and the overlay driver)
|
||||
When deploying a Compose application to a Swarm cluster, you can make use of the built-in `overlay` driver to enable multi-host communication between containers with no changes to application code. Consult the [Getting started with multi-host networking](/engine/userguide/networking/get-started-overlay.md) to see how to set up the overlay driver, and then specify `driver: overlay` in your networking config (see the sections below for how to do this).
|
||||
|
||||
## Custom container network modes
|
||||
## Specifying custom networks
|
||||
|
||||
Compose allows you to specify a custom network mode for a service with the `net` option - for example, `net: "host"` specifies that its containers should use the same network namespace as the Docker host, and `net: "none"` specifies that they should have no networking capabilities.
|
||||
Instead of just using the default app network, you can specify your own networks with the top-level `networks` key. This lets you create more complex topologies and specify [custom network drivers](/engine/extend/plugins_network.md) and options. You can also use it to connect services to externally-created networks which aren't managed by Compose.
|
||||
|
||||
If a service specifies the `net` option, its containers will *not* join the app’s network and will not be able to communicate with other services in the app.
|
Each service can specify what networks to connect to with the *service-level* `networks` key, which is a list of names referencing entries under the *top-level* `networks` key.

If *all* services in an app specify the `net` option, a network will not be created at all.

Here's an example Compose file defining two custom networks. The `proxy` service is isolated from the `db` service, because they do not share a network in common - only `app` can talk to both.

    version: '2'

    services:
      proxy:
        build: ./proxy
        networks:
          - front
      app:
        build: ./app
        networks:
          - front
          - back
      db:
        image: postgres
        networks:
          - back

    networks:
      front:
        # Use the overlay driver for multi-host communication
        driver: overlay
      back:
        # Use a custom driver which takes special options
        driver: my-custom-driver
        driver_opts:
          foo: "1"
          bar: "2"

For full details of the network configuration options available, see the following references:

- [Top-level `networks` key](compose-file.md#network-configuration-reference)
- [Service-level `networks` key](compose-file.md#networks)

## Configuring the default network

Instead of (or as well as) specifying your own networks, you can also change the settings of the app-wide default network by defining an entry under `networks` named `default`:

    version: '2'

    services:
      web:
        build: .
        ports:
          - "8000:8000"
      db:
        image: postgres

    networks:
      default:
        # Use the overlay driver for multi-host communication
        driver: overlay

## Using a pre-existing network

If you want your containers to join a pre-existing network, use the [`external` option](compose-file.md#network-configuration-reference):

    networks:
      default:
        external:
          name: my-pre-existing-network

Instead of attempting to create a network called `[projectname]_default`, Compose will look for a network called `my-pre-existing-network` and connect your app's containers to it.
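Putting the pieces together, a minimal file that joins an assumed pre-existing network might look like this (a sketch; `my-pre-existing-network` must already exist, for example created with `docker network create my-pre-existing-network`):

```yaml
version: '2'

services:
  web:
    build: .

networks:
  default:
    external:
      name: my-pre-existing-network
```

With a file like this, `docker-compose up` attaches the `web` container to the existing network instead of creating `[projectname]_default`.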
191
docs/overview.md
Normal file
@ -0,0 +1,191 @@
<!--[metadata]>
+++
title = "Overview of Docker Compose"
description = "Introduction and Overview of Compose"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="workw_compose"
weight=-99
+++
<![end-metadata]-->


# Overview of Docker Compose

Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services
from your configuration. To learn more about all the features of Compose,
see [the list of features](#features).

Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](#common-use-cases).

Using Compose is basically a three-step process.

1. Define your app's environment with a `Dockerfile` so it can be
   reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so
   they can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.

A `docker-compose.yml` looks like this:

    version: '2'
    services:
      web:
        build: .
        ports:
          - "5000:5000"
        volumes:
          - .:/code
          - logvolume01:/var/log
        links:
          - redis
      redis:
        image: redis
    volumes:
      logvolume01: {}

For more information about the Compose file, see the
[Compose file reference](compose-file.md).

Compose has commands for managing the whole lifecycle of your application:

* Start, stop and rebuild services
* View the status of running services
* Stream the log output of running services
* Run a one-off command on a service

## Compose documentation

- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Frequently asked questions](faq.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

## Features

The features of Compose that make it effective are:

* [Multiple isolated environments on a single host](#multiple-isolated-environments-on-a-single-host)
* [Preserve volume data when containers are created](#preserve-volume-data-when-containers-are-created)
* [Only recreate containers that have changed](#only-recreate-containers-that-have-changed)
* [Variables and moving a composition between environments](#variables-and-moving-a-composition-between-environments)

### Multiple isolated environments on a single host

Compose uses a project name to isolate environments from each other. You can use
this project name:

* on a dev host, to create multiple copies of a single environment (for
  example, when you want to run a stable copy for each feature branch of a project)
* on a CI server, to keep builds from interfering with each other, by setting
  the project name to a unique build number
* on a shared host or dev host, to prevent different projects, which may use the
  same service names, from interfering with each other

The default project name is the basename of the project directory. You can set
a custom project name by using the
[`-p` command line option](./reference/overview.md) or the
[`COMPOSE_PROJECT_NAME` environment variable](./reference/envvars.md#compose-project-name).
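The default derivation can be sketched in shell (the directory path below is a made-up example):

```shell
# The default project name is the basename of the project directory.
project_dir="/home/user/code/myapp"    # hypothetical path
basename "$project_dir"                # prints: myapp
```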
### Preserve volume data when containers are created

Compose preserves all volumes used by your services. When `docker-compose up`
runs, if it finds any containers from previous runs, it copies the volumes from
the old container to the new container. This process ensures that any data
you've created in volumes isn't lost.

### Only recreate containers that have changed

Compose caches the configuration used to create a container. When you
restart a service that has not changed, Compose re-uses the existing
containers. Re-using containers means that you can make changes to your
environment very quickly.

### Variables and moving a composition between environments

Compose supports variables in the Compose file. You can use these variables
to customize your composition for different environments, or different users.
See [Variable substitution](compose-file.md#variable-substitution) for more
details.
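A small sketch of what variable substitution looks like (the `POSTGRES_VERSION` variable name is illustrative; its value is taken from the shell environment when Compose runs):

```yaml
db:
  image: "postgres:${POSTGRES_VERSION}"
```

Running `POSTGRES_VERSION=9.3 docker-compose up` would then resolve the image to `postgres:9.3`.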
You can extend a Compose file using the `extends` field or by creating multiple
Compose files. See [extends](extends.md) for more details.

## Common Use Cases

Compose can be used in many different ways. Some common use cases are outlined
below.

### Development environments

When you're developing software, the ability to run an application in an
isolated environment and interact with it is crucial. The Compose command
line tool can be used to create the environment and interact with it.

The [Compose file](compose-file.md) provides a way to document and configure
all of the application's service dependencies (databases, queues, caches,
web service APIs, etc). Using the Compose command line tool you can create
and start one or more containers for each dependency with a single command
(`docker-compose up`).

Together, these features provide a convenient way for developers to get
started on a project. Compose can reduce a multi-page "developer getting
started guide" to a single machine-readable Compose file and a few commands.

### Automated testing environments

An important part of any Continuous Deployment or Continuous Integration process
is the automated test suite. Automated end-to-end testing requires an
environment in which to run tests. Compose provides a convenient way to create
and destroy isolated testing environments for your test suite. By defining the full
environment in a [Compose file](compose-file.md), you can create and destroy these
environments in just a few commands:

    $ docker-compose up -d
    $ ./run_tests
    $ docker-compose down

### Single host deployments

Compose has traditionally been focused on development and testing workflows,
but with each release we're making progress on more production-oriented features.
You can use Compose to deploy to a remote Docker Engine. The Docker Engine may
be a single instance provisioned with
[Docker Machine](https://docs.docker.com/machine/) or an entire
[Docker Swarm](https://docs.docker.com/swarm/) cluster.

For details on using production-oriented features, see
[Compose in production](production.md) in this documentation.

## Release Notes

To see a detailed list of changes for past and current releases of Docker
Compose, please refer to the
[CHANGELOG](https://github.com/docker/compose/blob/master/CHANGELOG.md).

## Getting help

Docker Compose is under active development. If you need help, would like to
contribute, or simply want to talk about the project with like-minded
individuals, we have a number of open channels for communication.

* To report bugs or file feature requests: please use the [issue tracker on GitHub](https://github.com/docker/compose/issues).

* To talk about the project with people in real time: please join the
  `#docker-compose` channel on freenode IRC.

* To contribute code or documentation changes: please submit a [pull request on GitHub](https://github.com/docker/compose/pulls).

For more information and resources, please visit the [Getting Help project page](https://docs.docker.com/opensource/get-help/).
@ -1,11 +1,11 @@
<!--[metadata]>
+++
title = "Using Compose in production"
title = "Using Compose in Production"
description = "Guide to using Docker Compose in production"
keywords = ["documentation, docs, docker, compose, orchestration, containers, production"]
[menu.main]
parent="smn_workw_compose"
weight=1
parent="workw_compose"
weight=22
+++
<![end-metadata]-->

@ -60,7 +60,7 @@ recreating any services which `web` depends on.
You can use Compose to deploy an app to a remote Docker host by setting the
`DOCKER_HOST`, `DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` environment variables
appropriately. For tasks like this,
[Docker Machine](https://docs.docker.com/machine) makes managing local and
[Docker Machine](https://docs.docker.com/machine/) makes managing local and
remote Docker hosts very easy, and is recommended even if you're not deploying
remotely.
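A sketch of setting those three variables by hand (the host address and certificate path are placeholders; `eval "$(docker-machine env my-vm)"` would normally set them for you):

```shell
# Point the Docker client and Compose at a remote daemon (placeholder values).
export DOCKER_HOST=tcp://192.168.99.100:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/my-vm"

echo "$DOCKER_HOST"    # prints: tcp://192.168.99.100:2376
```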
@ -69,7 +69,7 @@ commands will work with no further configuration.

### Running Compose on a Swarm cluster

[Docker Swarm](https://docs.docker.com/swarm), a Docker-native clustering
[Docker Swarm](https://docs.docker.com/swarm/), a Docker-native clustering
system, exposes the same API as a single Docker host, which means you can use
Compose against a Swarm instance and run your apps across multiple hosts.
@ -1,15 +1,15 @@
<!--[metadata]>
+++
title = "Quickstart Guide: Compose and Rails"
title = "Quickstart: Compose and Rails"
description = "Getting started with Docker Compose and Rails"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
parent="workw_compose"
weight=5
+++
<![end-metadata]-->

## Quickstart Guide: Compose and Rails
## Quickstart: Compose and Rails

This Quickstart guide will show you how to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md).

@ -30,7 +30,7 @@ Dockerfile consists of:
    RUN bundle install
    ADD . /myapp

That'll put your application code inside an image that will build a container with Ruby, Bundler and all your dependencies inside it. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
That'll put your application code inside an image that will build a container with Ruby, Bundler and all your dependencies inside it. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/engine/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).

Next, create a bootstrap `Gemfile` which just loads Rails. It'll be overwritten in a moment by `rails new`.

@ -128,12 +128,12 @@ Finally, you need to create the database. In another terminal, run:

    $ docker-compose run web rake db:create

That's it. Your app should now be running on port 3000 on your Docker daemon. If you're using [Docker Machine](https://docs.docker.com/machine), then `docker-machine ip MACHINE_VM` returns the Docker host IP address.
That's it. Your app should now be running on port 3000 on your Docker daemon. If you're using [Docker Machine](https://docs.docker.com/machine/), then `docker-machine ip MACHINE_VM` returns the Docker host IP address.

## More Compose documentation

- [User guide](/)
- [User guide](index.md)
- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
23
docs/reference/config.md
Normal file
@ -0,0 +1,23 @@
<!--[metadata]>
+++
title = "config"
description = "Config validates and views the compose file."
keywords = ["fig, composition, compose, docker, orchestration, cli, config"]
[menu.main]
identifier="config.compose"
parent = "smn_compose_cli"
+++
<![end-metadata]-->

# config

```
Usage: config [options]

Options:
    -q, --quiet     Only validate the configuration, don't print
                    anything.
    --services      Print the service names, one per line.
```

Validate and view the compose file.
25
docs/reference/create.md
Normal file
@ -0,0 +1,25 @@
<!--[metadata]>
+++
title = "create"
description = "Create creates containers for a service."
keywords = ["fig, composition, compose, docker, orchestration, cli, create"]
[menu.main]
identifier="create.compose"
parent = "smn_compose_cli"
+++
<![end-metadata]-->

# create

```
Usage: create [options] [SERVICE...]

Options:
    --force-recreate       Recreate containers even if their configuration and
                           image haven't changed. Incompatible with --no-recreate.
    --no-recreate          If containers already exist, don't recreate them.
                           Incompatible with --force-recreate.
    --no-build             Don't build an image, even if it's missing
```

Creates containers for a service.
@ -1,107 +0,0 @@
<!--[metadata]>
+++
title = "docker-compose"
description = "docker-compose Command Binary"
keywords = ["fig, composition, compose, docker, orchestration, cli, docker-compose"]
[menu.main]
parent = "smn_compose_cli"
weight=-2
+++
<![end-metadata]-->


# docker-compose Command

```
Usage:
  docker-compose [-f=<arg>...] [options] [COMMAND] [ARGS...]
  docker-compose -h|--help

Options:
  -f, --file FILE           Specify an alternate compose file (default: docker-compose.yml)
  -p, --project-name NAME   Specify an alternate project name (default: directory name)
  --verbose                 Show more output
  -v, --version             Print version and exit

Commands:
  build              Build or rebuild services
  help               Get help on a command
  kill               Kill containers
  logs               View output from containers
  pause              Pause services
  port               Print the public port for a port binding
  ps                 List containers
  pull               Pulls service images
  restart            Restart services
  rm                 Remove stopped containers
  run                Run a one-off command
  scale              Set number of containers for a service
  start              Start services
  stop               Stop services
  unpause            Unpause services
  up                 Create and start containers
  migrate-to-labels  Recreate containers to add labels
  version            Show the Docker-Compose version information
```

The Docker Compose binary. You use this command to build and manage multiple
services in Docker containers.

Use the `-f` flag to specify the location of a Compose configuration file. You
can supply multiple `-f` configuration files. When you supply multiple files,
Compose combines them into a single configuration. Compose builds the
configuration in the order you supply the files. Subsequent files override and
add to their predecessors.

For example, consider this command line:

```
$ docker-compose -f docker-compose.yml -f docker-compose.admin.yml run backup_db
```

The `docker-compose.yml` file might specify a `webapp` service.

```
webapp:
  image: examples/web
  ports:
    - "8000:8000"
  volumes:
    - "/data"
```

If the `docker-compose.admin.yml` also specifies this same service, any matching
fields will override the previous file. New values add to the `webapp` service
configuration.

```
webapp:
  build: .
  environment:
    - DEBUG=1
```

Use `-f` with `-` (dash) as the filename to read the configuration from
stdin. When stdin is used, all paths in the configuration are
relative to the current working directory.

The `-f` flag is optional. If you don't provide this flag on the command line,
Compose traverses the working directory and its subdirectories looking for a
`docker-compose.yml` and a `docker-compose.override.yml` file. You must
supply at least the `docker-compose.yml` file. If both files are present,
Compose combines the two files into a single configuration. The configuration
in the `docker-compose.override.yml` file is applied over and in addition to
the values in the `docker-compose.yml` file.

See also the `COMPOSE_FILE` [environment variable](overview.md#compose-file).

Each configuration has a project name. If you supply a `-p` flag, you can
specify a project name. If you don't specify the flag, Compose uses the current
directory name. See also the `COMPOSE_PROJECT_NAME` [environment variable](
overview.md#compose-project-name).


## Where to go next

* [CLI environment variables](overview.md)
* [Command line reference](index.md)
26
docs/reference/down.md
Normal file
@ -0,0 +1,26 @@
<!--[metadata]>
+++
title = "down"
description = "down"
keywords = ["fig, composition, compose, docker, orchestration, cli, down"]
[menu.main]
identifier="down.compose"
parent = "smn_compose_cli"
+++
<![end-metadata]-->

# down

```
Stop containers and remove containers, networks, volumes, and images
created by `up`. Only containers and networks are removed by default.

Usage: down [options]

Options:
    --rmi type      Remove images, type may be one of: 'all' to remove
                    all images, or 'local' to remove only images that
                    don't have a custom name set by the `image` field
    -v, --volumes   Remove data volumes
```
78
docs/reference/envvars.md
Normal file
@ -0,0 +1,78 @@
<!--[metadata]>
+++
title = "CLI Environment Variables"
description = "CLI Environment Variables"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
[menu.main]
parent = "smn_compose_cli"
weight=-1
+++
<![end-metadata]-->


# CLI Environment Variables

Several environment variables are available for you to configure the Docker Compose command-line behaviour.

Variables starting with `DOCKER_` are the same as those used to configure the
Docker command-line client. If you're using `docker-machine`, then the `eval "$(docker-machine env my-docker-vm)"` command should set them to their correct values. (In this example, `my-docker-vm` is the name of a machine you created.)

## COMPOSE\_PROJECT\_NAME

Sets the project name. This value is prepended, along with the service name, to the container's name on startup. For example, if your project name is `myapp` and it includes two services `db` and `web`, then Compose starts containers named `myapp_db_1` and `myapp_web_1` respectively.

Setting this is optional. If you do not set this, the `COMPOSE_PROJECT_NAME`
defaults to the `basename` of the project directory. See also the `-p`
[command-line option](overview.md).
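The naming scheme can be sketched in shell (a toy illustration using the example values above):

```shell
# Containers are named <project>_<service>_<index>.
project=myapp
for service in db web; do
  echo "${project}_${service}_1"
done
# prints:
#   myapp_db_1
#   myapp_web_1
```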
## COMPOSE\_FILE

Specify the file containing the compose configuration. If not provided,
Compose looks for a file named `docker-compose.yml` in the current directory
and then each parent directory in succession until a file by that name is
found. See also the `-f` [command-line option](overview.md).
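For example, pointing Compose at an alternate file could look like this (the filename is hypothetical):

```shell
# Use a non-default Compose file for every docker-compose invocation.
export COMPOSE_FILE=docker-compose.prod.yml
echo "$COMPOSE_FILE"    # prints: docker-compose.prod.yml
```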
## COMPOSE\_API\_VERSION

The Docker API only supports requests from clients which report a specific
version. If you receive a `client and server don't have same version` error using
`docker-compose`, you can work around this error by setting this environment
variable. Set the version value to match the server version.

Setting this variable is intended as a workaround for situations where you need
to run temporarily with a mismatch between the client and server version. For
example, if you can upgrade the client but need to wait to upgrade the server.

Running with this variable set and a known mismatch does prevent some Docker
features from working properly. The exact features that fail would depend on the
Docker client and server versions. For this reason, running with this variable
set is only intended as a workaround and it is not officially supported.

If you run into problems running with this set, resolve the mismatch through
upgrade and remove this setting to see if your problems resolve before notifying
support.

## DOCKER\_HOST

Sets the URL of the `docker` daemon. As with the Docker client, defaults to `unix:///var/run/docker.sock`.

## DOCKER\_TLS\_VERIFY

When set to anything other than an empty string, enables TLS communication with
the `docker` daemon.

## DOCKER\_CERT\_PATH

Configures the path to the `ca.pem`, `cert.pem`, and `key.pem` files used for TLS verification. Defaults to `~/.docker`.

## COMPOSE\_HTTP\_TIMEOUT

Configures the time (in seconds) a request to the Docker daemon is allowed to hang before Compose considers
it failed. Defaults to 60 seconds.


## Related Information

- [User guide](../index.md)
- [Installing Compose](../install.md)
- [Compose file reference](../compose-file.md)
34
docs/reference/events.md
Normal file
@ -0,0 +1,34 @@
<!--[metadata]>
+++
title = "events"
description = "Receive real time events from containers."
keywords = ["fig, composition, compose, docker, orchestration, cli, events"]
[menu.main]
identifier="events.compose"
parent = "smn_compose_cli"
+++
<![end-metadata]-->

# events

```
Usage: events [options] [SERVICE...]

Options:
    --json      Output events as a stream of json objects
```

Stream container events for every container in the project.

With the `--json` flag, a JSON object is printed one per line in the
format:

```
{
    "service": "web",
    "event": "create",
    "container": "213cf75fc39a",
    "image": "alpine:edge",
    "time": "2015-11-20T18:01:03.615550"
}
```
@ -1,34 +1,42 @@
<!--[metadata]>
+++
title = "Compose CLI reference"
title = "Command-line Reference"
description = "Compose CLI reference"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
[menu.main]
identifier = "smn_compose_cli"
parent = "smn_compose_ref"
parent = "workw_compose"
weight=80
+++
<![end-metadata]-->

## Compose CLI reference
## Compose command-line reference

The following pages describe the usage information for the [docker-compose](docker-compose.md) subcommands. You can also see this information by running `docker-compose [SUBCOMMAND] --help` from the command line.
The following pages describe the usage information for the [docker-compose](overview.md) subcommands. You can also see this information by running `docker-compose [SUBCOMMAND] --help` from the command line.

* [docker-compose](overview.md)
* [build](build.md)
* [config](config.md)
* [create](create.md)
* [down](down.md)
* [events](events.md)
* [help](help.md)
* [kill](kill.md)
* [ps](ps.md)
* [restart](restart.md)
* [run](run.md)
* [start](start.md)
* [up](up.md)
* [logs](logs.md)
* [pause](pause.md)
* [port](port.md)
* [ps](ps.md)
* [pull](pull.md)
* [restart](restart.md)
* [rm](rm.md)
* [run](run.md)
* [scale](scale.md)
* [start](start.md)
* [stop](stop.md)
* [unpause](unpause.md)
* [up](up.md)

## Where to go next

* [CLI environment variables](overview.md)
* [docker-compose Command](docker-compose.md)
* [CLI environment variables](envvars.md)
* [docker-compose Command](overview.md)
@ -1,8 +1,9 @@
<!--[metadata]>
+++
title = "Introduction to the CLI"
description = "Introduction to the CLI"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
title = "Overview of docker-compose CLI"
description = "Overview of docker-compose CLI"
keywords = ["fig, composition, compose, docker, orchestration, cli, docker-compose"]
aliases = ["/compose/reference/docker-compose/"]
[menu.main]
parent = "smn_compose_cli"
weight=-2
@ -10,80 +11,107 @@ weight=-2
<![end-metadata]-->


# Introduction to the CLI
# Overview of docker-compose CLI

This section describes the subcommands you can use with the `docker-compose` command. You can run a subcommand against one or more services. To run against a specific service, you supply the service name from your Compose configuration. If you do not specify the service name, the command runs against all the services in your configuration.
This page provides the usage information for the `docker-compose` command.
You can also see this information by running `docker-compose --help` from the
command line.

```
Define and run multi-container applications with Docker.

Usage:
  docker-compose [-f=<arg>...] [options] [COMMAND] [ARGS...]
  docker-compose -h|--help

Options:
  -f, --file FILE           Specify an alternate compose file (default: docker-compose.yml)
  -p, --project-name NAME   Specify an alternate project name (default: directory name)
  --verbose                 Show more output
  -v, --version             Print version and exit

Commands:
  build              Build or rebuild services
  config             Validate and view the compose file
  create             Create services
  down               Stop and remove containers, networks, images, and volumes
  events             Receive real time events from containers
  help               Get help on a command
  kill               Kill containers
  logs               View output from containers
  pause              Pause services
  port               Print the public port for a port binding
  ps                 List containers
  pull               Pulls service images
  restart            Restart services
  rm                 Remove stopped containers
  run                Run a one-off command
  scale              Set number of containers for a service
  start              Start services
  stop               Stop services
  unpause            Unpause services
  up                 Create and start containers
  version            Show the Docker-Compose version information

```

The Docker Compose binary. You use this command to build and manage multiple
services in Docker containers.

Use the `-f` flag to specify the location of a Compose configuration file. You
can supply multiple `-f` configuration files. When you supply multiple files,
Compose combines them into a single configuration. Compose builds the
configuration in the order you supply the files. Subsequent files override and
add to their predecessors.

For example, consider this command line:

```
$ docker-compose -f docker-compose.yml -f docker-compose.admin.yml run backup_db
```

The `docker-compose.yml` file might specify a `webapp` service.

```
webapp:
  image: examples/web
  ports:
    - "8000:8000"
  volumes:
    - "/data"
```

If the `docker-compose.admin.yml` also specifies this same service, any matching
fields will override the previous file. New values add to the `webapp` service
configuration.

```
webapp:
  build: .
  environment:
    - DEBUG=1
```
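Per the merge rules above, the effective `webapp` configuration would be roughly the following (a sketch, not literal tool output; note that Compose rejects a service that declares both `image` and `build`, so a real admin file would omit one of them):

```yaml
webapp:
  image: examples/web   # from docker-compose.yml
  build: .              # added by docker-compose.admin.yml
  environment:          # added by docker-compose.admin.yml
    - DEBUG=1
  ports:
    - "8000:8000"
  volumes:
    - "/data"
```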
||||
Use a `-f` with `-` (dash) as the filename to read the configuration from
|
||||
stdin. When stdin is used all paths in the configuration are
|
||||
relative to the current working directory.
|
||||
|
||||
The `-f` flag is optional. If you don't provide this flag on the command line,
|
||||
Compose traverses the working directory and its parent directories looking for a
|
||||
`docker-compose.yml` and a `docker-compose.override.yml` file. You must
|
||||
supply at least the `docker-compose.yml` file. If both files are present on the
|
||||
same directory level, Compose combines the two files into a single configuration.
|
||||
The configuration in the `docker-compose.override.yml` file is applied over and
|
||||
in addition to the values in the `docker-compose.yml` file.
|
||||
|
||||
See also the `COMPOSE_FILE` [environment variable](envvars.md#compose-file).
|
||||
|
||||
Each configuration has a project name. If you supply a `-p` flag, you can
|
||||
specify a project name. If you don't specify the flag, Compose uses the current
|
||||
directory name. See also the `COMPOSE_PROJECT_NAME` [environment variable](
|
||||
envvars.md#compose-project-name)
|
||||
|
||||
|
||||
## Commands
|
||||
## Where to go next
|
||||
|
||||
* [docker-compose Command](docker-compose.md)
|
||||
* [CLI Reference](index.md)
|
||||
|
||||
|
||||
## Environment Variables
|
||||
|
||||
Several environment variables are available for you to configure the Docker Compose command-line behaviour.
|
||||
|
||||
Variables starting with `DOCKER_` are the same as those used to configure the
|
||||
Docker command-line client. If you're using `docker-machine`, then the `eval "$(docker-machine env my-docker-vm)"` command should set them to their correct values. (In this example, `my-docker-vm` is the name of a machine you created.)
|
||||
|
||||

### COMPOSE\_PROJECT\_NAME

Sets the project name. This value is prepended, along with the service name, to the container's name on startup. For example, if your project name is `myapp` and it includes two services `db` and `web`, then Compose starts containers named `myapp_db_1` and `myapp_web_1` respectively.
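
For example, assuming a Compose file that defines `web` and `db` services, overriding the project name changes the container names accordingly; `docker ps` would then show something like:

```
$ COMPOSE_PROJECT_NAME=myapp docker-compose up -d
$ docker ps --format '{{.Names}}'
myapp_web_1
myapp_db_1
```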

Setting this is optional. If you do not set this, the `COMPOSE_PROJECT_NAME`
defaults to the `basename` of the project directory. See also the `-p`
[command-line option](docker-compose.md).

### COMPOSE\_FILE

Specifies the file containing the Compose configuration. If not provided,
Compose looks for a file named `docker-compose.yml` in the current directory
and then each parent directory in succession until a file by that name is
found. See also the `-f` [command-line option](docker-compose.md).
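
For example (the filename is illustrative):

```
$ export COMPOSE_FILE=docker-compose.prod.yml
$ docker-compose up -d    # uses docker-compose.prod.yml instead of docker-compose.yml
```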

### COMPOSE\_API\_VERSION

The Docker API only supports requests from clients which report a specific
version. If you receive a `client and server don't have same version` error using
`docker-compose`, you can work around this error by setting this environment
variable. Set the version value to match the server version.

Setting this variable is intended as a workaround for situations where you need
to run temporarily with a mismatch between the client and server version. For
example, if you can upgrade the client but need to wait to upgrade the server.

Running with this variable set and a known mismatch does prevent some Docker
features from working properly. The exact features that fail depend on the
Docker client and server versions. For this reason, running with this variable
set is only intended as a workaround and is not officially supported.

If you run into problems running with this set, resolve the mismatch through
an upgrade and remove this setting to see if your problems resolve before
notifying support.

### DOCKER\_HOST

Sets the URL of the `docker` daemon. As with the Docker client, defaults to `unix:///var/run/docker.sock`.

### DOCKER\_TLS\_VERIFY

When set to anything other than an empty string, enables TLS communication with
the `docker` daemon.

### DOCKER\_CERT\_PATH

Configures the path to the `ca.pem`, `cert.pem`, and `key.pem` files used for TLS verification. Defaults to `~/.docker`.

### COMPOSE\_HTTP\_TIMEOUT

Configures the time (in seconds) a request to the Docker daemon is allowed to
hang before Compose considers it failed. Defaults to 60 seconds.
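
Taken together, a typical setup for talking to a remote, TLS-protected daemon might look like this (the address and certificate path are illustrative; `eval "$(docker-machine env my-docker-vm)"` would set the `DOCKER_*` values for you):

```
$ export DOCKER_HOST=tcp://192.168.99.100:2376
$ export DOCKER_TLS_VERIFY=1
$ export DOCKER_CERT_PATH=~/.docker/machine/machines/default
$ export COMPOSE_HTTP_TIMEOUT=120
$ docker-compose ps
```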

## Related Information

- [User guide](../index.md)
- [Installing Compose](../install.md)
- [Compose file reference](../compose-file.md)
* [CLI environment variables](envvars.md)

@@ -20,3 +20,8 @@ Options:
```

Removes stopped service containers.

By default, volumes attached to containers will not be removed. You can see all
volumes with `docker volume ls`.

Any data which is not in a volume will be lost.
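
For example, the cleanup flow described above might look like this (service names depend on your project):

```
$ docker-compose stop
$ docker-compose rm     # prompts before removing stopped service containers
$ docker volume ls      # volumes attached to the containers are still listed
```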

@@ -17,6 +17,7 @@ Usage: run [options] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]

Options:
    -d                Detached mode: Run container in the background, print
                      new container name.
    --name NAME       Assign a name to the container
    --entrypoint CMD  Override the entrypoint of the image.
    -e KEY=VAL        Set an environment variable (can be used multiple times)
    -u, --user=""     Run as specified username or uid

@@ -9,7 +9,7 @@ parent = "smn_compose_cli"
+++
<![end-metadata]-->

# pause
# unpause

```
Usage: unpause [SERVICE...]

@@ -15,18 +15,22 @@ parent = "smn_compose_cli"
Usage: up [options] [SERVICE...]

Options:
  -d                     Detached mode: Run containers in the background,
                         print new container names.
  --no-color             Produce monochrome output.
  --no-deps              Don't start linked services.
  --force-recreate       Recreate containers even if their configuration and
                         image haven't changed. Incompatible with --no-recreate.
  --no-recreate          If containers already exist, don't recreate them.
                         Incompatible with --force-recreate.
  --no-build             Don't build an image, even if it's missing
  -t, --timeout TIMEOUT  Use this timeout in seconds for container shutdown
                         when attached or when containers are already
                         running. (default: 10)
  -d                         Detached mode: Run containers in the background,
                             print new container names.
                             Incompatible with --abort-on-container-exit.
  --no-color                 Produce monochrome output.
  --no-deps                  Don't start linked services.
  --force-recreate           Recreate containers even if their configuration
                             and image haven't changed.
                             Incompatible with --no-recreate.
  --no-recreate              If containers already exist, don't recreate them.
                             Incompatible with --force-recreate.
  --no-build                 Don't build an image, even if it's missing
  --abort-on-container-exit  Stops all containers if any container was stopped.
                             Incompatible with -d.
  -t, --timeout TIMEOUT      Use this timeout in seconds for container shutdown
                             when attached or when containers are already
                             running. (default: 10)
```

Builds, (re)creates, starts, and attaches to containers for a service.

@@ -1,16 +1,16 @@
<!--[metadata]>
+++
title = "Quickstart Guide: Compose and WordPress"
title = "Quickstart: Compose and WordPress"
description = "Getting started with Compose and WordPress"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
parent="workw_compose"
weight=6
+++
<![end-metadata]-->

# Quickstart Guide: Compose and WordPress
# Quickstart: Compose and WordPress

You can use Compose to easily run WordPress in an isolated environment built
with Docker containers.

@@ -28,9 +28,9 @@ to the name of your project.
Next, inside that directory, create a `Dockerfile`, a file that defines what
environment your app is going to run in. For more information on how to write
Dockerfiles, see the
[Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the
[Dockerfile reference](http://docs.docker.com/reference/builder/). In this case,
your Dockerfile should be:
[Docker user guide](https://docs.docker.com/engine/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the
[Dockerfile reference](https://docs.docker.com/engine/reference/builder/). In
this case, your Dockerfile should be:

    FROM orchardup/php5
    ADD . /code

@@ -89,11 +89,11 @@ configuration at the `db` container:

With those four files in place, run `docker-compose up` inside your WordPress
directory and it'll pull and build the needed images, and then start the web and
database containers. If you're using [Docker Machine](https://docs.docker.com/machine), then `docker-machine ip MACHINE_VM` gives you the machine address and you can open `http://MACHINE_VM_IP:8000` in a browser.
database containers. If you're using [Docker Machine](https://docs.docker.com/machine/), then `docker-machine ip MACHINE_VM` gives you the machine address and you can open `http://MACHINE_VM_IP:8000` in a browser.

## More Compose documentation

- [User guide](/)
- [User guide](index.md)
- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)

@@ -15,9 +15,9 @@ Before you start, you’ll need to install the experimental build of Docker, and
        $ curl -L https://experimental.docker.com/builds/Darwin/x86_64/docker-latest > /usr/local/bin/docker
        $ chmod +x /usr/local/bin/docker

- To install Machine, follow the instructions [here](http://docs.docker.com/machine/).
- To install Machine, follow the instructions [here](https://docs.docker.com/machine/install-machine/).

- To install Compose, follow the instructions [here](http://docs.docker.com/compose/install/).
- To install Compose, follow the instructions [here](https://docs.docker.com/compose/install/).

You’ll also need a [Docker Hub](https://hub.docker.com/account/signup/) account and a [Digital Ocean](https://www.digitalocean.com/) account.

@@ -1,6 +1,7 @@
PyYAML==3.11
docker-py==1.5.0
dockerpty==0.3.4
cached-property==1.2.0
docker-py==1.7.0
dockerpty==0.4.1
docopt==0.6.1
enum34==1.0.4
jsonschema==2.5.1

@@ -41,6 +41,9 @@ Get-ChildItem -Recurse -Include *.pyc | foreach ($_) { Remove-Item $_.FullName }
# Create virtualenv
virtualenv .\venv

# pip and pyinstaller generate lots of warnings, so we need to ignore them
$ErrorActionPreference = "Continue"

# Install dependencies
.\venv\Scripts\pip install pypiwin32==219
.\venv\Scripts\pip install -r requirements.txt
@@ -50,8 +53,6 @@ virtualenv .\venv
git rev-parse --short HEAD | out-file -encoding ASCII compose\GITSHA

# Build binary
# pyinstaller has lots of warnings, so we need to run with ErrorAction = Continue
$ErrorActionPreference = "Continue"
.\venv\Scripts\pyinstaller .\docker-compose.spec
$ErrorActionPreference = "Stop"

@@ -2,5 +2,6 @@
set -e

find . -type f -name '*.pyc' -delete
find -name __pycache__ -delete
find . -name .coverage.* -delete
find . -name __pycache__ -delete
rm -rf docs/_site build dist docker-compose.egg-info

@@ -18,10 +18,13 @@ PREV_RELEASE=$1
VERSION=HEAD
URL="https://api.github.com/repos/docker/compose/compare"

curl -sf "$URL/$PREV_RELEASE...$VERSION" | \
contribs=$(curl -sf "$URL/$PREV_RELEASE...$VERSION" | \
    jq -r '.commits[].author.login' | \
    sort | \
    uniq -c | \
    sort -nr | \
    awk '{print "@"$2","}' | \
    xargs echo
    sort -nr)

echo "Contributions by user: "
echo "$contribs"
echo
echo "$contribs" | awk '{print "@"$2","}' | xargs

@@ -46,7 +46,7 @@ if [ -z "$REMOTE" ]; then
fi

# handle the difference between a branch and a tag
if [ -z "$(git name-rev $BASE_VERSION | grep tags)" ]; then
if [ -z "$(git name-rev --tags $BASE_VERSION | grep tags)" ]; then
    BASE_VERSION=$REMOTE/$BASE_VERSION
fi

@@ -63,15 +63,17 @@ git merge --strategy=ours --no-edit $REMOTE/release
git config "branch.${BRANCH}.release" $VERSION


editor=${EDITOR:-vim}

echo "Update versions in docs/install.md, compose/__init__.py, script/run.sh"
$EDITOR docs/install.md
$EDITOR compose/__init__.py
$EDITOR script/run.sh
$editor docs/install.md
$editor compose/__init__.py
$editor script/run.sh


echo "Write release notes in CHANGELOG.md"
browser "https://github.com/docker/compose/issues?q=milestone%3A$VERSION+is%3Aclosed"
$EDITOR CHANGELOG.md
$editor CHANGELOG.md


git diff
@@ -84,10 +86,10 @@ echo "Push branch to user remote"
GITHUB_USER=$USER
USER_REMOTE="$(find_remote $GITHUB_USER/compose)"
if [ -z "$USER_REMOTE" ]; then
    echo "No user remote found for $GITHUB_USER"
    read -r -p "Enter the name of your github user: " GITHUB_USER
    echo "$GITHUB_USER/compose not found"
    read -r -p "Enter the name of your GitHub fork (username/repo): " GITHUB_REPO
    # assumes there is already a user remote somewhere
    USER_REMOTE=$(find_remote $GITHUB_USER/compose)
    USER_REMOTE=$(find_remote $GITHUB_REPO)
fi
if [ -z "$USER_REMOTE" ]; then
    >&2 echo "No user remote found. You need to 'git push' your branch."

@@ -60,7 +60,7 @@ sed -i -e 's/logo.png?raw=true/https:\/\/github.com\/docker\/compose\/raw\/maste
./script/write-git-sha
python setup.py sdist
if [ "$(command -v twine 2> /dev/null)" ]; then
    twine upload ./dist/docker-compose-${VERSION}.tar.gz
    twine upload ./dist/docker-compose-${VERSION/-/}.tar.gz
else
    python setup.py upload
fi

@@ -22,7 +22,7 @@ VERSION="$(git config "branch.${BRANCH}.release")" || usage


COMMIT_MSG="Bump $VERSION"
sha="$(git log --grep "$COMMIT_MSG" --format="%H")"
sha="$(git log --grep "$COMMIT_MSG\$" --format="%H")"
if [ -z "$sha" ]; then
    >&2 echo "No commit with message \"$COMMIT_MSG\""
    exit 2
@@ -32,7 +32,7 @@ if [[ "$sha" == "$(git rev-parse HEAD)" ]]; then
    exit 0
fi

commits=$(git log --format="%H" "$sha..HEAD" | wc -l)
commits=$(git log --format="%H" "$sha..HEAD" | wc -l | xargs echo)

git rebase --onto $sha~1 HEAD~$commits $BRANCH
git cherry-pick $sha

22
script/run.ps1
Normal file
@@ -0,0 +1,22 @@
# Run docker-compose in a container via boot2docker.
#
# The current directory will be mirrored as a volume and additional
# volumes (or any other options) can be mounted by using
# $Env:DOCKER_COMPOSE_OPTIONS.

if ($Env:DOCKER_COMPOSE_VERSION -eq $null -or $Env:DOCKER_COMPOSE_VERSION.Length -eq 0) {
    $Env:DOCKER_COMPOSE_VERSION = "1.6.0rc1"
}

if ($Env:DOCKER_COMPOSE_OPTIONS -eq $null) {
    $Env:DOCKER_COMPOSE_OPTIONS = ""
}

if (-not $Env:DOCKER_HOST) {
    docker-machine env --shell=powershell default | Invoke-Expression
    if (-not $?) { exit $LastExitCode }
}

$local="/$($PWD -replace '^(.):(.*)$', '"$1".ToLower()+"$2".Replace("\","/")' | Invoke-Expression)"
docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -v "${local}:$local" -w "$local" $Env:DOCKER_COMPOSE_OPTIONS "docker/compose:$Env:DOCKER_COMPOSE_VERSION" $args
exit $LastExitCode

@@ -15,7 +15,7 @@

set -e

VERSION="1.5.2"
VERSION="1.6.0"
IMAGE="docker/compose:$VERSION"

@@ -40,8 +40,14 @@ if [ -n "$compose_dir" ]; then
    VOLUMES="$VOLUMES -v $compose_dir:$compose_dir"
fi
if [ -n "$HOME" ]; then
    VOLUMES="$VOLUMES -v $HOME:$HOME"
    VOLUMES="$VOLUMES -v $HOME:$HOME -v $HOME:/root" # mount $HOME in /root to share docker.config
fi

# Only allocate tty if we detect one
if [ -t 1 ]; then
    DOCKER_RUN_OPTIONS="-ti"
else
    DOCKER_RUN_OPTIONS="-i"
fi

exec docker run --rm -ti $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w $(pwd) $IMAGE $@
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w $(pwd) $IMAGE $@

@@ -18,7 +18,7 @@ get_versions="docker run --rm
if [ "$DOCKER_VERSIONS" == "" ]; then
    DOCKER_VERSIONS="$($get_versions default)"
elif [ "$DOCKER_VERSIONS" == "all" ]; then
    DOCKER_VERSIONS="$($get_versions recent -n 2)"
    DOCKER_VERSIONS=$($get_versions -n 2 recent)
fi

@@ -38,12 +38,14 @@ for version in $DOCKER_VERSIONS; do

  trap "on_exit" EXIT

  repo="dockerswarm/dind"

  docker run \
    -d \
    --name "$daemon_container" \
    --privileged \
    --volume="/var/lib/docker" \
    dockerswarm/dind:$version \
    "$repo:$version" \
    docker daemon -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
    2>&1 | tail -n 10

@@ -51,6 +53,7 @@ for version in $DOCKER_VERSIONS; do
    --rm \
    --link="$daemon_container:docker" \
    --env="DOCKER_HOST=tcp://docker:2375" \
    --env="DOCKER_VERSION=$version" \
    --entrypoint="tox" \
    "$TAG" \
    -e py27,py34 -- "$@"

@@ -4,8 +4,8 @@ set -ex

if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
    script/build-linux
    script/build-image master
    # TODO: requires auth
    # TODO: requires auth to push, so disable for now
    # script/build-image master
    # docker push docker/compose:master
else
    script/prepare-osx

@@ -1,5 +1,7 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import datetime
import os.path

@@ -21,7 +21,9 @@ For example, if the list of versions is:
`default` would return `1.7.1` and
`recent -n 3` would return `1.8.0-rc2 1.7.1 1.6.2`
"""
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import argparse
import itertools

15
setup.py
@@ -28,13 +28,14 @@ def find_version(*file_paths):


install_requires = [
    'cached-property >= 1.2.0, < 2',
    'docopt >= 0.6.1, < 0.7',
    'PyYAML >= 3.10, < 4',
    'requests >= 2.6.1, < 2.8',
    'texttable >= 0.8.1, < 0.9',
    'websocket-client >= 0.32.0, < 1.0',
    'docker-py >= 1.5.0, < 2',
    'dockerpty >= 0.3.4, < 0.4',
    'docker-py >= 1.7.0, < 2',
    'dockerpty >= 0.4.1, < 0.5',
    'six >= 1.3.0, < 2',
    'jsonschema >= 2.5.1, < 3',
]
@@ -66,4 +67,14 @@ setup(
    [console_scripts]
    docker-compose=compose.cli.main:main
    """,
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'Environment :: Console',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: Apache Software License',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.4',
    ],
)

@@ -1,3 +1,6 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import sys

if sys.version_info >= (2, 7):

@@ -1,5 +1,8 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import datetime
import json
import os
import shlex
import signal
@@ -8,14 +11,16 @@ import time
from collections import namedtuple
from operator import attrgetter

import yaml
from docker import errors

from .. import mock
from compose.cli.command import get_project
from compose.cli.docker_client import docker_client
from compose.container import Container
from tests.integration.testcases import DockerClientTestCase
from tests.integration.testcases import get_links
from tests.integration.testcases import pull_busybox
from tests.integration.testcases import v2_only


ProcessResult = namedtuple('ProcessResult', 'stdout stderr')
@@ -38,12 +43,13 @@ def start_process(base_dir, options):
def wait_on_process(proc, returncode=0):
    stdout, stderr = proc.communicate()
    if proc.returncode != returncode:
        print(stderr.decode('utf-8'))
        print("Stderr: {}".format(stderr))
        print("Stdout: {}".format(stdout))
        assert proc.returncode == returncode
    return ProcessResult(stdout.decode('utf-8'), stderr.decode('utf-8'))


def wait_on_condition(condition, delay=0.1, timeout=5):
def wait_on_condition(condition, delay=0.1, timeout=40):
    start_time = time.time()
    while not condition():
        if time.time() - start_time > timeout:
@@ -51,6 +57,11 @@ def wait_on_condition(condition, delay=0.1, timeout=5):
        time.sleep(delay)


def kill_service(service):
    for container in service.containers():
        container.kill()


class ContainerCountCondition(object):

    def __init__(self, project, expected):
@@ -71,7 +82,6 @@ class ContainerStateCondition(object):
        self.name = name
        self.running = running

    # State.Running == true
    def __call__(self):
        try:
            container = self.client.inspect_container(self.name)
@@ -80,7 +90,8 @@ class ContainerStateCondition(object):
            return False

    def __str__(self):
        return "waiting for container to have state %s" % self.expected
        state = 'running' if self.running else 'stopped'
        return "waiting for container to be %s" % state


class CLITestCase(DockerClientTestCase):
@@ -90,10 +101,18 @@ class CLITestCase(DockerClientTestCase):
        self.base_dir = 'tests/fixtures/simple-composefile'

    def tearDown(self):
        self.project.kill()
        self.project.remove_stopped()
        for container in self.project.containers(stopped=True, one_off=True):
            container.remove(force=True)
        if self.base_dir:
            self.project.kill()
            self.project.remove_stopped()

            for container in self.project.containers(stopped=True, one_off=True):
                container.remove(force=True)

            networks = self.client.networks()
            for n in networks:
                if n['Name'].startswith('{}_'.format(self.project.name)):
                    self.client.remove_network(n['Name'])

        super(CLITestCase, self).tearDown()

    @property
@@ -108,14 +127,75 @@ class CLITestCase(DockerClientTestCase):
        proc = start_process(self.base_dir, project_options + options)
        return wait_on_process(proc, returncode=returncode)

    def execute(self, container, cmd):
        # Remove once Hijack and CloseNotifier sign a peace treaty
        self.client.close()
        exc = self.client.exec_create(container.id, cmd)
        self.client.exec_start(exc)
        return self.client.exec_inspect(exc)['ExitCode']

    def lookup(self, container, hostname):
        return self.execute(container, ["nslookup", hostname]) == 0

    def test_help(self):
        old_base_dir = self.base_dir
        self.base_dir = 'tests/fixtures/no-composefile'
        result = self.dispatch(['help', 'up'], returncode=1)
        assert 'Usage: up [options] [SERVICE...]' in result.stderr
        # self.project.kill() fails during teardown
        # unless there is a composefile.
        self.base_dir = old_base_dir
        # Prevent tearDown from trying to create a project
        self.base_dir = None

    # TODO: this shouldn't be v2-dependent
    @v2_only()
    def test_config_list_services(self):
        self.base_dir = 'tests/fixtures/v2-full'
        result = self.dispatch(['config', '--services'])
        assert set(result.stdout.rstrip().split('\n')) == {'web', 'other'}

    # TODO: this shouldn't be v2-dependent
    @v2_only()
    def test_config_quiet_with_error(self):
        self.base_dir = None
        result = self.dispatch([
            '-f', 'tests/fixtures/invalid-composefile/invalid.yml',
            'config', '-q'
        ], returncode=1)
        assert "'notaservice' doesn't have any configuration" in result.stderr

    # TODO: this shouldn't be v2-dependent
    @v2_only()
    def test_config_quiet(self):
        self.base_dir = 'tests/fixtures/v2-full'
        assert self.dispatch(['config', '-q']).stdout == ''

    # TODO: this shouldn't be v2-dependent
    @v2_only()
    def test_config_default(self):
        self.base_dir = 'tests/fixtures/v2-full'
        result = self.dispatch(['config'])
        # assert there are no python objects encoded in the output
        assert '!!' not in result.stdout

        output = yaml.load(result.stdout)
        expected = {
            'version': '2.0',
            'volumes': {'data': {'driver': 'local'}},
            'networks': {'front': {}},
            'services': {
                'web': {
                    'build': {
                        'context': os.path.abspath(self.base_dir),
                    },
                    'networks': ['front', 'default'],
                    'volumes_from': ['service:other:rw'],
                },
                'other': {
                    'image': 'busybox:latest',
                    'command': 'top',
                    'volumes': ['/data:rw'],
                },
            },
        }
        assert output == expected

    def test_ps(self):
        self.project.get_service('simple').create_container()
@@ -166,7 +246,8 @@ class CLITestCase(DockerClientTestCase):

        assert 'Pulling simple (busybox:latest)...' in result.stderr
        assert 'Pulling another (nonexisting-image:latest)...' in result.stderr
        assert 'Error: image library/nonexisting-image:latest not found' in result.stderr
        assert 'Error: image library/nonexisting-image' in result.stderr
        assert 'not found' in result.stderr

    def test_build_plain(self):
        self.base_dir = 'tests/fixtures/simple-dockerfile'
@@ -231,6 +312,73 @@ class CLITestCase(DockerClientTestCase):
        ]
        assert not containers

    def test_create(self):
        self.dispatch(['create'])
        service = self.project.get_service('simple')
        another = self.project.get_service('another')
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(another.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.assertEqual(len(another.containers(stopped=True)), 1)

    def test_create_with_force_recreate(self):
        self.dispatch(['create'], None)
        service = self.project.get_service('simple')
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)

        old_ids = [c.id for c in service.containers(stopped=True)]

        self.dispatch(['create', '--force-recreate'], None)
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)

        new_ids = [c.id for c in service.containers(stopped=True)]

        self.assertNotEqual(old_ids, new_ids)

    def test_create_with_no_recreate(self):
        self.dispatch(['create'], None)
        service = self.project.get_service('simple')
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)

        old_ids = [c.id for c in service.containers(stopped=True)]

        self.dispatch(['create', '--no-recreate'], None)
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)

        new_ids = [c.id for c in service.containers(stopped=True)]

        self.assertEqual(old_ids, new_ids)

    def test_create_with_force_recreate_and_no_recreate(self):
        self.dispatch(
            ['create', '--force-recreate', '--no-recreate'],
            returncode=1)

    def test_down_invalid_rmi_flag(self):
        result = self.dispatch(['down', '--rmi', 'bogus'], returncode=1)
        assert '--rmi flag must be' in result.stderr

    @v2_only()
    def test_down(self):
        self.base_dir = 'tests/fixtures/v2-full'
        self.dispatch(['up', '-d'])
        wait_on_condition(ContainerCountCondition(self.project, 2))

        result = self.dispatch(['down', '--rmi=local', '--volumes'])
        assert 'Stopping v2full_web_1' in result.stderr
        assert 'Stopping v2full_other_1' in result.stderr
        assert 'Removing v2full_web_1' in result.stderr
        assert 'Removing v2full_other_1' in result.stderr
        assert 'Removing volume v2full_data' in result.stderr
        assert 'Removing image v2full_web' in result.stderr
        assert 'Removing image busybox' not in result.stderr
        assert 'Removing network v2full_default' in result.stderr
        assert 'Removing network v2full_front' in result.stderr

    def test_up_detached(self):
        self.dispatch(['up', '-d'])
        service = self.project.get_service('simple')
@ -251,60 +399,244 @@ class CLITestCase(DockerClientTestCase):
|
||||
assert 'simple_1 | simple' in result.stdout
|
||||
assert 'another_1 | another' in result.stdout
|
||||
|
||||
def test_up_without_networking(self):
|
||||
self.require_api_version('1.21')
|
||||
|
||||
self.base_dir = 'tests/fixtures/links-composefile'
|
||||
@v2_only()
|
||||
def test_up(self):
|
||||
self.base_dir = 'tests/fixtures/v2-simple'
|
||||
self.dispatch(['up', '-d'], None)
|
||||
client = docker_client(version='1.21')
|
||||
|
||||
networks = client.networks(names=[self.project.name])
|
||||
self.assertEqual(len(networks), 0)
|
||||
|
||||
for service in self.project.get_services():
|
||||
containers = service.containers()
|
||||
self.assertEqual(len(containers), 1)
|
||||
self.assertNotEqual(containers[0].get('Config.Hostname'), service.name)
|
||||
|
||||
web_container = self.project.get_service('web').containers()[0]
|
||||
self.assertTrue(web_container.get('HostConfig.Links'))
|
||||
|
||||
def test_up_with_networking(self):
|
||||
self.require_api_version('1.21')
|
||||
|
||||
self.base_dir = 'tests/fixtures/links-composefile'
|
||||
self.dispatch(['--x-networking', 'up', '-d'], None)
|
||||
client = docker_client(version='1.21')
|
||||
|
||||
services = self.project.get_services()
|
||||
|
||||
networks = client.networks(names=[self.project.name])
|
||||
for n in networks:
|
||||
self.addCleanup(client.remove_network, n['Id'])
|
||||
network_name = self.project.networks.networks['default'].full_name
|
||||
networks = self.client.networks(names=[network_name])
|
||||
self.assertEqual(len(networks), 1)
|
||||
self.assertEqual(networks[0]['Driver'], 'bridge')
|
||||
assert 'com.docker.network.bridge.enable_icc' not in networks[0]['Options']
|
||||
|
||||
network = client.inspect_network(networks[0]['Id'])
|
||||
self.assertEqual(len(network['Containers']), len(services))
|
||||
network = self.client.inspect_network(networks[0]['Id'])
|
||||
|
||||
for service in services:
|
||||
containers = service.containers()
|
||||
self.assertEqual(len(containers), 1)
|
||||
self.assertIn(containers[0].id, network['Containers'])
|
||||
|
||||
container = containers[0]
|
||||
self.assertIn(container.id, network['Containers'])
|
||||
|
||||
networks = container.get('NetworkSettings.Networks')
|
||||
self.assertEqual(list(networks), [network['Name']])
|
||||
|
||||
self.assertEqual(
|
||||
sorted(networks[network['Name']]['Aliases']),
|
||||
sorted([service.name, container.short_id]))
|
||||
|
||||
for service in services:
|
||||
assert self.lookup(container, service.name)
|
||||
|
||||
    @v2_only()
    def test_up_with_default_network_config(self):
        filename = 'default-network-config.yml'

        self.base_dir = 'tests/fixtures/networks'
        self._project = get_project(self.base_dir, [filename])

        self.dispatch(['-f', filename, 'up', '-d'], None)

        network_name = self.project.networks.networks['default'].full_name
        networks = self.client.networks(names=[network_name])

        assert networks[0]['Options']['com.docker.network.bridge.enable_icc'] == 'false'

    @v2_only()
    def test_up_with_networks(self):
        self.base_dir = 'tests/fixtures/networks'
        self.dispatch(['up', '-d'], None)

        back_name = '{}_back'.format(self.project.name)
        front_name = '{}_front'.format(self.project.name)

        networks = [
            n for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]

        # Two networks were created: back and front
        assert sorted(n['Name'] for n in networks) == [back_name, front_name]

        back_network = [n for n in networks if n['Name'] == back_name][0]
        front_network = [n for n in networks if n['Name'] == front_name][0]

        web_container = self.project.get_service('web').containers()[0]
        self.assertFalse(web_container.get('HostConfig.Links'))
        app_container = self.project.get_service('app').containers()[0]
        db_container = self.project.get_service('db').containers()[0]

    def test_up_with_links(self):
        for net_name in [front_name, back_name]:
            links = app_container.get('NetworkSettings.Networks.{}.Links'.format(net_name))
            assert '{}:database'.format(db_container.name) in links

        # db and app joined the back network
        assert sorted(back_network['Containers']) == sorted([db_container.id, app_container.id])

        # web and app joined the front network
        assert sorted(front_network['Containers']) == sorted([web_container.id, app_container.id])

        # web can see app but not db
        assert self.lookup(web_container, "app")
        assert not self.lookup(web_container, "db")

        # app can see db
        assert self.lookup(app_container, "db")

        # app has aliased db to "database"
        assert self.lookup(app_container, "database")

    @v2_only()
    def test_up_missing_network(self):
        self.base_dir = 'tests/fixtures/networks'

        result = self.dispatch(
            ['-f', 'missing-network.yml', 'up', '-d'],
            returncode=1)

        assert 'Service "web" uses an undefined network "foo"' in result.stderr

    @v2_only()
    def test_up_with_network_mode(self):
        c = self.client.create_container('busybox', 'top', name='composetest_network_mode_container')
        self.addCleanup(self.client.remove_container, c, force=True)
        self.client.start(c)
        container_mode_source = 'container:{}'.format(c['Id'])

        filename = 'network-mode.yml'

        self.base_dir = 'tests/fixtures/networks'
        self._project = get_project(self.base_dir, [filename])

        self.dispatch(['-f', filename, 'up', '-d'], None)

        networks = [
            n for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]
        assert not networks

        for name in ['bridge', 'host', 'none']:
            container = self.project.get_service(name).containers()[0]
            assert list(container.get('NetworkSettings.Networks')) == [name]
            assert container.get('HostConfig.NetworkMode') == name

        service_mode_source = 'container:{}'.format(
            self.project.get_service('bridge').containers()[0].id)
        service_mode_container = self.project.get_service('service').containers()[0]
        assert not service_mode_container.get('NetworkSettings.Networks')
        assert service_mode_container.get('HostConfig.NetworkMode') == service_mode_source

        container_mode_container = self.project.get_service('container').containers()[0]
        assert not container_mode_container.get('NetworkSettings.Networks')
        assert container_mode_container.get('HostConfig.NetworkMode') == container_mode_source

    @v2_only()
    def test_up_external_networks(self):
        filename = 'external-networks.yml'

        self.base_dir = 'tests/fixtures/networks'
        self._project = get_project(self.base_dir, [filename])

        result = self.dispatch(['-f', filename, 'up', '-d'], returncode=1)
        assert 'declared as external, but could not be found' in result.stderr

        networks = [
            n['Name'] for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]
        assert not networks

        network_names = ['{}_{}'.format(self.project.name, n) for n in ['foo', 'bar']]
        for name in network_names:
            self.client.create_network(name)

        self.dispatch(['-f', filename, 'up', '-d'])
        container = self.project.containers()[0]
        assert sorted(list(container.get('NetworkSettings.Networks'))) == sorted(network_names)

    @v2_only()
    def test_up_with_external_default_network(self):
        filename = 'external-default.yml'

        self.base_dir = 'tests/fixtures/networks'
        self._project = get_project(self.base_dir, [filename])

        result = self.dispatch(['-f', filename, 'up', '-d'], returncode=1)
        assert 'declared as external, but could not be found' in result.stderr

        networks = [
            n['Name'] for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]
        assert not networks

        network_name = 'composetest_external_network'
        self.client.create_network(network_name)

        self.dispatch(['-f', filename, 'up', '-d'])
        container = self.project.containers()[0]
        assert list(container.get('NetworkSettings.Networks')) == [network_name]

    @v2_only()
    def test_up_no_services(self):
        self.base_dir = 'tests/fixtures/no-services'
        self.dispatch(['up', '-d'], None)

        network_names = [
            n['Name'] for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]
        assert network_names == []

    def test_up_with_links_v1(self):
        self.base_dir = 'tests/fixtures/links-composefile'
        self.dispatch(['up', '-d', 'web'], None)

        # No network was created
        network_name = self.project.networks.networks['default'].full_name
        networks = self.client.networks(names=[network_name])
        assert networks == []

        web = self.project.get_service('web')
        db = self.project.get_service('db')
        console = self.project.get_service('console')

        # console was not started
        self.assertEqual(len(web.containers()), 1)
        self.assertEqual(len(db.containers()), 1)
        self.assertEqual(len(console.containers()), 0)

        # web has links
        web_container = web.containers()[0]
        self.assertTrue(web_container.get('HostConfig.Links'))

    def test_up_with_net_is_invalid(self):
        self.base_dir = 'tests/fixtures/net-container'

        result = self.dispatch(
            ['-f', 'v2-invalid.yml', 'up', '-d'],
            returncode=1)

        # TODO: fix validation error messages for v2 files
        # assert "Unsupported config option for service 'web': 'net'" in exc.exconly()
        assert "Unsupported config option" in result.stderr

    def test_up_with_net_v1(self):
        self.base_dir = 'tests/fixtures/net-container'
        self.dispatch(['up', '-d'], None)

        bar = self.project.get_service('bar')
        bar_container = bar.containers()[0]

        foo = self.project.get_service('foo')
        foo_container = foo.containers()[0]

        assert foo_container.get('HostConfig.NetworkMode') == \
            'container:{}'.format(bar_container.id)

    def test_up_with_no_deps(self):
        self.base_dir = 'tests/fixtures/links-composefile'
        self.dispatch(['up', '-d', '--no-deps', 'web'], None)
@@ -375,6 +707,17 @@ class CLITestCase(DockerClientTestCase):
        os.kill(proc.pid, signal.SIGTERM)
        wait_on_condition(ContainerCountCondition(self.project, 0))

    @v2_only()
    def test_up_handles_force_shutdown(self):
        self.base_dir = 'tests/fixtures/sleeps-composefile'
        proc = start_process(self.base_dir, ['up', '-t', '200'])
        wait_on_condition(ContainerCountCondition(self.project, 2))

        os.kill(proc.pid, signal.SIGTERM)
        time.sleep(0.1)
        os.kill(proc.pid, signal.SIGTERM)
        wait_on_condition(ContainerCountCondition(self.project, 0))

    def test_run_service_without_links(self):
        self.base_dir = 'tests/fixtures/links-composefile'
        self.dispatch(['run', 'console', '/bin/true'])
@@ -558,6 +901,28 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(port_short, "127.0.0.1:30000")
        self.assertEqual(port_full, "127.0.0.1:30001")

    def test_run_with_expose_ports(self):
        # create one off container
        self.base_dir = 'tests/fixtures/expose-composefile'
        self.dispatch(['run', '-d', '--service-ports', 'simple'])
        container = self.project.get_service('simple').containers(one_off=True)[0]

        ports = container.ports
        self.assertEqual(len(ports), 9)
        # exposed ports are not mapped to host ports
        assert ports['3000/tcp'] is None
        assert ports['3001/tcp'] is None
        assert ports['3001/udp'] is None
        assert ports['3002/tcp'] is None
        assert ports['3003/tcp'] is None
        assert ports['3004/tcp'] is None
        assert ports['3005/tcp'] is None
        assert ports['3006/udp'] is None
        assert ports['3007/udp'] is None

        # close all one off containers we just created
        container.stop()

    def test_run_with_custom_name(self):
        self.base_dir = 'tests/fixtures/environment-composefile'
        name = 'the-container-name'
@@ -567,18 +932,48 @@ class CLITestCase(DockerClientTestCase):
        container, = service.containers(stopped=True, one_off=True)
        self.assertEqual(container.name, name)

    def test_run_with_networking(self):
        self.require_api_version('1.21')
        client = docker_client(version='1.21')
        self.base_dir = 'tests/fixtures/simple-dockerfile'
        self.dispatch(['--x-networking', 'run', 'simple', 'true'], None)
        service = self.project.get_service('simple')
        container, = service.containers(stopped=True, one_off=True)
        networks = client.networks(names=[self.project.name])
        for n in networks:
            self.addCleanup(client.remove_network, n['Id'])
        self.assertEqual(len(networks), 1)
        self.assertEqual(container.human_readable_command, u'true')
    @v2_only()
    def test_run_interactive_connects_to_network(self):
        self.base_dir = 'tests/fixtures/networks'

        self.dispatch(['up', '-d'])
        self.dispatch(['run', 'app', 'nslookup', 'app'])
        self.dispatch(['run', 'app', 'nslookup', 'db'])

        containers = self.project.get_service('app').containers(
            stopped=True, one_off=True)
        assert len(containers) == 2

        for container in containers:
            networks = container.get('NetworkSettings.Networks')

            assert sorted(list(networks)) == [
                '{}_{}'.format(self.project.name, name)
                for name in ['back', 'front']
            ]

            for _, config in networks.items():
                assert not config['Aliases']

    @v2_only()
    def test_run_detached_connects_to_network(self):
        self.base_dir = 'tests/fixtures/networks'
        self.dispatch(['up', '-d'])
        self.dispatch(['run', '-d', 'app', 'top'])

        container = self.project.get_service('app').containers(one_off=True)[0]
        networks = container.get('NetworkSettings.Networks')

        assert sorted(list(networks)) == [
            '{}_{}'.format(self.project.name, name)
            for name in ['back', 'front']
        ]

        for _, config in networks.items():
            assert not config['Aliases']

        assert self.lookup(container, 'app')
        assert self.lookup(container, 'db')

    def test_run_handles_sigint(self):
        proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
@@ -609,13 +1004,13 @@ class CLITestCase(DockerClientTestCase):
    def test_rm(self):
        service = self.project.get_service('simple')
        service.create_container()
        service.kill()
        kill_service(service)
        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.dispatch(['rm', '--force'], None)
        self.assertEqual(len(service.containers(stopped=True)), 0)
        service = self.project.get_service('simple')
        service.create_container()
        service.kill()
        kill_service(service)
        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.dispatch(['rm', '-f'], None)
        self.assertEqual(len(service.containers(stopped=True)), 0)
@@ -631,6 +1026,51 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.assertFalse(service.containers(stopped=True)[0].is_running)

    def test_stop_signal(self):
        self.base_dir = 'tests/fixtures/stop-signal-composefile'
        self.dispatch(['up', '-d'], None)
        service = self.project.get_service('simple')
        self.assertEqual(len(service.containers()), 1)
        self.assertTrue(service.containers()[0].is_running)

        self.dispatch(['stop', '-t', '1'], None)
        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.assertFalse(service.containers(stopped=True)[0].is_running)
        self.assertEqual(service.containers(stopped=True)[0].exit_code, 0)

    def test_start_no_containers(self):
        result = self.dispatch(['start'], returncode=1)
        assert 'No containers to start' in result.stderr

    @v2_only()
    def test_up_logging(self):
        self.base_dir = 'tests/fixtures/logging-composefile'
        self.dispatch(['up', '-d'])
        simple = self.project.get_service('simple').containers()[0]
        log_config = simple.get('HostConfig.LogConfig')
        self.assertTrue(log_config)
        self.assertEqual(log_config.get('Type'), 'none')

        another = self.project.get_service('another').containers()[0]
        log_config = another.get('HostConfig.LogConfig')
        self.assertTrue(log_config)
        self.assertEqual(log_config.get('Type'), 'json-file')
        self.assertEqual(log_config.get('Config')['max-size'], '10m')

    def test_up_logging_legacy(self):
        self.base_dir = 'tests/fixtures/logging-composefile-legacy'
        self.dispatch(['up', '-d'])
        simple = self.project.get_service('simple').containers()[0]
        log_config = simple.get('HostConfig.LogConfig')
        self.assertTrue(log_config)
        self.assertEqual(log_config.get('Type'), 'none')

        another = self.project.get_service('another').containers()[0]
        log_config = another.get('HostConfig.LogConfig')
        self.assertTrue(log_config)
        self.assertEqual(log_config.get('Type'), 'json-file')
        self.assertEqual(log_config.get('Config')['max-size'], '10m')

    def test_pause_unpause(self):
        self.dispatch(['up', '-d'], None)
        service = self.project.get_service('simple')
@@ -642,6 +1082,14 @@ class CLITestCase(DockerClientTestCase):
        self.dispatch(['unpause'], None)
        self.assertFalse(service.containers()[0].is_paused)

    def test_pause_no_containers(self):
        result = self.dispatch(['pause'], returncode=1)
        assert 'No containers to pause' in result.stderr

    def test_unpause_no_containers(self):
        result = self.dispatch(['unpause'], returncode=1)
        assert 'No containers to unpause' in result.stderr

    def test_logs_invalid_service_name(self):
        self.dispatch(['logs', 'madeupname'], returncode=1)

@@ -682,7 +1130,7 @@ class CLITestCase(DockerClientTestCase):
    def test_restart(self):
        service = self.project.get_service('simple')
        container = service.create_container()
        container.start()
        service.start_container(container)
        started_at = container.dictionary['State']['StartedAt']
        self.dispatch(['restart', '-t', '1'], None)
        container.inspect()
@@ -704,6 +1152,10 @@ class CLITestCase(DockerClientTestCase):
        self.dispatch(['restart', '-t', '1'], None)
        self.assertEqual(len(service.containers(stopped=False)), 1)

    def test_restart_no_containers(self):
        result = self.dispatch(['restart'], returncode=1)
        assert 'No containers to restart' in result.stderr

    def test_scale(self):
        project = self.project

@@ -758,6 +1210,35 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(get_port(3000, index=2), containers[1].get_local_port(3000))
        self.assertEqual(get_port(3002), "")

    def test_events_json(self):
        events_proc = start_process(self.base_dir, ['events', '--json'])
        self.dispatch(['up', '-d'])
        wait_on_condition(ContainerCountCondition(self.project, 2))

        os.kill(events_proc.pid, signal.SIGINT)
        result = wait_on_process(events_proc, returncode=1)
        lines = [json.loads(line) for line in result.stdout.rstrip().split('\n')]
        assert [e['action'] for e in lines] == ['create', 'start', 'create', 'start']

    def test_events_human_readable(self):
        events_proc = start_process(self.base_dir, ['events'])
        self.dispatch(['up', '-d', 'simple'])
        wait_on_condition(ContainerCountCondition(self.project, 1))

        os.kill(events_proc.pid, signal.SIGINT)
        result = wait_on_process(events_proc, returncode=1)
        lines = result.stdout.rstrip().split('\n')
        assert len(lines) == 2

        container, = self.project.containers()
        expected_template = (
            ' container {} {} (image=busybox:latest, '
            'name=simplecomposefile_simple_1)')

        assert expected_template.format('create', container.id) in lines[0]
        assert expected_template.format('start', container.id) in lines[1]
        assert lines[0].startswith(datetime.date.today().isoformat())

    def test_env_file_relative_to_compose_file(self):
        config_path = os.path.abspath('tests/fixtures/env-file/docker-compose.yml')
        self.dispatch(['-f', config_path, 'up', '-d'], None)
@@ -776,7 +1257,7 @@ class CLITestCase(DockerClientTestCase):
        self.dispatch(['up', '-d'], None)

        container = self.project.containers(stopped=True)[0]
        actual_host_path = container.get('Volumes')['/container-path']
        actual_host_path = container.get_mount('/container-path')['Source']
        components = actual_host_path.split('/')
        assert components[-2:] == ['home-dir', 'my-volume']

@@ -814,7 +1295,7 @@ class CLITestCase(DockerClientTestCase):

        web, other, db = containers
        self.assertEqual(web.human_readable_command, 'top')
        self.assertTrue({'db', 'other'} <= set(web.links()))
        self.assertTrue({'db', 'other'} <= set(get_links(web)))
        self.assertEqual(db.human_readable_command, 'top')
        self.assertEqual(other.human_readable_command, 'top')

@@ -836,7 +1317,9 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(len(containers), 2)
        web = containers[1]

        self.assertEqual(set(web.links()), set(['db', 'mydb_1', 'extends_mydb_1']))
        self.assertEqual(
            set(get_links(web)),
            set(['db', 'mydb_1', 'extends_mydb_1']))

        expected_env = set([
            "FOO=1",

11
tests/fixtures/expose-composefile/docker-compose.yml
vendored
Normal file
@@ -0,0 +1,11 @@

simple:
  image: busybox:latest
  command: top
  expose:
    - '3000'
    - '3001/tcp'
    - '3001/udp'
    - '3002-3003'
    - '3004-3005/tcp'
    - '3006-3007/udp'
13
tests/fixtures/extends/common-env-labels-ulimits.yml
vendored
Normal file
@@ -0,0 +1,13 @@
web:
  extends:
    file: common.yml
    service: web
  environment:
    - FOO=2
    - BAZ=3
  labels: ['label=one']
  ulimits:
    nproc: 65535
    memlock:
      soft: 1024
      hard: 2048
1
tests/fixtures/extends/common.yml
vendored
@@ -1,6 +1,7 @@
web:
  image: busybox
  command: /bin/true
  net: host
  environment:
    - FOO=1
    - BAR=1
1
tests/fixtures/extends/docker-compose.yml
vendored
@@ -11,6 +11,7 @@ myweb:
    BAR: "2"
    # add BAZ
    BAZ: "2"
  net: bridge
mydb:
  image: busybox
  command: top
12
tests/fixtures/extends/invalid-net-v2.yml
vendored
Normal file
@@ -0,0 +1,12 @@
version: "2"
services:
  myweb:
    build: '.'
    extends:
      service: web
    command: top
  web:
    build: '.'
    network_mode: "service:net"
  net:
    build: '.'
5
tests/fixtures/invalid-composefile/invalid.yml
vendored
Normal file
@@ -0,0 +1,5 @@

notaservice: oops

web:
  image: 'alpine:edge'
10
tests/fixtures/logging-composefile-legacy/docker-compose.yml
vendored
Normal file
@@ -0,0 +1,10 @@
simple:
  image: busybox:latest
  command: top
  log_driver: "none"
another:
  image: busybox:latest
  command: top
  log_driver: "json-file"
  log_opt:
    max-size: "10m"
14
tests/fixtures/logging-composefile/docker-compose.yml
vendored
Normal file
@@ -0,0 +1,14 @@
version: "2"
services:
  simple:
    image: busybox:latest
    command: top
    logging:
      driver: "none"
  another:
    image: busybox:latest
    command: top
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
7
tests/fixtures/net-container/docker-compose.yml
vendored
Normal file
@@ -0,0 +1,7 @@
foo:
  image: busybox
  command: top
  net: "container:bar"
bar:
  image: busybox
  command: top
10
tests/fixtures/net-container/v2-invalid.yml
vendored
Normal file
@@ -0,0 +1,10 @@
version: "2"

services:
  foo:
    image: busybox
    command: top
  bar:
    image: busybox
    command: top
    net: "container:foo"
Some files were not shown because too many files have changed in this diff