Merge pull request #5726 from docker/bump-1.20.0-rc1

Bump 1.20.0 rc1
Commit 296d8ed155: Joffrey F, 2018-02-27 15:22:38 -08:00 (committed by GitHub)
59 changed files with 1893 additions and 340 deletions


@@ -5,15 +5,15 @@ jobs:
     xcode: "8.3.3"
     steps:
       - checkout
-#      - run:
-#          name: install python3
-#          command: brew install python3
+      - run:
+          name: install python3
+          command: brew update > /dev/null && brew install python3
       - run:
          name: install tox
          command: sudo pip install --upgrade tox==2.1.1
       - run:
          name: unit tests
-          command: tox -e py27 -- tests/unit
+          command: tox -e py27,py36 -- tests/unit
   build-osx-binary:
     macos:


@ -1,6 +1,83 @@
Change log Change log
========== ==========
1.20.0 (2018-03-07)
-------------------
### New features
#### Compose file version 3.6
- Introduced version 3.6 of the `docker-compose.yml` specification.
This version requires to be used with Docker Engine 18.02.0 or above.
- Added support for the `tmpfs.size` property in volume mappings
#### Compose file version 3.2 and up
- The `--build-arg` option can now be used without specifying a service
in `docker-compose build`
#### Compose file version 2.3
- Added support for `device_cgroup_rules` in service definitions
- Added support for the `tmpfs.size` property in long-form volume mappings
- The `--build-arg` option can now be used without specifying a service
in `docker-compose build`
#### All formats
- Added a `--log-level` option to the top-level `docker-compose` command.
Accepted values are `debug`, `info`, `warning`, `error`, `critical`.
Default log level is `info`
- `docker-compose run` now allows users to unset the container's entrypoint
- Proxy configuration found in the `~/.docker/config.json` file now populates
environment and build args for containers created by Compose
- Added a `--use-aliases` flag to `docker-compose run`, indicating that
network aliases declared in the service's config should be used for the
running container
- `docker-compose run` now kills and removes the running container upon
receiving `SIGHUP`
- `docker-compose ps` now shows the containers' health status if available
- Added the long-form `--detach` option to the `exec`, `run` and `up`
commands
### Bugfixes
- Fixed `.dockerignore` handling, notably with regard to absolute paths
and last-line precedence rules
- Fixed a bug introduced in 1.19.0 which caused the default certificate path
to not be honored by Compose
- Fixed a bug where Compose would incorrectly check whether a symlink's
destination was accessible when part of a build context
- Fixed a bug where `.dockerignore` files containing lines of whitespace
caused Compose to error out on Windows
- Fixed a bug where `--tls*` and `--host` options wouldn't be properly honored
for interactive `run` and `exec` commands
- A `seccomp:<filepath>` entry in the `security_opt` config now correctly
sends the contents of the file to the engine
- Improved support for non-unicode locales
- Fixed a crash occurring on Windows when the user's home directory name
contained non-ASCII characters
- Fixed a bug occurring during builds caused by files with a negative `mtime`
values in the build context
1.19.0 (2018-02-07) 1.19.0 (2018-02-07)
------------------- -------------------
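The proxy-configuration feature listed above reads proxy settings from `~/.docker/config.json`. As a minimal, illustrative sketch (not Compose's actual implementation; `proxy_env_from_config` and `ENV_NAMES` are hypothetical names), the `proxies` section of that file can be mapped onto the conventional environment variables like so:

```python
import json

# The "proxies" section is the standard config.json layout; the "default"
# entry applies to every daemon. This sketch maps its keys onto the
# conventional proxy environment variable names.
ENV_NAMES = {
    'httpProxy': 'HTTP_PROXY',
    'httpsProxy': 'HTTPS_PROXY',
    'noProxy': 'NO_PROXY',
}


def proxy_env_from_config(config_json):
    """Return the environment variables implied by a config.json string."""
    config = json.loads(config_json)
    proxies = config.get('proxies', {}).get('default', {})
    return {ENV_NAMES[k]: v for k, v in proxies.items() if k in ENV_NAMES}
```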


@@ -43,7 +43,11 @@ To run the style checks at any time run `tox -e pre-commit`.
 
 ## Submitting a pull request
 
-See Docker's [basic contribution workflow](https://docs.docker.com/opensource/workflow/make-a-contribution/#the-basic-contribution-workflow) for a guide on how to submit a pull request for code or documentation.
+See Docker's [basic contribution workflow](https://docs.docker.com/v17.06/opensource/code/#code-contribution-workflow) for a guide on how to submit a pull request for code.
+
+## Documentation changes
+
+Issues and pull requests to update the documentation should be submitted to the [docs repo](https://github.com/docker/docker.github.io). You can learn more about contributing to the documentation [here](https://docs.docker.com/opensource/#how-to-contribute-to-the-docs).
 
 ## Running the test suite
 
@@ -69,6 +73,4 @@ you can specify a test directory, file, module, class or method:
 
 ## Finding things to work on
 
-We use a [ZenHub board](https://www.zenhub.io/) to keep track of specific things we are working on and planning to work on. If you're looking for things to work on, stuff in the backlog is a great place to start.
-
-For more information about our project planning, take a look at our [GitHub wiki](https://github.com/docker/compose/wiki).
+[Issues marked with the `exp/beginner` label](https://github.com/docker/compose/issues?q=is%3Aopen+is%3Aissue+label%3Aexp%2Fbeginner) are a good starting point for people looking to make their first contribution to the project.


@@ -1,21 +1,12 @@
-FROM debian:wheezy
+FROM python:3.6
 
 RUN set -ex; \
     apt-get update -qq; \
     apt-get install -y \
         locales \
-        gcc \
-        make \
-        zlib1g \
-        zlib1g-dev \
-        libssl-dev \
-        git \
-        ca-certificates \
         curl \
-        libsqlite3-dev \
-        libbz2-dev \
-    ; \
-    rm -rf /var/lib/apt/lists/*
+        python-dev \
+        git
 
 RUN curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stable/x86_64/docker-17.12.0-ce.tgz" && \
     SHA256=692e1c72937f6214b1038def84463018d8e320c8eaf8530546c84c2f8f9c767d; \
@@ -25,44 +16,6 @@ RUN curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stabl
     chmod +x /usr/local/bin/docker && \
     rm dockerbins.tgz
 
-# Build Python 2.7.13 from source
-RUN set -ex; \
-    curl -LO https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz && \
-    SHA256=a4f05a0720ce0fd92626f0278b6b433eee9a6173ddf2bced7957dfb599a5ece1; \
-    echo "${SHA256} Python-2.7.13.tgz" | sha256sum -c - && \
-    tar -xzf Python-2.7.13.tgz; \
-    cd Python-2.7.13; \
-    ./configure --enable-shared; \
-    make; \
-    make install; \
-    cd ..; \
-    rm -rf /Python-2.7.13; \
-    rm Python-2.7.13.tgz
-
-# Build python 3.4 from source
-RUN set -ex; \
-    curl -LO https://www.python.org/ftp/python/3.4.6/Python-3.4.6.tgz && \
-    SHA256=fe59daced99549d1d452727c050ae486169e9716a890cffb0d468b376d916b48; \
-    echo "${SHA256} Python-3.4.6.tgz" | sha256sum -c - && \
-    tar -xzf Python-3.4.6.tgz; \
-    cd Python-3.4.6; \
-    ./configure --enable-shared; \
-    make; \
-    make install; \
-    cd ..; \
-    rm -rf /Python-3.4.6; \
-    rm Python-3.4.6.tgz
-
-# Make libpython findable
-ENV LD_LIBRARY_PATH /usr/local/lib
-
-# Install pip
-RUN set -ex; \
-    curl -LO https://bootstrap.pypa.io/get-pip.py && \
-    SHA256=19dae841a150c86e2a09d475b5eb0602861f2a5b7761ec268049a662dbd2bd0c; \
-    echo "${SHA256} get-pip.py" | sha256sum -c - && \
-    python get-pip.py
-
 # Python3 requires a valid locale
 RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
 ENV LANG en_US.UTF-8
@@ -83,4 +36,4 @@ RUN tox --notest
 
 ADD . /code/
 RUN chown -R user /code/
 
-ENTRYPOINT ["/code/.tox/py27/bin/docker-compose"]
+ENTRYPOINT ["/code/.tox/py36/bin/docker-compose"]
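The Dockerfile above pins every download to a SHA256 digest via `sha256sum -c`. The same verification can be sketched in Python; `verify_sha256` is a hypothetical helper, shown only to illustrate the checksum-pinning pattern:

```python
import hashlib

def verify_sha256(data, expected_hex):
    """Return True when `data` hashes to the pinned hex checksum."""
    return hashlib.sha256(data).hexdigest() == expected_hex
```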


@@ -33,15 +33,15 @@ RUN set -ex; \
     cd ..; \
     rm -rf /Python-2.7.13
 
-# Build python 3.4 from source
+# Build python 3.6 from source
 RUN set -ex; \
-    curl -L https://www.python.org/ftp/python/3.4.6/Python-3.4.6.tgz | tar -xz; \
-    cd Python-3.4.6; \
+    curl -L https://www.python.org/ftp/python/3.6.4/Python-3.6.4.tgz | tar -xz; \
+    cd Python-3.6.4; \
     ./configure --enable-shared; \
     make; \
     make install; \
     cd ..; \
-    rm -rf /Python-3.4.6
+    rm -rf /Python-3.6.4
 
 # Make libpython findable
 ENV LD_LIBRARY_PATH /usr/local/lib


@@ -29,6 +29,16 @@ RUN mkdir -p /lib /lib64 /usr/glibc-compat/lib/locale /etc && \
     ln -s /usr/glibc-compat/lib/ld-linux-x86-64.so.2 /lib64/ld-linux-x86-64.so.2 && \
     ln -s /usr/glibc-compat/etc/ld.so.cache /etc/ld.so.cache
 
+RUN apk add --no-cache curl && \
+    curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stable/x86_64/docker-17.12.0-ce.tgz" && \
+    SHA256=692e1c72937f6214b1038def84463018d8e320c8eaf8530546c84c2f8f9c767d; \
+    echo "${SHA256} dockerbins.tgz" | sha256sum -c - && \
+    tar xvf dockerbins.tgz docker/docker --strip-components 1 && \
+    mv docker /usr/local/bin/docker && \
+    chmod +x /usr/local/bin/docker && \
+    rm dockerbins.tgz && \
+    apk del curl
+
 COPY dist/docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
 ENTRYPOINT ["docker-compose"]

Jenkinsfile

@@ -18,12 +18,26 @@ def buildImage = { ->
     }
 }
 
+def get_versions = { int number ->
+    def docker_versions
+    wrappedNode(label: "ubuntu && !zfs") {
+        def result = sh(script: """docker run --rm \\
+            --entrypoint=/code/.tox/py27/bin/python \\
+            ${image.id} \\
+            /code/script/test/versions.py -n ${number} docker/docker-ce recent
+        """, returnStdout: true
+        )
+        docker_versions = result.split()
+    }
+    return docker_versions
+}
+
 def runTests = { Map settings ->
     def dockerVersions = settings.get("dockerVersions", null)
     def pythonVersions = settings.get("pythonVersions", null)
 
     if (!pythonVersions) {
-        throw new Exception("Need Python versions to test. e.g.: `runTests(pythonVersions: 'py27,py34')`")
+        throw new Exception("Need Python versions to test. e.g.: `runTests(pythonVersions: 'py27,py36')`")
     }
     if (!dockerVersions) {
         throw new Exception("Need Docker versions to test. e.g.: `runTests(dockerVersions: 'all')`")
@@ -46,7 +60,7 @@ def runTests = { Map settings ->
           -e "DOCKER_VERSIONS=${dockerVersions}" \\
           -e "BUILD_NUMBER=\$BUILD_TAG" \\
           -e "PY_TEST_VERSIONS=${pythonVersions}" \\
-          --entrypoint="script/ci" \\
+          --entrypoint="script/test/ci" \\
           ${image.id} \\
           --verbose
         """
@@ -56,9 +70,14 @@ def runTests = { Map settings ->
 }
 
 buildImage()
-// TODO: break this out into meaningful "DOCKER_VERSIONS" values instead of all
-parallel(
-    failFast: true,
-    all_py27: runTests(pythonVersions: "py27", dockerVersions: "all"),
-    all_py34: runTests(pythonVersions: "py34", dockerVersions: "all"),
-)
+def testMatrix = [failFast: true]
+def docker_versions = get_versions(2)
+
+for (int i = 0; i < docker_versions.length; i++) {
+    def dockerVersion = docker_versions[i]
+    testMatrix["${dockerVersion}_py27"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py27"])
+    testMatrix["${dockerVersion}_py36"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py36"])
+}
+
+parallel(testMatrix)
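The rewritten Jenkinsfile replaces the two fixed `all` jobs with a matrix of one job per (Docker version, Python version) pair, named `<dockerVersion>_<pyVersion>`. A Python sketch of that expansion (`build_test_matrix` is a hypothetical name used only to illustrate the scheme):

```python
def build_test_matrix(docker_versions, python_versions=('py27', 'py36')):
    """Return a mapping of job name -> (dockerVersion, pythonVersion)."""
    matrix = {}
    for docker_version in docker_versions:
        for py in python_versions:
            # Job names follow the "<dockerVersion>_<pyVersion>" convention
            # used in the Jenkinsfile loop above.
            matrix['{}_{}'.format(docker_version, py)] = (docker_version, py)
    return matrix
```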


@@ -2,15 +2,15 @@
 version: '{branch}-{build}'
 
 install:
-  - "SET PATH=C:\\Python27-x64;C:\\Python27-x64\\Scripts;%PATH%"
+  - "SET PATH=C:\\Python36-x64;C:\\Python36-x64\\Scripts;%PATH%"
   - "python --version"
-  - "pip install tox==2.1.1 virtualenv==13.1.2"
+  - "pip install tox==2.9.1 virtualenv==15.1.0"
 
 # Build the binary after tests
 build: false
 
 test_script:
-  - "tox -e py27,py34 -- tests/unit"
+  - "tox -e py27,py36 -- tests/unit"
   - ps: ".\\script\\build\\windows.ps1"
 
 artifacts:


@@ -1,4 +1,4 @@
 from __future__ import absolute_import
 from __future__ import unicode_literals
 
-__version__ = '1.19.0'
+__version__ = '1.20.0-rc1'


@@ -1,49 +0,0 @@
-from __future__ import absolute_import
-from __future__ import print_function
-from __future__ import unicode_literals
-
-import os
-import subprocess
-import sys
-
-# Attempt to detect https://github.com/docker/compose/issues/4344
-try:
-    # We don't try importing pip because it messes with package imports
-    # on some Linux distros (Ubuntu, Fedora)
-    # https://github.com/docker/compose/issues/4425
-    # https://github.com/docker/compose/issues/4481
-    # https://github.com/pypa/pip/blob/master/pip/_vendor/__init__.py
-    env = os.environ.copy()
-    env[str('PIP_DISABLE_PIP_VERSION_CHECK')] = str('1')
-
-    s_cmd = subprocess.Popen(
-        # DO NOT replace this call with a `sys.executable` call. It breaks the binary
-        # distribution (with the binary calling itself recursively over and over).
-        ['pip', 'freeze'], stderr=subprocess.PIPE, stdout=subprocess.PIPE,
-        env=env
-    )
-    packages = s_cmd.communicate()[0].splitlines()
-    dockerpy_installed = len(
-        list(filter(lambda p: p.startswith(b'docker-py=='), packages))
-    ) > 0
-    if dockerpy_installed:
-        from .colors import yellow
-        print(
-            yellow('WARNING:'),
-            "Dependency conflict: an older version of the 'docker-py' package "
-            "may be polluting the namespace. "
-            "If you're experiencing crashes, run the following command to remedy the issue:\n"
-            "pip uninstall docker-py; pip uninstall docker; pip install docker",
-            file=sys.stderr
-        )
-
-except OSError:
-    # pip command is not available, which indicates it's probably the binary
-    # distribution of Compose which is not affected
-    pass
-except UnicodeDecodeError:
-    # ref: https://github.com/docker/compose/issues/4663
-    # This could be caused by a number of things, but it seems to be a
-    # python 2 + MacOS interaction. It's not ideal to ignore this, but at least
-    # it doesn't make the program unusable.
-    pass


@@ -38,6 +38,7 @@ def project_from_options(project_dir, options):
         tls_config=tls_config_from_options(options, environment),
         environment=environment,
         override_dir=options.get('--project-directory'),
+        compatibility=options.get('--compatibility'),
     )
 
@@ -63,7 +64,8 @@ def get_config_from_options(base_dir, options):
         base_dir, options, environment
     )
     return config.load(
-        config.find(base_dir, config_path, environment)
+        config.find(base_dir, config_path, environment),
+        options.get('--compatibility')
     )
 
@@ -100,14 +102,15 @@ def get_client(environment, verbose=False, version=None, tls_config=None, host=N
 
 def get_project(project_dir, config_path=None, project_name=None, verbose=False,
-                host=None, tls_config=None, environment=None, override_dir=None):
+                host=None, tls_config=None, environment=None, override_dir=None,
+                compatibility=False):
     if not environment:
         environment = Environment.from_env_file(project_dir)
     config_details = config.find(project_dir, config_path, environment, override_dir)
     project_name = get_project_name(
         config_details.working_dir, project_name, environment
     )
-    config_data = config.load(config_details)
+    config_data = config.load(config_details, compatibility)
 
     api_version = environment.get(
         'COMPOSE_API_VERSION',


@@ -9,16 +9,21 @@ from docker import APIClient
 from docker.errors import TLSParameterError
 from docker.tls import TLSConfig
 from docker.utils import kwargs_from_env
+from docker.utils.config import home_dir
 
 from ..config.environment import Environment
 from ..const import HTTP_TIMEOUT
-from ..utils import unquote_path
 from .errors import UserError
 from .utils import generate_user_agent
+from .utils import unquote_path
 
 log = logging.getLogger(__name__)
 
+
+def default_cert_path():
+    return os.path.join(home_dir(), '.docker')
+
+
 def get_tls_version(environment):
     compose_tls_version = environment.get('COMPOSE_TLS_VERSION', None)
     if not compose_tls_version:
@@ -56,6 +61,12 @@ def tls_config_from_options(options, environment=None):
         key = os.path.join(cert_path, 'key.pem')
         ca_cert = os.path.join(cert_path, 'ca.pem')
 
+    if verify and not any((ca_cert, cert, key)):
+        # Default location for cert files is ~/.docker
+        ca_cert = os.path.join(default_cert_path(), 'ca.pem')
+        cert = os.path.join(default_cert_path(), 'cert.pem')
+        key = os.path.join(default_cert_path(), 'key.pem')
+
     tls_version = get_tls_version(environment)
 
     advanced_opts = any([ca_cert, cert, key, verify, tls_version])
@@ -106,4 +117,7 @@ def docker_client(environment, version=None, tls_config=None, host=None,
 
     kwargs['user_agent'] = generate_user_agent()
 
-    return APIClient(**kwargs)
+    client = APIClient(**kwargs)
+    client._original_base_url = kwargs.get('base_url')
+
+    return client
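The fallback added above restores the docker CLI's convention: when TLS verification is requested but no certificate paths were given, the standard file names are looked up under `~/.docker`. A sketch of the path construction (`default_tls_files` is a hypothetical helper; the real code resolves the home directory via docker-py's `home_dir()`):

```python
import os

def default_tls_files(cert_path=None):
    """Return the conventional ca/cert/key .pem paths under a cert directory,
    defaulting to ~/.docker when no directory is given."""
    base = cert_path or os.path.join(os.path.expanduser('~'), '.docker')
    return {name: os.path.join(base, name + '.pem') for name in ('ca', 'cert', 'key')}
```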


@@ -100,7 +100,10 @@ def dispatch():
         {'options_first': True, 'version': get_version_info('compose')})
 
     options, handler, command_options = dispatcher.parse(sys.argv[1:])
-    setup_console_handler(console_handler, options.get('--verbose'), options.get('--no-ansi'))
+    setup_console_handler(console_handler,
+                          options.get('--verbose'),
+                          options.get('--no-ansi'),
+                          options.get("--log-level"))
     setup_parallel_logger(options.get('--no-ansi'))
     if options.get('--no-ansi'):
         command_options['--no-color'] = True
@@ -113,13 +116,13 @@ def perform_command(options, handler, command_options):
         handler(command_options)
         return
 
-    if options['COMMAND'] in ('config', 'bundle'):
-        command = TopLevelCommand(None)
-        handler(command, options, command_options)
+    if options['COMMAND'] == 'config':
+        command = TopLevelCommand(None, options=options)
+        handler(command, command_options)
         return
 
     project = project_from_options('.', options)
-    command = TopLevelCommand(project)
+    command = TopLevelCommand(project, options=options)
     with errors.handle_connection_errors(project.client):
         handler(command, command_options)
@@ -139,7 +142,7 @@ def setup_parallel_logger(noansi):
         compose.parallel.ParallelStreamWriter.set_noansi()
 
-def setup_console_handler(handler, verbose, noansi=False):
+def setup_console_handler(handler, verbose, noansi=False, level=None):
     if handler.stream.isatty() and noansi is False:
         format_class = ConsoleWarningFormatter
     else:
@@ -147,10 +150,26 @@ def setup_console_handler(handler, verbose, noansi=False):
     if verbose:
         handler.setFormatter(format_class('%(name)s.%(funcName)s: %(message)s'))
-        handler.setLevel(logging.DEBUG)
+        loglevel = logging.DEBUG
     else:
         handler.setFormatter(format_class())
-        handler.setLevel(logging.INFO)
+        loglevel = logging.INFO
+
+    if level is not None:
+        levels = {
+            'DEBUG': logging.DEBUG,
+            'INFO': logging.INFO,
+            'WARNING': logging.WARNING,
+            'ERROR': logging.ERROR,
+            'CRITICAL': logging.CRITICAL,
+        }
+        loglevel = levels.get(level.upper())
+        if loglevel is None:
+            raise UserError(
+                'Invalid value for --log-level. Expected one of DEBUG, INFO, WARNING, ERROR, CRITICAL.'
+            )
+
+    handler.setLevel(loglevel)
 
 # stolen from docopt master
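The `--log-level` handling above maps the flag's value onto the `logging` module's constants, case-insensitively, and rejects unknown names. A standalone sketch of the same resolution (`resolve_log_level` is a hypothetical name; the real code raises `UserError` rather than `ValueError`):

```python
import logging

# Map the documented --log-level names onto logging's numeric constants.
LEVELS = {
    'DEBUG': logging.DEBUG,
    'INFO': logging.INFO,
    'WARNING': logging.WARNING,
    'ERROR': logging.ERROR,
    'CRITICAL': logging.CRITICAL,
}


def resolve_log_level(level, default=logging.INFO):
    """Return the numeric log level for `level`, or `default` when unset."""
    if level is None:
        return default
    numeric = LEVELS.get(level.upper())
    if numeric is None:
        raise ValueError(
            'Invalid value for --log-level. Expected one of '
            'DEBUG, INFO, WARNING, ERROR, CRITICAL.'
        )
    return numeric
```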
@@ -168,9 +187,12 @@ class TopLevelCommand(object):
       docker-compose -h|--help
 
     Options:
-      -f, --file FILE             Specify an alternate compose file (default: docker-compose.yml)
-      -p, --project-name NAME     Specify an alternate project name (default: directory name)
+      -f, --file FILE             Specify an alternate compose file
+                                  (default: docker-compose.yml)
+      -p, --project-name NAME     Specify an alternate project name
+                                  (default: directory name)
       --verbose                   Show more output
+      --log-level LEVEL           Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
       --no-ansi                   Do not print ANSI control characters
       -v, --version               Print version and exit
       -H, --host HOST             Daemon socket to connect to
@@ -180,11 +202,12 @@ class TopLevelCommand(object):
      --tlscert CLIENT_CERT_PATH  Path to TLS certificate file
      --tlskey TLS_KEY_PATH       Path to TLS key file
      --tlsverify                 Use TLS and verify the remote
-      --skip-hostname-check       Don't check the daemon's hostname against the name specified
-                                  in the client certificate (for example if your docker host
-                                  is an IP address)
+      --skip-hostname-check       Don't check the daemon's hostname against the
+                                  name specified in the client certificate
       --project-directory PATH    Specify an alternate working directory
                                   (default: the path of the Compose file)
+      --compatibility             If set, Compose will attempt to convert deploy
+                                  keys in v3 files to their non-Swarm equivalent
 
     Commands:
       build              Build or rebuild services
@@ -215,9 +238,10 @@ class TopLevelCommand(object):
       version            Show the Docker-Compose version information
     """
 
-    def __init__(self, project, project_dir='.'):
+    def __init__(self, project, project_dir='.', options=None):
         self.project = project
         self.project_dir = '.'
+        self.toplevel_options = options or {}
     def build(self, options):
         """
@@ -234,26 +258,28 @@ class TopLevelCommand(object):
             --no-cache           Do not use cache when building the image.
             --pull               Always attempt to pull a newer version of the image.
             -m, --memory MEM     Sets memory limit for the build container.
-            --build-arg key=val  Set build-time variables for one service.
+            --build-arg key=val  Set build-time variables for services.
         """
         service_names = options['SERVICE']
         build_args = options.get('--build-arg', None)
         if build_args:
+            if not service_names and docker.utils.version_lt(self.project.client.api_version, '1.25'):
+                raise UserError(
+                    '--build-arg is only supported when services are specified for API version < 1.25.'
+                    ' Please use a Compose file version > 2.2 or specify which services to build.'
+                )
             environment = Environment.from_env_file(self.project_dir)
             build_args = resolve_build_args(build_args, environment)
 
-        if not service_names and build_args:
-            raise UserError("Need service name for --build-arg option")
-
         self.project.build(
-            service_names=service_names,
+            service_names=options['SERVICE'],
             no_cache=bool(options.get('--no-cache', False)),
             pull=bool(options.get('--pull', False)),
             force_rm=bool(options.get('--force-rm', False)),
             memory=options.get('--memory'),
             build_args=build_args)
 
-    def bundle(self, config_options, options):
+    def bundle(self, options):
         """
         Generate a Distributed Application Bundle (DAB) from the Compose file.
@@ -272,8 +298,7 @@ class TopLevelCommand(object):
             -o, --output PATH  Path to write the bundle file to.
                                Defaults to "<project name>.dab".
         """
-        self.project = project_from_options('.', config_options)
-        compose_config = get_config_from_options(self.project_dir, config_options)
+        compose_config = get_config_from_options(self.project_dir, self.toplevel_options)
 
         output = options["--output"]
         if not output:
@@ -286,7 +311,7 @@ class TopLevelCommand(object):
         log.info("Wrote bundle to {}".format(output))
 
-    def config(self, config_options, options):
+    def config(self, options):
         """
         Validate and view the Compose file.
@@ -301,11 +326,12 @@ class TopLevelCommand(object):
         """
-        compose_config = get_config_from_options(self.project_dir, config_options)
+        compose_config = get_config_from_options(self.project_dir, self.toplevel_options)
         image_digests = None
 
         if options['--resolve-image-digests']:
-            self.project = project_from_options('.', config_options)
-            image_digests = image_digests_for_project(self.project)
+            self.project = project_from_options('.', self.toplevel_options)
+            with errors.handle_connection_errors(self.project.client):
+                image_digests = image_digests_for_project(self.project)
 
         if options['--quiet']:
@@ -424,7 +450,7 @@ class TopLevelCommand(object):
         Usage: exec [options] [-e KEY=VAL...] SERVICE COMMAND [ARGS...]
 
         Options:
-            -d                Detached mode: Run command in the background.
+            -d, --detach      Detached mode: Run command in the background.
             --privileged      Give extended privileges to the process.
             -u, --user USER   Run the command as this user.
             -T                Disable pseudo-tty allocation. By default `docker-compose exec`
@@ -438,7 +464,7 @@ class TopLevelCommand(object):
         use_cli = not environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
         index = int(options.get('--index'))
         service = self.project.get_service(options['SERVICE'])
-        detach = options['-d']
+        detach = options.get('--detach')
 
         if options['--env'] and docker.utils.version_lt(self.project.client.api_version, '1.25'):
             raise UserError("Setting environment for exec is not supported in API < 1.25'")
@@ -451,7 +477,10 @@ class TopLevelCommand(object):
         tty = not options["-T"]
 
         if IS_WINDOWS_PLATFORM or use_cli and not detach:
-            sys.exit(call_docker(build_exec_command(options, container.id, command)))
+            sys.exit(call_docker(
+                build_exec_command(options, container.id, command),
+                self.toplevel_options)
+            )
 
         create_exec_options = {
             "privileged": options["--privileged"],
@@ -503,14 +532,14 @@ class TopLevelCommand(object):
         Usage: images [options] [SERVICE...]
 
         Options:
-            -q     Only display IDs
+            -q, --quiet  Only display IDs
         """
         containers = sorted(
             self.project.containers(service_names=options['SERVICE'], stopped=True) +
             self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only),
             key=attrgetter('name'))
 
-        if options['-q']:
+        if options['--quiet']:
             for image in set(c.image for c in containers):
                 print(image.split(':')[1])
         else:
@@ -624,12 +653,12 @@ class TopLevelCommand(object):
         Usage: ps [options] [SERVICE...]
 
         Options:
-            -q                Only display IDs
+            -q, --quiet       Only display IDs
             --services        Display services
             --filter KEY=VAL  Filter services by a property
         """
-        if options['-q'] and options['--services']:
-            raise UserError('-q and --services cannot be combined')
+        if options['--quiet'] and options['--services']:
+            raise UserError('--quiet and --services cannot be combined')
 
         if options['--services']:
             filt = build_filter(options.get('--filter'))
@@ -644,7 +673,7 @@ class TopLevelCommand(object):
             self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only),
             key=attrgetter('name'))
 
-        if options['-q']:
+        if options['--quiet']:
             for container in containers:
                 print(container.id)
         else:
@@ -676,13 +705,15 @@ class TopLevelCommand(object):
         Options:
             --ignore-pull-failures  Pull what it can and ignores images with pull failures.
             --parallel              Pull multiple images in parallel.
-            --quiet                 Pull without printing progress information
+            -q, --quiet             Pull without printing progress information
+            --include-deps          Also pull services declared as dependencies
         """
         self.project.pull(
             service_names=options['SERVICE'],
             ignore_pull_failures=options.get('--ignore-pull-failures'),
             parallel_pull=options.get('--parallel'),
             silent=options.get('--quiet'),
+            include_deps=options.get('--include-deps'),
         )
 
     def push(self, options):
@ -760,7 +791,7 @@ class TopLevelCommand(object):
SERVICE [COMMAND] [ARGS...] SERVICE [COMMAND] [ARGS...]
Options: Options:
-d Detached mode: Run container in the background, print -d, --detach Detached mode: Run container in the background, print
new container name. new container name.
--name NAME Assign a name to the container --name NAME Assign a name to the container
--entrypoint CMD Override the entrypoint of the image. --entrypoint CMD Override the entrypoint of the image.
@@ -772,13 +803,15 @@ class TopLevelCommand(object):
-p, --publish=[] Publish a container's port(s) to the host -p, --publish=[] Publish a container's port(s) to the host
--service-ports Run command with the service's ports enabled and mapped --service-ports Run command with the service's ports enabled and mapped
to the host. to the host.
--use-aliases Use the service's network aliases in the network(s) the
container connects to.
-v, --volume=[] Bind mount a volume (default []) -v, --volume=[] Bind mount a volume (default [])
-T Disable pseudo-tty allocation. By default `docker-compose run` -T Disable pseudo-tty allocation. By default `docker-compose run`
allocates a TTY. allocates a TTY.
-w, --workdir="" Working directory inside the container -w, --workdir="" Working directory inside the container
""" """
service = self.project.get_service(options['SERVICE']) service = self.project.get_service(options['SERVICE'])
detach = options['-d'] detach = options.get('--detach')
if options['--publish'] and options['--service-ports']: if options['--publish'] and options['--service-ports']:
raise UserError( raise UserError(
@@ -794,7 +827,10 @@ class TopLevelCommand(object):
command = service.options.get('command') command = service.options.get('command')
container_options = build_container_options(options, detach, command) container_options = build_container_options(options, detach, command)
run_one_off_container(container_options, self.project, service, options, self.project_dir) run_one_off_container(
container_options, self.project, service, options,
self.toplevel_options, self.project_dir
)
def scale(self, options): def scale(self, options):
""" """
@@ -926,10 +962,11 @@ class TopLevelCommand(object):
Usage: up [options] [--scale SERVICE=NUM...] [SERVICE...] Usage: up [options] [--scale SERVICE=NUM...] [SERVICE...]
Options: Options:
-d Detached mode: Run containers in the background, -d, --detach Detached mode: Run containers in the background,
print new container names. Incompatible with print new container names. Incompatible with
--abort-on-container-exit. --abort-on-container-exit.
--no-color Produce monochrome output. --no-color Produce monochrome output.
--quiet-pull Pull without printing progress information
--no-deps Don't start linked services. --no-deps Don't start linked services.
--force-recreate Recreate containers even if their configuration --force-recreate Recreate containers even if their configuration
and image haven't changed. and image haven't changed.
@@ -961,7 +998,7 @@ class TopLevelCommand(object):
service_names = options['SERVICE'] service_names = options['SERVICE']
timeout = timeout_from_opts(options) timeout = timeout_from_opts(options)
remove_orphans = options['--remove-orphans'] remove_orphans = options['--remove-orphans']
detached = options.get('-d') detached = options.get('--detach')
no_start = options.get('--no-start') no_start = options.get('--no-start')
if detached and (cascade_stop or exit_value_from): if detached and (cascade_stop or exit_value_from):
@@ -973,7 +1010,7 @@ class TopLevelCommand(object):
if ignore_orphans and remove_orphans: if ignore_orphans and remove_orphans:
raise UserError("COMPOSE_IGNORE_ORPHANS and --remove-orphans cannot be combined.") raise UserError("COMPOSE_IGNORE_ORPHANS and --remove-orphans cannot be combined.")
opts = ['-d', '--abort-on-container-exit', '--exit-code-from'] opts = ['--detach', '--abort-on-container-exit', '--exit-code-from']
for excluded in [x for x in opts if options.get(x) and no_start]: for excluded in [x for x in opts if options.get(x) and no_start]:
raise UserError('--no-start and {} cannot be combined.'.format(excluded)) raise UserError('--no-start and {} cannot be combined.'.format(excluded))
@@ -994,7 +1031,8 @@ class TopLevelCommand(object):
start=not no_start, start=not no_start,
always_recreate_deps=always_recreate_deps, always_recreate_deps=always_recreate_deps,
reset_container_image=rebuild, reset_container_image=rebuild,
renew_anonymous_volumes=options.get('--renew-anon-volumes') renew_anonymous_volumes=options.get('--renew-anon-volumes'),
silent=options.get('--quiet-pull'),
) )
try: try:
@@ -1108,7 +1146,6 @@ def timeout_from_opts(options):
def image_digests_for_project(project, allow_push=False): def image_digests_for_project(project, allow_push=False):
with errors.handle_connection_errors(project.client):
try: try:
return get_image_digests( return get_image_digests(
project, project,
@@ -1197,8 +1234,10 @@ def build_container_options(options, detach, command):
if options['--label']: if options['--label']:
container_options['labels'] = parse_labels(options['--label']) container_options['labels'] = parse_labels(options['--label'])
if options['--entrypoint']: if options.get('--entrypoint') is not None:
container_options['entrypoint'] = options.get('--entrypoint') container_options['entrypoint'] = (
[""] if options['--entrypoint'] == '' else options['--entrypoint']
)
if options['--rm']: if options['--rm']:
container_options['restart'] = None container_options['restart'] = None
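The new `--entrypoint` handling above distinguishes a flag that was never passed from an explicit empty string: `--entrypoint ""` becomes `[""]`, which the Engine treats as clearing the image's entrypoint (this is what lets `docker-compose run` unset it). A standalone sketch of just that mapping; the `entrypoint_option` helper is hypothetical and not part of the diff:

```python
def entrypoint_option(options):
    """Mirror the new --entrypoint handling: None means 'flag absent',
    an explicit empty string means 'unset the image entrypoint'."""
    if options.get('--entrypoint') is None:
        return None  # flag not given: keep the image's entrypoint
    value = options['--entrypoint']
    # An explicit empty string becomes [""], which the Engine
    # interprets as "no entrypoint".
    return [""] if value == '' else value

assert entrypoint_option({}) is None                       # flag absent
assert entrypoint_option({'--entrypoint': ''}) == [""]     # explicit unset
assert entrypoint_option({'--entrypoint': '/bin/sh'}) == '/bin/sh'
```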
@@ -1225,7 +1264,8 @@ def build_container_options(options, detach, command):
return container_options return container_options
def run_one_off_container(container_options, project, service, options, project_dir='.'): def run_one_off_container(container_options, project, service, options, toplevel_options,
project_dir='.'):
if not options['--no-deps']: if not options['--no-deps']:
deps = service.get_dependency_names() deps = service.get_dependency_names()
if deps: if deps:
@@ -1243,8 +1283,10 @@ def run_one_off_container(container_options, project, service, options, project_
one_off=True, one_off=True,
**container_options) **container_options)
if options['-d']: use_network_aliases = options['--use-aliases']
service.start_container(container)
if options.get('--detach'):
service.start_container(container, use_network_aliases)
print(container.name) print(container.name)
return return
@@ -1256,11 +1298,15 @@ def run_one_off_container(container_options, project, service, options, project_
use_cli = not environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI') use_cli = not environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
signals.set_signal_handler_to_shutdown() signals.set_signal_handler_to_shutdown()
signals.set_signal_handler_to_hang_up()
try: try:
try: try:
if IS_WINDOWS_PLATFORM or use_cli: if IS_WINDOWS_PLATFORM or use_cli:
service.connect_container_to_networks(container) service.connect_container_to_networks(container, use_network_aliases)
exit_code = call_docker(["start", "--attach", "--interactive", container.id]) exit_code = call_docker(
["start", "--attach", "--interactive", container.id],
toplevel_options
)
else: else:
operation = RunOperation( operation = RunOperation(
project.client, project.client,
@@ -1270,13 +1316,13 @@ def run_one_off_container(container_options, project, service, options, project_
) )
pty = PseudoTerminal(project.client, operation) pty = PseudoTerminal(project.client, operation)
sockets = pty.sockets() sockets = pty.sockets()
service.start_container(container) service.start_container(container, use_network_aliases)
pty.start(sockets) pty.start(sockets)
exit_code = container.wait() exit_code = container.wait()
except signals.ShutdownException: except (signals.ShutdownException):
project.client.stop(container.id) project.client.stop(container.id)
exit_code = 1 exit_code = 1
except signals.ShutdownException: except (signals.ShutdownException, signals.HangUpException):
project.client.kill(container.id) project.client.kill(container.id)
remove_container(force=True) remove_container(force=True)
sys.exit(2) sys.exit(2)
@@ -1339,12 +1385,32 @@ def exit_if(condition, message, exit_code):
raise SystemExit(exit_code) raise SystemExit(exit_code)
def call_docker(args): def call_docker(args, dockeropts):
executable_path = find_executable('docker') executable_path = find_executable('docker')
if not executable_path: if not executable_path:
raise UserError(errors.docker_not_found_msg("Couldn't find `docker` binary.")) raise UserError(errors.docker_not_found_msg("Couldn't find `docker` binary."))
args = [executable_path] + args tls = dockeropts.get('--tls', False)
ca_cert = dockeropts.get('--tlscacert')
cert = dockeropts.get('--tlscert')
key = dockeropts.get('--tlskey')
verify = dockeropts.get('--tlsverify')
host = dockeropts.get('--host')
tls_options = []
if tls:
tls_options.append('--tls')
if ca_cert:
tls_options.extend(['--tlscacert', ca_cert])
if cert:
tls_options.extend(['--tlscert', cert])
if key:
tls_options.extend(['--tlskey', key])
if verify:
tls_options.append('--tlsverify')
if host:
tls_options.extend(['--host', host])
args = [executable_path] + tls_options + args
log.debug(" ".join(map(pipes.quote, args))) log.debug(" ".join(map(pipes.quote, args)))
return subprocess.call(args) return subprocess.call(args)
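The reworked `call_docker` above forwards compose's top-level TLS and host flags to the `docker` binary it shells out to. The flag assembly can be sketched on its own; the `docker_tls_args` helper is hypothetical (the real code inlines this logic and then prepends the result to the `docker` argument list):

```python
def docker_tls_args(dockeropts):
    """Build the global docker CLI flags from compose's top-level
    options, in the same order as call_docker assembles them."""
    tls_options = []
    if dockeropts.get('--tls', False):
        tls_options.append('--tls')
    # Cert-related flags carry a value; only emit them when set.
    for flag in ('--tlscacert', '--tlscert', '--tlskey'):
        value = dockeropts.get(flag)
        if value:
            tls_options.extend([flag, value])
    if dockeropts.get('--tlsverify'):
        tls_options.append('--tlsverify')
    host = dockeropts.get('--host')
    if host:
        tls_options.extend(['--host', host])
    return tls_options

args = docker_tls_args({'--tlsverify': True, '--host': 'tcp://10.0.0.1:2376'})
assert args == ['--tlsverify', '--host', 'tcp://10.0.0.1:2376']
```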
@@ -1369,7 +1435,7 @@ def parse_scale_args(options):
def build_exec_command(options, container_id, command): def build_exec_command(options, container_id, command):
args = ["exec"] args = ["exec"]
if options["-d"]: if options["--detach"]:
args += ["--detach"] args += ["--detach"]
else: else:
args += ["--interactive"] args += ["--interactive"]
@@ -10,6 +10,10 @@ class ShutdownException(Exception):
pass pass
class HangUpException(Exception):
pass
def shutdown(signal, frame): def shutdown(signal, frame):
raise ShutdownException() raise ShutdownException()
@@ -23,6 +27,16 @@ def set_signal_handler_to_shutdown():
set_signal_handler(shutdown) set_signal_handler(shutdown)
def hang_up(signal, frame):
raise HangUpException()
def set_signal_handler_to_hang_up():
# on Windows a ValueError will be raised if trying to set signal handler for SIGHUP
if not IS_WINDOWS_PLATFORM:
signal.signal(signal.SIGHUP, hang_up)
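The new SIGHUP plumbing above follows the same pattern as the existing shutdown handler: the signal is converted into an exception so the run loop can kill the one-off container when the controlling terminal hangs up. A self-contained sketch mirroring the diff; here `IS_WINDOWS_PLATFORM` is derived from `sys.platform` rather than imported from compose's constants:

```python
import signal
import sys

IS_WINDOWS_PLATFORM = sys.platform == 'win32'


class HangUpException(Exception):
    pass


def hang_up(signum, frame):
    # Raised out of the signal handler so the surrounding
    # try/except in the run loop can clean up the container.
    raise HangUpException()


def set_signal_handler_to_hang_up():
    # Windows has no SIGHUP; installing a handler for it there
    # raises ValueError, so skip on that platform.
    if not IS_WINDOWS_PLATFORM:
        signal.signal(signal.SIGHUP, hang_up)
```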
def ignore_sigpipe(): def ignore_sigpipe():
# Restore default behavior for SIGPIPE instead of raising # Restore default behavior for SIGPIPE instead of raising
# an exception when encountered. # an exception when encountered.
@@ -131,14 +131,6 @@ def generate_user_agent():
return " ".join(parts) return " ".join(parts)
def unquote_path(s):
if not s:
return s
if s[0] == '"' and s[-1] == '"':
return s[1:-1]
return s
def human_readable_file_size(size): def human_readable_file_size(size):
suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB', ] suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB', ]
order = int(math.log(size, 2) / 10) if size else 0 order = int(math.log(size, 2) / 10) if size else 0
@@ -16,6 +16,7 @@ from . import types
from .. import const from .. import const
from ..const import COMPOSEFILE_V1 as V1 from ..const import COMPOSEFILE_V1 as V1
from ..const import COMPOSEFILE_V2_1 as V2_1 from ..const import COMPOSEFILE_V2_1 as V2_1
from ..const import COMPOSEFILE_V2_3 as V2_3
from ..const import COMPOSEFILE_V3_0 as V3_0 from ..const import COMPOSEFILE_V3_0 as V3_0
from ..const import COMPOSEFILE_V3_4 as V3_4 from ..const import COMPOSEFILE_V3_4 as V3_4
from ..utils import build_string_dict from ..utils import build_string_dict
@@ -39,6 +40,7 @@ from .sort_services import sort_service_dicts
from .types import MountSpec from .types import MountSpec
from .types import parse_extra_hosts from .types import parse_extra_hosts
from .types import parse_restart_spec from .types import parse_restart_spec
from .types import SecurityOpt
from .types import ServiceLink from .types import ServiceLink
from .types import ServicePort from .types import ServicePort
from .types import VolumeFromSpec from .types import VolumeFromSpec
@@ -70,6 +72,7 @@ DOCKER_CONFIG_KEYS = [
'cpus', 'cpus',
'cpuset', 'cpuset',
'detach', 'detach',
'device_cgroup_rules',
'devices', 'devices',
'dns', 'dns',
'dns_search', 'dns_search',
@@ -341,7 +344,7 @@ def find_candidates_in_parent_dirs(filenames, path):
return (candidates, path) return (candidates, path)
def check_swarm_only_config(service_dicts): def check_swarm_only_config(service_dicts, compatibility=False):
warning_template = ( warning_template = (
"Some services ({services}) use the '{key}' key, which will be ignored. " "Some services ({services}) use the '{key}' key, which will be ignored. "
"Compose does not support '{key}' configuration - use " "Compose does not support '{key}' configuration - use "
@@ -357,13 +360,13 @@ def check_swarm_only_config(service_dicts):
key=key key=key
) )
) )
if not compatibility:
check_swarm_only_key(service_dicts, 'deploy') check_swarm_only_key(service_dicts, 'deploy')
check_swarm_only_key(service_dicts, 'credential_spec') check_swarm_only_key(service_dicts, 'credential_spec')
check_swarm_only_key(service_dicts, 'configs') check_swarm_only_key(service_dicts, 'configs')
def load(config_details): def load(config_details, compatibility=False):
"""Load the configuration from a working directory and a list of """Load the configuration from a working directory and a list of
configuration files. Files are loaded in order, and merged on top configuration files. Files are loaded in order, and merged on top
of each other to create the final configuration. of each other to create the final configuration.
@@ -391,15 +394,17 @@ def load(config_details):
configs = load_mapping( configs = load_mapping(
config_details.config_files, 'get_configs', 'Config', config_details.working_dir config_details.config_files, 'get_configs', 'Config', config_details.working_dir
) )
service_dicts = load_services(config_details, main_file) service_dicts = load_services(config_details, main_file, compatibility)
if main_file.version != V1: if main_file.version != V1:
for service_dict in service_dicts: for service_dict in service_dicts:
match_named_volumes(service_dict, volumes) match_named_volumes(service_dict, volumes)
check_swarm_only_config(service_dicts) check_swarm_only_config(service_dicts, compatibility)
return Config(main_file.version, service_dicts, volumes, networks, secrets, configs) version = V2_3 if compatibility and main_file.version >= V3_0 else main_file.version
return Config(version, service_dicts, volumes, networks, secrets, configs)
def load_mapping(config_files, get_func, entity_type, working_dir=None): def load_mapping(config_files, get_func, entity_type, working_dir=None):
@@ -441,7 +446,7 @@ def validate_external(entity_type, name, config, version):
entity_type, name, ', '.join(k for k in config if k != 'external'))) entity_type, name, ', '.join(k for k in config if k != 'external')))
def load_services(config_details, config_file): def load_services(config_details, config_file, compatibility=False):
def build_service(service_name, service_dict, service_names): def build_service(service_name, service_dict, service_names):
service_config = ServiceConfig.with_abs_paths( service_config = ServiceConfig.with_abs_paths(
config_details.working_dir, config_details.working_dir,
@@ -459,7 +464,9 @@ def load_services(config_details, config_file):
service_config, service_config,
service_names, service_names,
config_file.version, config_file.version,
config_details.environment) config_details.environment,
compatibility
)
return service_dict return service_dict
def build_services(service_config): def build_services(service_config):
@@ -729,9 +736,9 @@ def process_service(service_config):
if field in service_dict: if field in service_dict:
service_dict[field] = to_list(service_dict[field]) service_dict[field] = to_list(service_dict[field])
service_dict = process_blkio_config(process_ports( service_dict = process_security_opt(process_blkio_config(process_ports(
process_healthcheck(service_dict) process_healthcheck(service_dict)
)) )))
return service_dict return service_dict
@@ -827,7 +834,7 @@ def finalize_service_volumes(service_dict, environment):
return service_dict return service_dict
def finalize_service(service_config, service_names, version, environment): def finalize_service(service_config, service_names, version, environment, compatibility):
service_dict = dict(service_config.config) service_dict = dict(service_config.config)
if 'environment' in service_dict or 'env_file' in service_dict: if 'environment' in service_dict or 'env_file' in service_dict:
@@ -868,10 +875,80 @@ def finalize_service(service_config, service_names, version, environment):
normalize_build(service_dict, service_config.working_dir, environment) normalize_build(service_dict, service_config.working_dir, environment)
if compatibility:
service_dict, ignored_keys = translate_deploy_keys_to_container_config(
service_dict
)
if ignored_keys:
log.warn(
'The following deploy sub-keys are not supported in compatibility mode and have'
' been ignored: {}'.format(', '.join(ignored_keys))
)
service_dict['name'] = service_config.name service_dict['name'] = service_config.name
return normalize_v1_service_format(service_dict) return normalize_v1_service_format(service_dict)
def translate_resource_keys_to_container_config(resources_dict, service_dict):
if 'limits' in resources_dict:
service_dict['mem_limit'] = resources_dict['limits'].get('memory')
if 'cpus' in resources_dict['limits']:
service_dict['cpus'] = float(resources_dict['limits']['cpus'])
if 'reservations' in resources_dict:
service_dict['mem_reservation'] = resources_dict['reservations'].get('memory')
if 'cpus' in resources_dict['reservations']:
return ['resources.reservations.cpus']
return []
def convert_restart_policy(name):
try:
return {
'any': 'always',
'none': 'no',
'on-failure': 'on-failure'
}[name]
except KeyError:
raise ConfigurationError('Invalid restart policy "{}"'.format(name))
def translate_deploy_keys_to_container_config(service_dict):
if 'deploy' not in service_dict:
return service_dict, []
deploy_dict = service_dict['deploy']
ignored_keys = [
k for k in ['endpoint_mode', 'labels', 'update_config', 'placement']
if k in deploy_dict
]
if 'replicas' in deploy_dict and deploy_dict.get('mode', 'replicated') == 'replicated':
service_dict['scale'] = deploy_dict['replicas']
if 'restart_policy' in deploy_dict:
service_dict['restart'] = {
'Name': convert_restart_policy(deploy_dict['restart_policy'].get('condition', 'any')),
'MaximumRetryCount': deploy_dict['restart_policy'].get('max_attempts', 0)
}
for k in deploy_dict['restart_policy'].keys():
if k != 'condition' and k != 'max_attempts':
ignored_keys.append('restart_policy.{}'.format(k))
ignored_keys.extend(
translate_resource_keys_to_container_config(
deploy_dict.get('resources', {}), service_dict
)
)
del service_dict['deploy']
if 'credential_spec' in service_dict:
del service_dict['credential_spec']
if 'configs' in service_dict:
del service_dict['configs']
return service_dict, ignored_keys
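The compatibility-mode translation above reduces to two main rules: `deploy.replicas` becomes `scale`, and `deploy.restart_policy` becomes a container `restart` spec, with everything untranslatable reported back so it can be logged as ignored. A simplified sketch (resource limits and the `credential_spec`/`configs` cleanup are elided):

```python
def translate_deploy_keys(service_dict):
    """Condensed compatibility-mode translation:
    deploy.replicas -> scale, deploy.restart_policy -> restart."""
    if 'deploy' not in service_dict:
        return service_dict, []
    deploy = service_dict.pop('deploy')
    # Swarm-only sub-keys that have no container-level equivalent.
    ignored = [k for k in ('endpoint_mode', 'labels', 'update_config', 'placement')
               if k in deploy]
    condition_map = {'any': 'always', 'none': 'no', 'on-failure': 'on-failure'}
    if 'replicas' in deploy and deploy.get('mode', 'replicated') == 'replicated':
        service_dict['scale'] = deploy['replicas']
    if 'restart_policy' in deploy:
        policy = deploy['restart_policy']
        service_dict['restart'] = {
            'Name': condition_map[policy.get('condition', 'any')],
            'MaximumRetryCount': policy.get('max_attempts', 0),
        }
        ignored.extend('restart_policy.{}'.format(k) for k in policy
                       if k not in ('condition', 'max_attempts'))
    return service_dict, ignored

svc, ignored = translate_deploy_keys({
    'image': 'busybox',
    'deploy': {'replicas': 3,
               'restart_policy': {'condition': 'on-failure', 'delay': '5s'}},
})
assert svc['scale'] == 3
assert svc['restart'] == {'Name': 'on-failure', 'MaximumRetryCount': 0}
assert ignored == ['restart_policy.delay']
```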
def normalize_v1_service_format(service_dict): def normalize_v1_service_format(service_dict):
if 'log_driver' in service_dict or 'log_opt' in service_dict: if 'log_driver' in service_dict or 'log_opt' in service_dict:
if 'logging' not in service_dict: if 'logging' not in service_dict:
@@ -969,7 +1046,7 @@ def merge_service_dicts(base, override, version):
for field in [ for field in [
'cap_add', 'cap_drop', 'expose', 'external_links', 'cap_add', 'cap_drop', 'expose', 'external_links',
'security_opt', 'volumes_from', 'security_opt', 'volumes_from', 'device_cgroup_rules',
]: ]:
md.merge_field(field, merge_unique_items_lists, default=[]) md.merge_field(field, merge_unique_items_lists, default=[])
@@ -1301,6 +1378,16 @@ def split_path_mapping(volume_path):
return (volume_path, None) return (volume_path, None)
def process_security_opt(service_dict):
security_opts = service_dict.get('security_opt', [])
result = []
for value in security_opts:
result.append(SecurityOpt.parse(value))
if result:
service_dict['security_opt'] = result
return service_dict
def join_path_mapping(pair): def join_path_mapping(pair):
(container, host) = pair (container, host) = pair
if isinstance(host, dict): if isinstance(host, dict):
@@ -99,8 +99,8 @@
} }
] ]
}, },
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "cap_add": {"$ref": "#/definitions/list_of_strings"},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "cap_drop": {"$ref": "#/definitions/list_of_strings"},
"cgroup_parent": {"type": "string"}, "cgroup_parent": {"type": "string"},
"command": { "command": {
"oneOf": [ "oneOf": [
@@ -137,7 +137,8 @@
} }
] ]
}, },
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "device_cgroup_rules": {"$ref": "#/definitions/list_of_strings"},
"devices": {"$ref": "#/definitions/list_of_strings"},
"dns_opt": { "dns_opt": {
"type": "array", "type": "array",
"items": { "items": {
@@ -184,7 +185,7 @@
] ]
}, },
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "external_links": {"$ref": "#/definitions/list_of_strings"},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"}, "extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"}, "healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"}, "hostname": {"type": "string"},
@@ -193,7 +194,7 @@
"ipc": {"type": "string"}, "ipc": {"type": "string"},
"isolation": {"type": "string"}, "isolation": {"type": "string"},
"labels": {"$ref": "#/definitions/labels"}, "labels": {"$ref": "#/definitions/labels"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "links": {"$ref": "#/definitions/list_of_strings"},
"logging": { "logging": {
"type": "object", "type": "object",
@@ -264,7 +265,7 @@
"restart": {"type": "string"}, "restart": {"type": "string"},
"runtime": {"type": "string"}, "runtime": {"type": "string"},
"scale": {"type": "integer"}, "scale": {"type": "integer"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "security_opt": {"$ref": "#/definitions/list_of_strings"},
"shm_size": {"type": ["number", "string"]}, "shm_size": {"type": ["number", "string"]},
"sysctls": {"$ref": "#/definitions/list_or_dict"}, "sysctls": {"$ref": "#/definitions/list_or_dict"},
"pids_limit": {"type": ["number", "string"]}, "pids_limit": {"type": ["number", "string"]},
@@ -321,6 +322,12 @@
"properties": { "properties": {
"nocopy": {"type": "boolean"} "nocopy": {"type": "boolean"}
} }
},
"tmpfs": {
"type": "object",
"properties": {
"size": {"type": ["integer", "string"]}
}
} }
} }
} }
@@ -329,7 +336,7 @@
} }
}, },
"volume_driver": {"type": "string"}, "volume_driver": {"type": "string"},
"volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "volumes_from": {"$ref": "#/definitions/list_of_strings"},
"working_dir": {"type": "string"} "working_dir": {"type": "string"}
}, },
@@ -0,0 +1,582 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "config_schema_v3.6.json",
"type": "object",
"required": ["version"],
"properties": {
"version": {
"type": "string"
},
"services": {
"id": "#/properties/services",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/network"
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/volume"
}
},
"additionalProperties": false
},
"secrets": {
"id": "#/properties/secrets",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/secret"
}
},
"additionalProperties": false
},
"configs": {
"id": "#/properties/configs",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/config"
}
},
"additionalProperties": false
}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"},
"network": {"type": "string"},
"target": {"type": "string"},
"shm_size": {"type": ["integer", "string"]}
},
"additionalProperties": false
}
]
},
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cgroup_parent": {"type": "string"},
"command": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"configs": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"container_name": {"type": "string"},
"credential_spec": {"type": "object", "properties": {
"file": {"type": "string"},
"registry": {"type": "string"}
}},
"depends_on": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
"entrypoint": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"ipc": {"type": "string"},
"isolation": {"type": "string"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number", "null"]}
}
}
},
"additionalProperties": false
},
"mac_address": {"type": "string"},
"network_mode": {"type": "string"},
"networks": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"oneOf": [
{
"type": "object",
"properties": {
"aliases": {"$ref": "#/definitions/list_of_strings"},
"ipv4_address": {"type": "string"},
"ipv6_address": {"type": "string"}
},
"additionalProperties": false
},
{"type": "null"}
]
}
},
"additionalProperties": false
}
]
},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"oneOf": [
{"type": "number", "format": "ports"},
{"type": "string", "format": "ports"},
{
"type": "object",
"properties": {
"mode": {"type": "string"},
"target": {"type": "integer"},
"published": {"type": "integer"},
"protocol": {"type": "string"}
},
"additionalProperties": false
}
]
},
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"secrets": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type":"object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"required": ["type"],
"properties": {
"type": {"type": "string"},
"source": {"type": "string"},
"target": {"type": "string"},
"read_only": {"type": "boolean"},
"consistency": {"type": "string"},
"bind": {
"type": "object",
"properties": {
"propagation": {"type": "string"}
}
},
"volume": {
"type": "object",
"properties": {
"nocopy": {"type": "boolean"}
}
},
"tmpfs": {
"type": "object",
"properties": {
"size": {
"type": "integer",
"minimum": 0
}
}
}
},
"additionalProperties": false
}
],
"uniqueItems": true
}
},
"working_dir": {"type": "string"}
},
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string", "format": "duration"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string", "format": "duration"},
"start_period": {"type": "string", "format": "duration"}
}
},
"deployment": {
"id": "#/definitions/deployment",
"type": ["object", "null"],
"properties": {
"mode": {"type": "string"},
"endpoint_mode": {"type": "string"},
"replicas": {"type": "integer"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"update_config": {
"type": "object",
"properties": {
"parallelism": {"type": "integer"},
"delay": {"type": "string", "format": "duration"},
"failure_action": {"type": "string"},
"monitor": {"type": "string", "format": "duration"},
"max_failure_ratio": {"type": "number"},
"order": {"type": "string", "enum": [
"start-first", "stop-first"
]}
},
"additionalProperties": false
},
"resources": {
"type": "object",
"properties": {
"limits": {
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"}
},
"additionalProperties": false
},
"reservations": {
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"},
"generic_resources": {"$ref": "#/definitions/generic_resources"}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"restart_policy": {
"type": "object",
"properties": {
"condition": {"type": "string"},
"delay": {"type": "string", "format": "duration"},
"max_attempts": {"type": "integer"},
"window": {"type": "string", "format": "duration"}
},
"additionalProperties": false
},
"placement": {
"type": "object",
"properties": {
"constraints": {"type": "array", "items": {"type": "string"}},
"preferences": {
"type": "array",
"items": {
"type": "object",
"properties": {
"spread": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"generic_resources": {
"id": "#/definitions/generic_resources",
"type": "array",
"items": {
"type": "object",
"properties": {
"discrete_resource_spec": {
"type": "object",
"properties": {
"kind": {"type": "string"},
"value": {"type": "number"}
},
"additionalProperties": false
}
},
"additionalProperties": false
}
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
"properties": {
"name": {"type": "string"},
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"ipam": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"config": {
"type": "array",
"items": {
"type": "object",
"properties": {
"subnet": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"internal": {"type": "boolean"},
"attachable": {"type": "boolean"},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
"properties": {
"name": {"type": "string"},
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"secret": {
"id": "#/definitions/secret",
"type": "object",
"properties": {
"name": {"type": "string"},
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"config": {
"id": "#/definitions/config",
"type": "object",
"properties": {
"name": {"type": "string"},
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "null"]
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",
"anyOf": [
{"required": ["build"]},
{"required": ["image"]}
],
"properties": {
"build": {
"required": ["context"]
}
}
}
}
}
}
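A fragment like `restart_policy` above rejects unknown keys via `"additionalProperties": false` and constrains each known key's type. A minimal hand-rolled sketch of that rule (not the jsonschema library Compose actually uses, just an illustration of the check):

```python
# Per-key expected Python types, mirroring the restart_policy schema fragment.
restart_policy_schema = {
    "condition": str, "delay": str, "max_attempts": int, "window": str,
}

def validate_restart_policy(data):
    # mimics "additionalProperties": false plus per-key type checks
    errors = []
    for key, value in data.items():
        if key not in restart_policy_schema:
            errors.append('unknown property: {}'.format(key))
        elif not isinstance(value, restart_policy_schema[key]):
            errors.append('{} has wrong type'.format(key))
    return errors

print(validate_restart_policy({'condition': 'on-failure', 'max_attempts': 3}))  # []
print(validate_restart_policy({'max_retries': 3}))  # ['unknown property: max_retries']
```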


@@ -9,6 +9,7 @@ import six
 from .errors import ConfigurationError
 from compose.const import COMPOSEFILE_V2_0 as V2_0
+from compose.utils import parse_bytes

 log = logging.getLogger(__name__)

@@ -215,6 +216,13 @@ def to_str(o):
     return o


+def bytes_to_int(s):
+    v = parse_bytes(s)
+    if v is None:
+        raise ValueError('"{}" is not a valid byte value'.format(s))
+    return v
+
+
 class ConversionMap(object):
     map = {
         service_path('blkio_config', 'weight'): to_int,
@@ -247,6 +255,7 @@ class ConversionMap(object):
         service_path('tty'): to_boolean,
         service_path('volumes', 'read_only'): to_boolean,
         service_path('volumes', 'volume', 'nocopy'): to_boolean,
+        service_path('volumes', 'tmpfs', 'size'): bytes_to_int,
         re_path_basic('network', 'attachable'): to_boolean,
         re_path_basic('network', 'external'): to_boolean,
         re_path_basic('network', 'internal'): to_boolean,
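The new `bytes_to_int` helper converts `tmpfs.size` strings like `64m` to bytes via the docker SDK's `parse_bytes`. A self-contained sketch of the same behaviour, with a simplified stand-in for `parse_bytes` (the real one lives in the docker SDK and accepts a few more forms):

```python
import re

def parse_bytes(s):
    # simplified stand-in for docker.utils.parse_bytes:
    # accepts ints, or strings like '512', '2k', '64m', '1g' (binary multiples)
    if isinstance(s, int):
        return s
    m = re.match(r'^(\d+)([bkmg]?)$', str(s).lower())
    if m is None:
        return None
    units = {'': 1, 'b': 1, 'k': 1024, 'm': 1024 ** 2, 'g': 1024 ** 3}
    return int(m.group(1)) * units[m.group(2)]

def bytes_to_int(s):
    v = parse_bytes(s)
    if v is None:
        raise ValueError('"{}" is not a valid byte value'.format(s))
    return v

print(bytes_to_int('64m'))  # 67108864
```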


@@ -42,6 +42,7 @@ def serialize_string(dumper, data):
 yaml.SafeDumper.add_representer(types.MountSpec, serialize_dict_type)
 yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type)
 yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type)
+yaml.SafeDumper.add_representer(types.SecurityOpt, serialize_config_type)
 yaml.SafeDumper.add_representer(types.ServiceSecret, serialize_dict_type)
 yaml.SafeDumper.add_representer(types.ServiceConfig, serialize_dict_type)
 yaml.SafeDumper.add_representer(types.ServicePort, serialize_dict_type)


@@ -4,6 +4,7 @@ Types for objects parsed from the configuration.
 from __future__ import absolute_import
 from __future__ import unicode_literals

+import json
 import ntpath
 import os
 import re
@@ -13,6 +14,7 @@ import six
 from docker.utils.ports import build_port_bindings

 from ..const import COMPOSEFILE_V1 as V1
+from ..utils import unquote_path
 from .errors import ConfigurationError
 from compose.const import IS_WINDOWS_PLATFORM
 from compose.utils import splitdrive
@@ -141,6 +143,9 @@ class MountSpec(object):
         },
         'bind': {
             'propagation': 'propagation'
+        },
+        'tmpfs': {
+            'size': 'tmpfs_size'
         }
     }

     _fields = ['type', 'source', 'target', 'read_only', 'consistency']
@@ -149,6 +154,9 @@ class MountSpec(object):
     def parse(cls, mount_dict, normalize=False, win_host=False):
         normpath = ntpath.normpath if win_host else os.path.normpath
         if mount_dict.get('source'):
+            if mount_dict['type'] == 'tmpfs':
+                raise ConfigurationError('tmpfs mounts can not specify a source')
             mount_dict['source'] = normpath(mount_dict['source'])
             if normalize:
                 mount_dict['source'] = normalize_path_for_engine(mount_dict['source'])
@@ -451,3 +459,30 @@ def normalize_port_dict(port):
         external_ip=port.get('external_ip', ''),
         has_ext_ip=(':' if port.get('external_ip') else ''),
     )
+
+
+class SecurityOpt(namedtuple('_SecurityOpt', 'value src_file')):
+    @classmethod
+    def parse(cls, value):
+        # based on https://github.com/docker/cli/blob/9de1b162f/cli/command/container/opts.go#L673-L697
+        con = value.split('=', 2)
+        if len(con) == 1 and con[0] != 'no-new-privileges':
+            if ':' not in value:
+                raise ConfigurationError('Invalid security_opt: {}'.format(value))
+            con = value.split(':', 2)
+
+        if con[0] == 'seccomp' and con[1] != 'unconfined':
+            try:
+                with open(unquote_path(con[1]), 'r') as f:
+                    seccomp_data = json.load(f)
+            except (IOError, ValueError) as e:
+                raise ConfigurationError('Error reading seccomp profile: {}'.format(e))
+            return cls(
+                'seccomp={}'.format(json.dumps(seccomp_data)), con[1]
+            )
+        return cls(value, None)
+
+    def repr(self):
+        if self.src_file is not None:
+            return 'seccomp:{}'.format(self.src_file)
+        return self.value
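`SecurityOpt.parse` accepts both the `key=value` and the legacy `key:value` forms, with `no-new-privileges` allowed bare. A stand-alone sketch of just the splitting rules (without the seccomp-file loading), to show which shapes are accepted:

```python
def split_security_opt(value):
    # hypothetical extraction of the splitting logic in SecurityOpt.parse
    con = value.split('=', 2)
    if len(con) == 1 and con[0] != 'no-new-privileges':
        if ':' not in value:
            raise ValueError('Invalid security_opt: {}'.format(value))
        con = value.split(':', 2)
    return con

print(split_security_opt('no-new-privileges'))   # ['no-new-privileges']
print(split_security_opt('label:user:USER'))     # ['label', 'user', 'USER']
print(split_security_opt('seccomp=unconfined'))  # ['seccomp', 'unconfined']
```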


@@ -34,6 +34,7 @@ COMPOSEFILE_V3_2 = ComposeVersion('3.2')
 COMPOSEFILE_V3_3 = ComposeVersion('3.3')
 COMPOSEFILE_V3_4 = ComposeVersion('3.4')
 COMPOSEFILE_V3_5 = ComposeVersion('3.5')
+COMPOSEFILE_V3_6 = ComposeVersion('3.6')

 API_VERSIONS = {
     COMPOSEFILE_V1: '1.21',
@@ -47,6 +48,7 @@ API_VERSIONS = {
     COMPOSEFILE_V3_3: '1.30',
     COMPOSEFILE_V3_4: '1.30',
     COMPOSEFILE_V3_5: '1.30',
+    COMPOSEFILE_V3_6: '1.36',
 }

 API_VERSION_TO_ENGINE_VERSION = {
@@ -61,4 +63,5 @@ API_VERSION_TO_ENGINE_VERSION = {
     API_VERSIONS[COMPOSEFILE_V3_3]: '17.06.0',
     API_VERSIONS[COMPOSEFILE_V3_4]: '17.06.0',
     API_VERSIONS[COMPOSEFILE_V3_5]: '17.06.0',
+    API_VERSIONS[COMPOSEFILE_V3_6]: '18.02.0',
 }
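The two tables above chain a compose-file version to an API version and then to a minimum Engine release; this is why format 3.6 requires Engine 18.02.0. A small sketch of the lookup, using only the values visible in the diff:

```python
# Subset of the const.py tables above, enough to resolve versions 3.5 and 3.6.
API_VERSIONS = {'3.5': '1.30', '3.6': '1.36'}
API_VERSION_TO_ENGINE_VERSION = {'1.30': '17.06.0', '1.36': '18.02.0'}

def min_engine_version(compose_file_version):
    # compose file version -> API version -> minimum engine version
    return API_VERSION_TO_ENGINE_VERSION[API_VERSIONS[compose_file_version]]

print(min_engine_version('3.6'))  # 18.02.0
```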


@@ -129,7 +129,7 @@ class Container(object):
         if self.is_restarting:
             return 'Restarting'
         if self.is_running:
-            return 'Ghost' if self.get('State.Ghost') else 'Up'
+            return 'Ghost' if self.get('State.Ghost') else self.human_readable_health_status
         else:
             return 'Exit %s' % self.get('State.ExitCode')

@@ -172,6 +172,18 @@ class Container(object):
         log_type = self.log_driver
         return not log_type or log_type in ('json-file', 'journald')

+    @property
+    def human_readable_health_status(self):
+        """ Generate UP status string with up time and health
+        """
+        status_string = 'Up'
+        container_status = self.get('State.Health.Status')
+        if container_status == 'starting':
+            status_string += ' (health: starting)'
+        elif container_status is not None:
+            status_string += ' (%s)' % container_status
+        return status_string
+
     def attach_log_stream(self):
         """A log stream can only be attached if the container uses a json-file
         log driver.
@@ -243,7 +255,7 @@ class Container(object):
         self.inspect()

     def wait(self):
-        return self.client.wait(self.id)
+        return self.client.wait(self.id).get('StatusCode', 127)

     def logs(self, *args, **kwargs):
         return self.client.logs(self.id, *args, **kwargs)
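The new `human_readable_health_status` property appends the healthcheck state from `State.Health.Status` to the plain `Up` string. A stand-alone version of the same logic, taking the inspect field directly instead of a `Container`:

```python
def human_readable_health_status(health_status):
    # stand-alone rendering of Container.human_readable_health_status;
    # health_status is the value of the inspect field State.Health.Status
    # (None when the container has no healthcheck configured)
    status_string = 'Up'
    if health_status == 'starting':
        status_string += ' (health: starting)'
    elif health_status is not None:
        status_string += ' (%s)' % health_status
    return status_string

print(human_readable_health_status('healthy'))  # Up (healthy)
```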


@@ -446,7 +446,9 @@ class Project(object):
            start=True,
            always_recreate_deps=False,
            reset_container_image=False,
-           renew_anonymous_volumes=False):
+           renew_anonymous_volumes=False,
+           silent=False,
+           ):

         self.initialize()
         if not ignore_orphans:
@@ -460,7 +462,7 @@ class Project(object):
             include_deps=start_deps)

         for svc in services:
-            svc.ensure_image_exists(do_build=do_build)
+            svc.ensure_image_exists(do_build=do_build, silent=silent)
         plans = self._get_convergence_plans(
             services, strategy, always_recreate_deps=always_recreate_deps)
         scaled_services = self.get_scaled_services(services, scale_override)
@@ -537,8 +539,9 @@ class Project(object):

         return plans

-    def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False):
-        services = self.get_services(service_names, include_deps=False)
+    def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False,
+             include_deps=False):
+        services = self.get_services(service_names, include_deps)

         if parallel_pull:
             def pull_service(service):


@@ -66,6 +66,7 @@ HOST_CONFIG_KEYS = [
     'cpu_shares',
     'cpus',
     'cpuset',
+    'device_cgroup_rules',
     'devices',
     'dns',
     'dns_search',
@@ -305,7 +306,7 @@ class Service(object):
             raise OperationFailedError("Cannot create container for service %s: %s" %
                                        (self.name, ex.explanation))

-    def ensure_image_exists(self, do_build=BuildAction.none):
+    def ensure_image_exists(self, do_build=BuildAction.none, silent=False):
         if self.can_be_built() and do_build == BuildAction.force:
             self.build()
             return
@@ -317,7 +318,7 @@ class Service(object):
             pass

         if not self.can_be_built():
-            self.pull()
+            self.pull(silent=silent)
             return

         if do_build == BuildAction.skip:
@@ -556,8 +557,8 @@ class Service(object):
             container.attach_log_stream()
         return self.start_container(container)

-    def start_container(self, container):
-        self.connect_container_to_networks(container)
+    def start_container(self, container, use_network_aliases=True):
+        self.connect_container_to_networks(container, use_network_aliases)
         try:
             container.start()
         except APIError as ex:
@@ -573,7 +574,7 @@ class Service(object):
             )
         )

-    def connect_container_to_networks(self, container):
+    def connect_container_to_networks(self, container, use_network_aliases=True):
         connected_networks = container.get('NetworkSettings.Networks')

         for network, netdefs in self.prioritized_networks.items():
@@ -582,10 +583,11 @@ class Service(object):
                 continue
             self.client.disconnect_container_from_network(container.id, network)

-            log.debug('Connecting to {}'.format(network))
+            aliases = self._get_aliases(netdefs, container) if use_network_aliases else []
             self.client.connect_container_to_network(
                 container.id, network,
-                aliases=self._get_aliases(netdefs, container),
+                aliases=aliases,
                 ipv4_address=netdefs.get('ipv4_address', None),
                 ipv6_address=netdefs.get('ipv6_address', None),
                 links=self._get_links(False),
@@ -691,9 +693,6 @@ class Service(object):
         return 1 if not numbers else max(numbers) + 1

     def _get_aliases(self, network, container=None):
-        if container and container.labels.get(LABEL_ONE_OFF) == "True":
-            return []
-
         return list(
             {self.name} |
             ({container.short_id} if container else set()) |
@@ -793,8 +792,12 @@ class Service(object):
             ))

         container_options['environment'] = merge_environment(
-            self.options.get('environment'),
-            override_options.get('environment'))
+            self._parse_proxy_config(),
+            merge_environment(
+                self.options.get('environment'),
+                override_options.get('environment')
+            )
+        )

         container_options['labels'] = merge_labels(
             self.options.get('labels'),
@@ -881,6 +884,10 @@ class Service(object):
             init_path = options.get('init')
             options['init'] = True

+        security_opt = [
+            o.value for o in options.get('security_opt')
+        ] if options.get('security_opt') else None
+
         nano_cpus = None
         if 'cpus' in options:
             nano_cpus = int(options.get('cpus') * NANOCPUS_SCALE)
@@ -910,7 +917,7 @@ class Service(object):
             extra_hosts=options.get('extra_hosts'),
             read_only=options.get('read_only'),
             pid_mode=self.pid_mode.mode,
-            security_opt=options.get('security_opt'),
+            security_opt=security_opt,
             ipc_mode=options.get('ipc'),
             cgroup_parent=options.get('cgroup_parent'),
             cpu_quota=options.get('cpu_quota'),
@@ -940,6 +947,7 @@ class Service(object):
             device_write_bps=blkio_config.get('device_write_bps'),
             device_write_iops=blkio_config.get('device_write_iops'),
             mounts=options.get('mounts'),
+            device_cgroup_rules=options.get('device_cgroup_rules'),
         )

     def get_secret_volumes(self):
@@ -963,6 +971,9 @@ class Service(object):
         if build_args_override:
             build_args.update(build_args_override)

+        for k, v in self._parse_proxy_config().items():
+            build_args.setdefault(k, v)
+
         # python2 os.stat() doesn't support unicode on some UNIX, so we
         # encode it to a bytestring to be safe
         path = build_opts.get('context')
@@ -972,7 +983,6 @@ class Service(object):
         build_output = self.client.build(
             path=path,
             tag=self.image_name,
-            stream=True,
             rm=True,
             forcerm=force_rm,
             pull=pull,
@@ -1143,6 +1153,31 @@ class Service(object):
                 raise HealthCheckFailed(ctnr.short_id)
         return result

+    def _parse_proxy_config(self):
+        client = self.client
+        if 'proxies' not in client._general_configs:
+            return {}
+        docker_host = getattr(client, '_original_base_url', client.base_url)
+        proxy_config = client._general_configs['proxies'].get(
+            docker_host, client._general_configs['proxies'].get('default')
+        ) or {}
+
+        permitted = {
+            'ftpProxy': 'FTP_PROXY',
+            'httpProxy': 'HTTP_PROXY',
+            'httpsProxy': 'HTTPS_PROXY',
+            'noProxy': 'NO_PROXY',
+        }
+
+        result = {}
+
+        for k, v in proxy_config.items():
+            if k not in permitted:
+                continue
+            result[permitted[k]] = result[permitted[k].lower()] = v
+
+        return result
+

 def short_id_alias_exists(container, network):
     aliases = container.get(

@@ -143,3 +143,11 @@ def parse_bytes(n):
         return sdk_parse_bytes(n)
     except DockerException:
         return None
+
+
+def unquote_path(s):
+    if not s:
+        return s
+    if s[0] == '"' and s[-1] == '"':
+        return s[1:-1]
+    return s
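`unquote_path` strips one layer of surrounding double quotes, so a quoted seccomp profile path from a compose file can be opened as-is. A quick illustration (the function body is copied from the diff above):

```python
def unquote_path(s):
    # copy of compose.utils.unquote_path, for illustration
    if not s:
        return s
    if s[0] == '"' and s[-1] == '"':
        return s[1:-1]
    return s

print(unquote_path('"/etc/seccomp profile.json"'))  # /etc/seccomp profile.json
```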


@@ -16,6 +16,8 @@
 # below to your .bashrc after bash completion features are loaded
 # . ~/.docker-compose-completion.sh

+__docker_compose_previous_extglob_setting=$(shopt -p extglob)
+shopt -s extglob

 __docker_compose_q() {
     docker-compose 2>/dev/null "${top_level_options[@]}" "$@"
@@ -243,7 +245,7 @@ _docker_compose_exec() {
     case "$cur" in
         -*)
-            COMPREPLY=( $( compgen -W "-d --help --index --privileged -T --user -u" -- "$cur" ) )
+            COMPREPLY=( $( compgen -W "-d --detach --help --index --privileged -T --user -u" -- "$cur" ) )
             ;;
         *)
             __docker_compose_services_running
@@ -259,7 +261,7 @@ _docker_compose_help() {
 _docker_compose_images() {
     case "$cur" in
         -*)
-            COMPREPLY=( $( compgen -W "--help -q" -- "$cur" ) )
+            COMPREPLY=( $( compgen -W "--help --quiet -q" -- "$cur" ) )
             ;;
         *)
             __docker_compose_services_all
@@ -361,7 +363,7 @@ _docker_compose_ps() {
     case "$cur" in
         -*)
-            COMPREPLY=( $( compgen -W "--help -q --services --filter" -- "$cur" ) )
+            COMPREPLY=( $( compgen -W "--help --quiet -q --services --filter" -- "$cur" ) )
             ;;
         *)
             __docker_compose_services_all
@@ -373,7 +375,7 @@ _docker_compose_ps() {
 _docker_compose_pull() {
     case "$cur" in
         -*)
-            COMPREPLY=( $( compgen -W "--help --ignore-pull-failures --parallel --quiet" -- "$cur" ) )
+            COMPREPLY=( $( compgen -W "--help --ignore-pull-failures --parallel --quiet -q" -- "$cur" ) )
             ;;
         *)
             __docker_compose_services_from_image
@@ -442,7 +444,7 @@ _docker_compose_run() {
     case "$cur" in
         -*)
-            COMPREPLY=( $( compgen -W "-d --entrypoint -e --help --label -l --name --no-deps --publish -p --rm --service-ports -T --user -u --volume -v --workdir -w" -- "$cur" ) )
+            COMPREPLY=( $( compgen -W "-d --detach --entrypoint -e --help --label -l --name --no-deps --publish -p --rm --service-ports -T --user -u --volume -v --workdir -w" -- "$cur" ) )
             ;;
         *)
             __docker_compose_services_all
@@ -550,7 +552,7 @@ _docker_compose_up() {
     case "$cur" in
         -*)
-            COMPREPLY=( $( compgen -W "--abort-on-container-exit --always-recreate-deps --build -d --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-recreate --no-start --renew-anon-volumes -V --remove-orphans --scale --timeout -t" -- "$cur" ) )
+            COMPREPLY=( $( compgen -W "--abort-on-container-exit --always-recreate-deps --build -d --detach --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-recreate --no-start --renew-anon-volumes -V --remove-orphans --scale --timeout -t" -- "$cur" ) )
             ;;
         *)
             __docker_compose_services_all
@@ -658,4 +660,7 @@ _docker_compose() {
     return 0
 }

+eval "$__docker_compose_previous_extglob_setting"
+unset __docker_compose_previous_extglob_setting
+
 complete -F _docker_compose docker-compose docker-compose.exe


@@ -72,6 +72,11 @@ exe = EXE(pyz,
                 'compose/config/config_schema_v3.5.json',
                 'DATA'
             ),
+            (
+                'compose/config/config_schema_v3.6.json',
+                'compose/config/config_schema_v3.6.json',
+                'DATA'
+            ),
             (
                 'compose/GITSHA',
                 'compose/GITSHA',


@@ -1 +1 @@
-pyinstaller==3.2.1
+pyinstaller==3.3.1


@@ -1,5 +1,5 @@
-coverage==3.7.1
+coverage==4.4.2
 flake8==3.5.0
 mock>=1.0.1
-pytest==2.7.2
-pytest-cov==2.1.0
+pytest==2.9.2
+pytest-cov==2.5.1


@@ -2,7 +2,7 @@ backports.ssl-match-hostname==3.5.0.1; python_version < '3'
 cached-property==1.3.0
 certifi==2017.4.17
 chardet==3.0.4
-docker==2.7.0
+docker==3.1.0
 docker-pycreds==0.2.1
 dockerpty==0.4.1
 docopt==0.6.2
@@ -12,7 +12,8 @@ git+git://github.com/tartley/colorama.git@bd378c725b45eba0b8e5cc091c3ca76a954c92
 idna==2.5
 ipaddress==1.0.18
 jsonschema==2.6.0
-pypiwin32==219; sys_platform == 'win32'
+pypiwin32==219; sys_platform == 'win32' and python_version < '3.6'
+pypiwin32==220; sys_platform == 'win32' and python_version >= '3.6'
 PySocks==1.6.7
 PyYAML==3.12
 requests==2.18.4


@@ -3,7 +3,7 @@
 set -ex

 TARGET=dist/docker-compose-$(uname -s)-$(uname -m)
-VENV=/code/.tox/py27
+VENV=/code/.tox/py36

 mkdir -p `pwd`/dist
 chmod 777 `pwd`/dist


@@ -5,7 +5,7 @@ PATH="/usr/local/bin:$PATH"

 rm -rf venv

-virtualenv -p /usr/local/bin/python venv
+virtualenv -p /usr/local/bin/python3 venv
 venv/bin/pip install -r requirements.txt
 venv/bin/pip install -r requirements-build.txt
 venv/bin/pip install --no-deps .


@@ -6,17 +6,17 @@
 #
 #   http://git-scm.com/download/win
 #
-# 2. Install Python 2.7.10:
+# 2. Install Python 3.6.4:
 #
 #   https://www.python.org/downloads/
 #
-# 3. Append ";C:\Python27;C:\Python27\Scripts" to the "Path" environment variable:
+# 3. Append ";C:\Python36;C:\Python36\Scripts" to the "Path" environment variable:
 #
 #   https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx?mfr=true
 #
 # 4. In Powershell, run the following commands:
 #
-#   $ pip install virtualenv
+#   $ pip install 'virtualenv>=15.1.0'
 #   $ Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
 #
 # 5. Clone the repository:
@@ -45,7 +45,12 @@ virtualenv .\venv
 $ErrorActionPreference = "Continue"

 # Install dependencies
-.\venv\Scripts\pip install pypiwin32==219
+# Fix for https://github.com/pypa/pip/issues/3964
+# Remove-Item -Recurse -Force .\venv\Lib\site-packages\pip
+# .\venv\Scripts\easy_install pip==9.0.1
+# .\venv\Scripts\pip install --upgrade pip setuptools
+# End fix
+.\venv\Scripts\pip install pypiwin32==220
 .\venv\Scripts\pip install -r requirements.txt
 .\venv\Scripts\pip install --no-deps .
 .\venv\Scripts\pip install --allow-external pyinstaller -r requirements-build.txt


@@ -1,5 +1,7 @@
 #!/bin/bash

+set -x
+
 curl -f -u$BINTRAY_USERNAME:$BINTRAY_API_KEY -X GET \
     https://api.bintray.com/repos/docker-compose/${CIRCLE_BRANCH}


@@ -2,6 +2,7 @@
 set -e

 find . -type f -name '*.pyc' -delete
+rm -rf .coverage-binfiles
 find . -name .coverage.* -delete
 find . -name __pycache__ -delete
 rm -rf docs/_site build dist docker-compose.egg-info


@@ -15,7 +15,7 @@

 set -e

-VERSION="1.19.0"
+VERSION="1.20.0-rc1"
 IMAGE="docker/compose:$VERSION"


@@ -6,11 +6,36 @@ python_version() {
     python -V 2>&1
 }

+python3_version() {
+    python3 -V 2>&1
+}
+
 openssl_version() {
     python -c "import ssl; print ssl.OPENSSL_VERSION"
 }

-echo "*** Using $(python_version)"
+desired_python3_version="3.6.4"
+desired_python3_brew_version="3.6.4_2"
+python3_formula="https://raw.githubusercontent.com/Homebrew/homebrew-core/b4e69a9a592232fa5a82741f6acecffc2f1d198d/Formula/python3.rb"
+
+PATH="/usr/local/bin:$PATH"
+
+if !(which brew); then
+    ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+fi
+
+brew update > /dev/null
+
+if !(python3_version | grep "$desired_python3_version"); then
+    if brew list | grep python3; then
+        brew unlink python3
+    fi
+    brew install "$python3_formula"
+    brew switch python3 "$desired_python3_brew_version"
+fi
+
+echo "*** Using $(python3_version) ; $(python_version)"
 echo "*** Using $(openssl_version)"

 if !(which virtualenv); then


@@ -24,7 +24,7 @@ fi

 BUILD_NUMBER=${BUILD_NUMBER-$USER}
-PY_TEST_VERSIONS=${PY_TEST_VERSIONS:-py27,py34}
+PY_TEST_VERSIONS=${PY_TEST_VERSIONS:-py27,py36}

 for version in $DOCKER_VERSIONS; do
     >&2 echo "Running tests against Docker $version"


@@ -14,7 +14,7 @@ set -ex

 docker version

-export DOCKER_VERSIONS=all
+export DOCKER_VERSIONS=${DOCKER_VERSIONS:-all}
 STORAGE_DRIVER=${STORAGE_DRIVER:-overlay}
 export DOCKER_DAEMON_ARGS="--storage-driver=$STORAGE_DRIVER"


@@ -36,7 +36,7 @@ install_requires = [
     'requests >= 2.6.1, != 2.11.0, != 2.12.2, != 2.18.0, < 2.19',
     'texttable >= 0.9.0, < 0.10',
     'websocket-client >= 0.32.0, < 1.0',
-    'docker >= 2.7.0, < 3.0',
+    'docker >= 3.1.0, < 4.0',
     'dockerpty >= 0.4.1, < 0.5',
     'six >= 1.3.0, < 2',
     'jsonschema >= 2.5.1, < 3',
@@ -99,5 +99,6 @@ setup(
         'Programming Language :: Python :: 2.7',
         'Programming Language :: Python :: 3',
         'Programming Language :: Python :: 3.4',
+        'Programming Language :: Python :: 3.6',
     ],
 )


@ -207,13 +207,13 @@ class CLITestCase(DockerClientTestCase):
self.base_dir = None self.base_dir = None
result = self.dispatch([ result = self.dispatch([
'-f', 'tests/fixtures/invalid-composefile/invalid.yml', '-f', 'tests/fixtures/invalid-composefile/invalid.yml',
'config', '-q' 'config', '--quiet'
], returncode=1) ], returncode=1)
assert "'notaservice' must be a mapping" in result.stderr assert "'notaservice' must be a mapping" in result.stderr
def test_config_quiet(self): def test_config_quiet(self):
self.base_dir = 'tests/fixtures/v2-full' self.base_dir = 'tests/fixtures/v2-full'
assert self.dispatch(['config', '-q']).stdout == '' assert self.dispatch(['config', '--quiet']).stdout == ''
def test_config_default(self): def test_config_default(self):
self.base_dir = 'tests/fixtures/v2-full' self.base_dir = 'tests/fixtures/v2-full'
@ -395,7 +395,7 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['config']) result = self.dispatch(['config'])
assert yaml.load(result.stdout) == { assert yaml.load(result.stdout) == {
'version': '3.2', 'version': '3.5',
'volumes': { 'volumes': {
'foobar': { 'foobar': {
'labels': { 'labels': {
@ -419,22 +419,25 @@ class CLITestCase(DockerClientTestCase):
}, },
'resources': { 'resources': {
'limits': { 'limits': {
'cpus': '0.001', 'cpus': '0.05',
'memory': '50M', 'memory': '50M',
}, },
'reservations': { 'reservations': {
'cpus': '0.0001', 'cpus': '0.01',
'memory': '20M', 'memory': '20M',
}, },
}, },
'restart_policy': { 'restart_policy': {
'condition': 'on_failure', 'condition': 'on-failure',
'delay': '5s', 'delay': '5s',
'max_attempts': 3, 'max_attempts': 3,
'window': '120s', 'window': '120s',
}, },
'placement': { 'placement': {
'constraints': ['node=foo'], 'constraints': [
'node.hostname==foo', 'node.role != manager'
],
'preferences': [{'spread': 'node.labels.datacenter'}]
}, },
}, },
@ -464,6 +467,27 @@ class CLITestCase(DockerClientTestCase):
}, },
} }
def test_config_compatibility_mode(self):
self.base_dir = 'tests/fixtures/compatibility-mode'
result = self.dispatch(['--compatibility', 'config'])
assert yaml.load(result.stdout) == {
'version': '2.3',
'volumes': {'foo': {'driver': 'default'}},
'services': {
'foo': {
'command': '/bin/true',
'image': 'alpine:3.7',
'scale': 3,
'restart': 'always:7',
'mem_limit': '300M',
'mem_reservation': '100M',
'cpus': 0.7,
'volumes': ['foo:/bar:rw']
}
}
}
def test_ps(self): def test_ps(self):
self.project.get_service('simple').create_container() self.project.get_service('simple').create_container()
result = self.dispatch(['ps']) result = self.dispatch(['ps'])
@ -567,6 +591,21 @@ class CLITestCase(DockerClientTestCase):
result.stderr result.stderr
) )
def test_pull_with_no_deps(self):
self.base_dir = 'tests/fixtures/links-composefile'
result = self.dispatch(['pull', 'web'])
assert sorted(result.stderr.split('\n'))[1:] == [
'Pulling web (busybox:latest)...',
]
def test_pull_with_include_deps(self):
self.base_dir = 'tests/fixtures/links-composefile'
result = self.dispatch(['pull', '--include-deps', 'web'])
assert sorted(result.stderr.split('\n'))[1:] == [
'Pulling db (busybox:latest)...',
'Pulling web (busybox:latest)...',
]
def test_build_plain(self):
    self.base_dir = 'tests/fixtures/simple-dockerfile'
    self.dispatch(['build', 'simple'])
@@ -604,6 +643,20 @@ class CLITestCase(DockerClientTestCase):
    assert BUILD_CACHE_TEXT not in result.stdout
    assert BUILD_PULL_TEXT in result.stdout
def test_build_log_level(self):
    self.base_dir = 'tests/fixtures/simple-dockerfile'
    result = self.dispatch(['--log-level', 'warning', 'build', 'simple'])
    assert result.stderr == ''
    result = self.dispatch(['--log-level', 'debug', 'build', 'simple'])
    assert 'Building simple' in result.stderr
    assert 'Using configuration file' in result.stderr
    self.base_dir = 'tests/fixtures/simple-failing-dockerfile'
    result = self.dispatch(['--log-level', 'critical', 'build', 'simple'], returncode=1)
    assert result.stderr == ''
    result = self.dispatch(['--log-level', 'debug', 'build', 'simple'], returncode=1)
    assert 'Building simple' in result.stderr
    assert 'non-zero code' in result.stderr
def test_build_failed(self):
    self.base_dir = 'tests/fixtures/simple-failing-dockerfile'
    self.dispatch(['build', 'simple'], returncode=1)
@@ -643,6 +696,33 @@ class CLITestCase(DockerClientTestCase):
    result = self.dispatch(['build', '--no-cache', '--memory', '96m', 'service'], None)
    assert 'memory: 100663296' in result.stdout  # 96 * 1024 * 1024
def test_build_with_buildarg_from_compose_file(self):
    pull_busybox(self.client)
    self.base_dir = 'tests/fixtures/build-args'
    result = self.dispatch(['build'], None)
    assert 'Favorite Touhou Character: mariya.kirisame' in result.stdout

def test_build_with_buildarg_cli_override(self):
    pull_busybox(self.client)
    self.base_dir = 'tests/fixtures/build-args'
    result = self.dispatch(['build', '--build-arg', 'favorite_th_character=sakuya.izayoi'], None)
    assert 'Favorite Touhou Character: sakuya.izayoi' in result.stdout

@mock.patch.dict(os.environ)
def test_build_with_buildarg_old_api_version(self):
    pull_busybox(self.client)
    self.base_dir = 'tests/fixtures/build-args'
    os.environ['COMPOSE_API_VERSION'] = '1.24'
    result = self.dispatch(
        ['build', '--build-arg', 'favorite_th_character=reimu.hakurei'], None, returncode=1
    )
    assert '--build-arg is only supported when services are specified' in result.stderr

    result = self.dispatch(
        ['build', '--build-arg', 'favorite_th_character=hong.meiling', 'web'], None
    )
    assert 'Favorite Touhou Character: hong.meiling' in result.stdout
def test_bundle_with_digests(self):
    self.base_dir = 'tests/fixtures/bundle-with-digests/'
    tmpdir = pytest.ensuretemp('cli_test_bundle')
@@ -869,6 +949,19 @@ class CLITestCase(DockerClientTestCase):
    assert not container.get('Config.AttachStdout')
    assert not container.get('Config.AttachStdin')
def test_up_detached_long_form(self):
    self.dispatch(['up', '--detach'])
    service = self.project.get_service('simple')
    another = self.project.get_service('another')
    assert len(service.containers()) == 1
    assert len(another.containers()) == 1

    # Ensure containers don't have stdin and stdout connected in -d mode
    container, = service.containers()
    assert not container.get('Config.AttachStderr')
    assert not container.get('Config.AttachStdout')
    assert not container.get('Config.AttachStdin')
def test_up_attached(self):
    self.base_dir = 'tests/fixtures/echo-services'
    result = self.dispatch(['up', '--no-color'])
@@ -1448,6 +1541,15 @@ class CLITestCase(DockerClientTestCase):
    assert stderr == ""
    assert stdout == "/\n"
def test_exec_detach_long_form(self):
    self.base_dir = 'tests/fixtures/links-composefile'
    self.dispatch(['up', '--detach', 'console'])
    assert len(self.project.containers()) == 1

    stdout, stderr = self.dispatch(['exec', '-T', 'console', 'ls', '-1d', '/'])
    assert stderr == ""
    assert stdout == "/\n"
def test_exec_custom_user(self):
    self.base_dir = 'tests/fixtures/links-composefile'
    self.dispatch(['up', '-d', 'console'])
@@ -1595,6 +1697,18 @@ class CLITestCase(DockerClientTestCase):
    assert container.get('Config.Entrypoint') == ['printf']
    assert container.get('Config.Cmd') == ['default', 'args']
def test_run_service_with_unset_entrypoint(self):
    self.base_dir = 'tests/fixtures/entrypoint-dockerfile'
    self.dispatch(['run', '--entrypoint=""', 'test', 'true'])
    container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0]
    assert container.get('Config.Entrypoint') is None
    assert container.get('Config.Cmd') == ['true']

    self.dispatch(['run', '--entrypoint', '""', 'test', 'true'])
    container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0]
    assert container.get('Config.Entrypoint') is None
    assert container.get('Config.Cmd') == ['true']
def test_run_service_with_dockerfile_entrypoint_overridden(self):
    self.base_dir = 'tests/fixtures/entrypoint-dockerfile'
    self.dispatch(['run', '--entrypoint', 'echo', 'test'])
@@ -1801,6 +1915,28 @@ class CLITestCase(DockerClientTestCase):
    container = service.containers(stopped=True, one_off=True)[0]
    assert workdir == container.get('Config.WorkingDir')
@v2_only()
def test_run_service_with_use_aliases(self):
    filename = 'network-aliases.yml'
    self.base_dir = 'tests/fixtures/networks'
    self.dispatch(['-f', filename, 'run', '-d', '--use-aliases', 'web', 'top'])

    back_name = '{}_back'.format(self.project.name)
    front_name = '{}_front'.format(self.project.name)

    web_container = self.project.get_service('web').containers(one_off=OneOffFilter.only)[0]

    back_aliases = web_container.get(
        'NetworkSettings.Networks.{}.Aliases'.format(back_name)
    )
    assert 'web' in back_aliases
    front_aliases = web_container.get(
        'NetworkSettings.Networks.{}.Aliases'.format(front_name)
    )
    assert 'web' in front_aliases
    assert 'forward_facing' in front_aliases
    assert 'ahead' in front_aliases
@v2_only()
def test_run_interactive_connects_to_network(self):
    self.base_dir = 'tests/fixtures/networks'
@@ -1876,6 +2012,19 @@ class CLITestCase(DockerClientTestCase):
        'simplecomposefile_simple_run_1',
        'exited'))
def test_run_handles_sighup(self):
    proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
    wait_on_condition(ContainerStateCondition(
        self.project.client,
        'simplecomposefile_simple_run_1',
        'running'))

    os.kill(proc.pid, signal.SIGHUP)

    wait_on_condition(ContainerStateCondition(
        self.project.client,
        'simplecomposefile_simple_run_1',
        'exited'))
@mock.patch.dict(os.environ)
def test_run_unicode_env_values_from_system(self):
    value = 'ą, ć, ę, ł, ń, ó, ś, ź, ż'

tests/fixtures/build-args/Dockerfile (new file)
@@ -0,0 +1,4 @@
FROM busybox:latest
LABEL com.docker.compose.test_image=true
ARG favorite_th_character
RUN echo "Favorite Touhou Character: ${favorite_th_character}"

@@ -0,0 +1,7 @@
version: '2.2'

services:
  web:
    build:
      context: .
      args:
        - favorite_th_character=mariya.kirisame

@@ -0,0 +1,22 @@
version: '3.5'
services:
  foo:
    image: alpine:3.7
    command: /bin/true
    deploy:
      replicas: 3
      restart_policy:
        condition: any
        max_attempts: 7
      resources:
        limits:
          memory: 300M
          cpus: '0.7'
        reservations:
          memory: 100M
    volumes:
      - foo:/bar

volumes:
  foo:
    driver: default

@@ -1,8 +1,7 @@
-version: "3.2"
+version: "3.5"
services:
  web:
    image: busybox
    deploy:
      mode: replicated
      replicas: 6
@@ -15,18 +14,22 @@ services:
      max_failure_ratio: 0.3
      resources:
        limits:
-         cpus: '0.001'
+         cpus: '0.05'
          memory: 50M
        reservations:
-         cpus: '0.0001'
+         cpus: '0.01'
          memory: 20M
      restart_policy:
-       condition: on_failure
+       condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
      placement:
-       constraints: [node=foo]
+       constraints:
+         - node.hostname==foo
+         - node.role != manager
+       preferences:
+         - spread: node.labels.datacenter
    healthcheck:
      test: cat /etc/passwd

@@ -32,7 +32,7 @@ def create_custom_host_file(client, filename, content):
    )
    try:
        client.start(container)
-       exitcode = client.wait(container)
+       exitcode = client.wait(container)['StatusCode']
        if exitcode != 0:
            output = client.logs(container)

@@ -1,8 +1,10 @@
from __future__ import absolute_import
from __future__ import unicode_literals

-import os.path
+import json
+import os
import random
+import tempfile

import py
import pytest
@@ -1834,3 +1836,35 @@ class ProjectTest(DockerClientTestCase):
    assert 'svc1' in svc2.get_dependency_names()
    with pytest.raises(NoHealthCheckConfigured):
        svc1.is_healthy()
def test_project_up_seccomp_profile(self):
    seccomp_data = {
        'defaultAction': 'SCMP_ACT_ALLOW',
        'syscalls': []
    }

    fd, profile_path = tempfile.mkstemp('_seccomp.json')
    self.addCleanup(os.remove, profile_path)
    with os.fdopen(fd, 'w') as f:
        json.dump(seccomp_data, f)

    config_dict = {
        'version': '2.3',
        'services': {
            'svc1': {
                'image': 'busybox:latest',
                'command': 'top',
                'security_opt': ['seccomp:"{}"'.format(profile_path)]
            }
        }
    }

    config_data = load_config(config_dict)
    project = Project.from_config(name='composetest', config_data=config_data, client=self.client)
    project.up()
    containers = project.containers()
    assert len(containers) == 1

    remote_secopts = containers[0].get('HostConfig.SecurityOpt')
    assert len(remote_secopts) == 1
    assert remote_secopts[0].startswith('seccomp=')
    assert json.loads(remote_secopts[0].lstrip('seccomp=')) == seccomp_data
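The assertion above relies on Compose rewriting a `seccomp:"<path>"` security option into `seccomp=<inline json>` before creating the container. A minimal sketch of that normalization, assuming a hypothetical `parse_security_opt` helper (the real logic lives in `compose.config.types.SecurityOpt.parse`, imported elsewhere in this commit):

```python
def parse_security_opt(value):
    # Hypothetical sketch: seccomp options that reference a profile file
    # get the file's JSON inlined after 'seccomp='; other options
    # (e.g. 'label:disable') pass through untouched.
    if not value.startswith('seccomp:'):
        return value
    profile = value[len('seccomp:'):].strip('"')
    if profile == 'unconfined':
        return 'seccomp=unconfined'
    with open(profile) as f:
        return 'seccomp=' + f.read()
```

This is why the test writes the profile to a temp file but asserts that the container's `HostConfig.SecurityOpt` carries the profile's JSON content rather than the path.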

@@ -23,6 +23,7 @@ from .testcases import SWARM_SKIP_CONTAINERS_ALL
from .testcases import SWARM_SKIP_CPU_SHARES
from compose import __version__
from compose.config.types import MountSpec
from compose.config.types import SecurityOpt
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import IS_WINDOWS_PLATFORM
@@ -238,11 +239,11 @@ class ServiceTest(DockerClientTestCase):
    }]

def test_create_container_with_security_opt(self):
-   security_opt = ['label:disable']
+   security_opt = [SecurityOpt.parse('label:disable')]
    service = self.create_service('db', security_opt=security_opt)
    container = service.create_container()
    service.start_container(container)
-   assert set(container.get('HostConfig.SecurityOpt')) == set(security_opt)
+   assert set(container.get('HostConfig.SecurityOpt')) == set([o.repr() for o in security_opt])

@pytest.mark.xfail(True, reason='Not supported on most drivers')
def test_create_container_with_storage_opt(self):
@@ -264,6 +265,11 @@
    service.start_container(container)
    assert container.inspect()['Config']['MacAddress'] == '02:42:ac:11:65:43'
def test_create_container_with_device_cgroup_rules(self):
    service = self.create_service('db', device_cgroup_rules=['c 7:128 rwm'])
    container = service.create_container()
    assert container.get('HostConfig.DeviceCgroupRules') == ['c 7:128 rwm']
def test_create_container_with_specified_volume(self):
    host_path = '/tmp/host-path'
    container_path = '/container-path'
@@ -315,6 +321,23 @@
    assert mount
    assert mount['Type'] == 'tmpfs'
@v2_3_only()
def test_create_container_with_tmpfs_mount_tmpfs_size(self):
    container_path = '/container-tmpfs'
    service = self.create_service(
        'db',
        volumes=[MountSpec(type='tmpfs', target=container_path, tmpfs={'size': 5368709})]
    )
    container = service.create_container()
    service.start_container(container)
    mount = container.get_mount(container_path)
    assert mount
    assert mount['Type'] == 'tmpfs'
    assert container.get('HostConfig.Mounts')[0]['TmpfsOptions'] == {
        'SizeBytes': 5368709
    }
@v2_3_only()
def test_create_container_with_volume_mount(self):
    container_path = '/container-volume'

@@ -22,7 +22,10 @@ class DockerClientTestCase(unittest.TestCase):
def test_docker_client_no_home(self):
    with mock.patch.dict(os.environ):
        try:
            del os.environ['HOME']
        except KeyError:
            pass
        docker_client(os.environ)

@mock.patch.dict(os.environ)
@@ -65,9 +68,10 @@
class TLSConfigTestCase(unittest.TestCase):
-   ca_cert = os.path.join('tests/fixtures/tls/', 'ca.pem')
-   client_cert = os.path.join('tests/fixtures/tls/', 'cert.pem')
-   key = os.path.join('tests/fixtures/tls/', 'key.pem')
+   cert_path = 'tests/fixtures/tls/'
+   ca_cert = os.path.join(cert_path, 'ca.pem')
+   client_cert = os.path.join(cert_path, 'cert.pem')
+   key = os.path.join(cert_path, 'key.pem')

def test_simple_tls(self):
    options = {'--tls': True}
@@ -199,7 +203,8 @@
def test_tls_verify_flag_no_override(self):
    environment = Environment({
        'DOCKER_TLS_VERIFY': 'true',
-       'COMPOSE_TLS_VERSION': 'TLSv1'
+       'COMPOSE_TLS_VERSION': 'TLSv1',
+       'DOCKER_CERT_PATH': self.cert_path
    })
    options = {'--tls': True, '--tlsverify': False}
@@ -216,6 +221,17 @@
    options = {'--tls': True}
    assert tls_config_from_options(options, environment) is True
def test_tls_verify_default_cert_path(self):
    environment = Environment({'DOCKER_TLS_VERIFY': '1'})
    options = {'--tls': True}
    with mock.patch('compose.cli.docker_client.default_cert_path') as dcp:
        dcp.return_value = 'tests/fixtures/tls/'
        result = tls_config_from_options(options, environment)
    assert isinstance(result, docker.tls.TLSConfig)
    assert result.verify is True
    assert result.ca_cert == self.ca_cert
    assert result.cert == (self.client_cert, self.key)
class TestGetTlsVersion(object):
    def test_get_tls_version_default(self):

@@ -9,6 +9,7 @@ import pytest
from compose import container
from compose.cli.errors import UserError
from compose.cli.formatter import ConsoleWarningFormatter
from compose.cli.main import call_docker
from compose.cli.main import convergence_strategy_from_opts
from compose.cli.main import filter_containers_to_service_names
from compose.cli.main import setup_console_handler
@@ -112,3 +113,44 @@ class TestConvergeStrategyFromOptsTestCase(object):
        convergence_strategy_from_opts(options) ==
        ConvergenceStrategy.changed
    )
def mock_find_executable(exe):
    return exe


@mock.patch('compose.cli.main.find_executable', mock_find_executable)
class TestCallDocker(object):
    def test_simple_no_options(self):
        with mock.patch('subprocess.call') as fake_call:
            call_docker(['ps'], {})

        assert fake_call.call_args[0][0] == ['docker', 'ps']

    def test_simple_tls_option(self):
        with mock.patch('subprocess.call') as fake_call:
            call_docker(['ps'], {'--tls': True})

        assert fake_call.call_args[0][0] == ['docker', '--tls', 'ps']

    def test_advanced_tls_options(self):
        with mock.patch('subprocess.call') as fake_call:
            call_docker(['ps'], {
                '--tls': True,
                '--tlscacert': './ca.pem',
                '--tlscert': './cert.pem',
                '--tlskey': './key.pem',
            })

        assert fake_call.call_args[0][0] == [
            'docker', '--tls', '--tlscacert', './ca.pem', '--tlscert',
            './cert.pem', '--tlskey', './key.pem', 'ps'
        ]

    def test_with_host_option(self):
        with mock.patch('subprocess.call') as fake_call:
            call_docker(['ps'], {'--host': 'tcp://mydocker.net:2333'})

        assert fake_call.call_args[0][0] == [
            'docker', '--host', 'tcp://mydocker.net:2333', 'ps'
        ]
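The behaviour these tests pin down is a straightforward assembly of `docker` CLI arguments from the parsed top-level options. A simplified stand-in for `compose.cli.main.call_docker`'s argument building, covering only the flags exercised above (the real function also resolves the binary and invokes `subprocess.call`):

```python
def build_docker_args(args, options):
    # Sketch: turn Compose's global options dict into arguments for the
    # `docker` binary, matching the orderings asserted in TestCallDocker.
    docker_args = ['docker']
    if options.get('--host'):
        docker_args.extend(['--host', options['--host']])
    if options.get('--tls'):
        docker_args.append('--tls')
    for flag in ('--tlscacert', '--tlscert', '--tlskey'):
        if options.get(flag):
            docker_args.extend([flag, options[flag]])
    return docker_args + args
```

For example, `build_docker_args(['ps'], {'--tls': True})` yields `['docker', '--tls', 'ps']`, the list the second test asserts on.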

@@ -3,7 +3,7 @@ from __future__ import unicode_literals
import unittest

-from compose.cli.utils import unquote_path
+from compose.utils import unquote_path


class UnquotePathTest(unittest.TestCase):

@@ -102,6 +102,7 @@ class CLITestCase(unittest.TestCase):
    os.environ['COMPOSE_INTERACTIVE_NO_CLI'] = 'true'
    mock_client = mock.create_autospec(docker.APIClient)
    mock_client.api_version = DEFAULT_DOCKER_API_VERSION
    mock_client._general_configs = {}
    project = Project.from_config(
        name='composetest',
        client=mock_client,
@@ -119,10 +120,11 @@
    '--label': [],
    '--user': None,
    '--no-deps': None,
-   '-d': False,
+   '--detach': False,
    '-T': None,
    '--entrypoint': None,
    '--service-ports': None,
+   '--use-aliases': None,
    '--publish': [],
    '--volume': [],
    '--rm': None,
@@ -136,6 +138,7 @@
def test_run_service_with_restart_always(self):
    mock_client = mock.create_autospec(docker.APIClient)
    mock_client.api_version = DEFAULT_DOCKER_API_VERSION
    mock_client._general_configs = {}

    project = Project.from_config(
        name='composetest',
@@ -156,10 +159,11 @@
    '--label': [],
    '--user': None,
    '--no-deps': None,
-   '-d': True,
+   '--detach': True,
    '-T': None,
    '--entrypoint': None,
    '--service-ports': None,
+   '--use-aliases': None,
    '--publish': [],
    '--volume': [],
    '--rm': None,
@@ -177,10 +181,11 @@
    '--label': [],
    '--user': None,
    '--no-deps': None,
-   '-d': True,
+   '--detach': True,
    '-T': None,
    '--entrypoint': None,
    '--service-ports': None,
+   '--use-aliases': None,
    '--publish': [],
    '--volume': [],
    '--rm': True,
@@ -208,10 +213,11 @@
    '--label': [],
    '--user': None,
    '--no-deps': None,
-   '-d': True,
+   '--detach': True,
    '-T': None,
    '--entrypoint': None,
    '--service-ports': True,
+   '--use-aliases': None,
    '--publish': ['80:80'],
    '--rm': None,
    '--name': None,

@@ -2558,6 +2558,21 @@ class ConfigTest(unittest.TestCase):
    actual = config.merge_service_dicts(base, override, V2_3)
    assert actual['healthcheck'] == override['healthcheck']
def test_merge_device_cgroup_rules(self):
    base = {
        'image': 'bar',
        'device_cgroup_rules': ['c 7:128 rwm', 'x 3:244 rw']
    }
    override = {
        'device_cgroup_rules': ['c 7:128 rwm', 'f 0:128 n']
    }
    actual = config.merge_service_dicts(base, override, V2_3)
    assert sorted(actual['device_cgroup_rules']) == sorted(
        ['c 7:128 rwm', 'x 3:244 rw', 'f 0:128 n']
    )
def test_external_volume_config(self):
    config_details = build_config_details({
        'version': '2',
@@ -3303,6 +3318,82 @@ class InterpolationTest(unittest.TestCase):
    assert 'BAR' in warnings[0]
    assert 'FOO' in warnings[1]
def test_compatibility_mode_warnings(self):
    config_details = build_config_details({
        'version': '3.5',
        'services': {
            'web': {
                'deploy': {
                    'labels': ['abc=def'],
                    'endpoint_mode': 'dnsrr',
                    'update_config': {'max_failure_ratio': 0.4},
                    'placement': {'constraints': ['node.id==deadbeef']},
                    'resources': {
                        'reservations': {'cpus': '0.2'}
                    },
                    'restart_policy': {
                        'delay': '2s',
                        'window': '12s'
                    }
                },
                'image': 'busybox'
            }
        }
    })

    with mock.patch('compose.config.config.log') as log:
        config.load(config_details, compatibility=True)

    assert log.warn.call_count == 1
    warn_message = log.warn.call_args[0][0]
    assert warn_message.startswith(
        'The following deploy sub-keys are not supported in compatibility mode'
    )
    assert 'labels' in warn_message
    assert 'endpoint_mode' in warn_message
    assert 'update_config' in warn_message
    assert 'placement' in warn_message
    assert 'resources.reservations.cpus' in warn_message
    assert 'restart_policy.delay' in warn_message
    assert 'restart_policy.window' in warn_message
def test_compatibility_mode_load(self):
    config_details = build_config_details({
        'version': '3.5',
        'services': {
            'foo': {
                'image': 'alpine:3.7',
                'deploy': {
                    'replicas': 3,
                    'restart_policy': {
                        'condition': 'any',
                        'max_attempts': 7,
                    },
                    'resources': {
                        'limits': {'memory': '300M', 'cpus': '0.7'},
                        'reservations': {'memory': '100M'},
                    },
                },
            },
        },
    })

    with mock.patch('compose.config.config.log') as log:
        cfg = config.load(config_details, compatibility=True)

    assert log.warn.call_count == 0

    service_dict = cfg.services[0]
    assert service_dict == {
        'image': 'alpine:3.7',
        'scale': 3,
        'restart': {'MaximumRetryCount': 7, 'Name': 'always'},
        'mem_limit': '300M',
        'mem_reservation': '100M',
        'cpus': 0.7,
        'name': 'foo'
    }
@mock.patch.dict(os.environ)
def test_invalid_interpolation(self):
    with pytest.raises(config.ConfigurationError) as cm:

@@ -27,6 +27,7 @@ def mock_env():
    'NEGINT': '-200',
    'FLOAT': '0.145',
    'MODE': '0600',
    'BYTES': '512m',
})
@@ -147,6 +148,9 @@ def test_interpolate_environment_services_convert_types_v2(mock_env):
            'read_only': '${DEFAULT:-no}',
            'tty': '${DEFAULT:-N}',
            'stdin_open': '${DEFAULT-on}',
            'volumes': [
                {'type': 'tmpfs', 'target': '/target', 'tmpfs': {'size': '$BYTES'}}
            ]
        }
    }
@@ -177,6 +181,9 @@
            'read_only': False,
            'tty': False,
            'stdin_open': True,
            'volumes': [
                {'type': 'tmpfs', 'target': '/target', 'tmpfs': {'size': 536870912}}
            ]
        }
    }
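The `$BYTES` substitution above turns the human-readable `'512m'` into `536870912` bytes during interpolation. A minimal converter sketch using binary (1024-based) units, as the expected value implies; docker-py ships a comparable `parse_bytes` utility that Compose can lean on:

```python
def parse_byte_size(value):
    # Convert strings like '512m' or '2g' to a byte count; bare numbers
    # pass through unchanged. Units are binary: k/m/g = 1024^(1..3).
    units = {'b': 1, 'k': 1024, 'm': 1024 ** 2, 'g': 1024 ** 3}
    value = str(value).lower()
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)
```

`parse_byte_size('512m')` gives `512 * 1024 * 1024 = 536870912`, matching the `tmpfs.size` the test expects after interpolation.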

@@ -129,6 +129,73 @@ class ContainerTest(unittest.TestCase):
    assert container.get_local_port(45454, protocol='tcp') == '0.0.0.0:49197'
def test_human_readable_states_no_health(self):
    container = Container(None, {
        "State": {
            "Status": "running",
            "Running": True,
            "Paused": False,
            "Restarting": False,
            "OOMKilled": False,
            "Dead": False,
            "Pid": 7623,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-01-29T00:34:25.2052414Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
    }, has_been_inspected=True)
    expected = "Up"
    assert container.human_readable_state == expected

def test_human_readable_states_starting(self):
    container = Container(None, {
        "State": {
            "Status": "running",
            "Running": True,
            "Paused": False,
            "Restarting": False,
            "OOMKilled": False,
            "Dead": False,
            "Pid": 11744,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-02-03T07:56:20.3591233Z",
            "FinishedAt": "2018-01-31T08:56:11.0505228Z",
            "Health": {
                "Status": "starting",
                "FailingStreak": 0,
                "Log": []
            }
        }
    }, has_been_inspected=True)
    expected = "Up (health: starting)"
    assert container.human_readable_state == expected

def test_human_readable_states_healthy(self):
    container = Container(None, {
        "State": {
            "Status": "running",
            "Running": True,
            "Paused": False,
            "Restarting": False,
            "OOMKilled": False,
            "Dead": False,
            "Pid": 5674,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-02-03T08:32:05.3281831Z",
            "FinishedAt": "2018-02-03T08:11:35.7872706Z",
            "Health": {
                "Status": "healthy",
                "FailingStreak": 0,
                "Log": []
            }
        }
    }, has_been_inspected=True)
    expected = "Up (healthy)"
    assert container.human_readable_state == expected
def test_get(self):
    container = Container(None, {
        "Status": "Up 8 seconds",

@@ -24,6 +24,7 @@ from compose.service import Service
class ProjectTest(unittest.TestCase):
    def setUp(self):
        self.mock_client = mock.create_autospec(docker.APIClient)
        self.mock_client._general_configs = {}

    def test_from_config_v1(self):
        config = Config(

@@ -25,6 +25,7 @@ from compose.service import build_ulimits
from compose.service import build_volume_binding
from compose.service import BuildAction
from compose.service import ContainerNetworkMode
from compose.service import format_environment
from compose.service import formatted_ports
from compose.service import get_container_data_volumes
from compose.service import ImageType
@@ -43,6 +44,7 @@
    def setUp(self):
        self.mock_client = mock.create_autospec(docker.APIClient)
        self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
        self.mock_client._general_configs = {}

    def test_containers(self):
        service = Service('db', self.mock_client, 'myproject', image='foo')
@@ -471,7 +473,6 @@
    self.mock_client.build.assert_called_once_with(
        tag='default_foo',
        dockerfile=None,
-       stream=True,
        path='.',
        pull=False,
        forcerm=False,
@@ -514,7 +515,6 @@
    self.mock_client.build.assert_called_once_with(
        tag='default_foo',
        dockerfile=None,
-       stream=True,
        path='.',
        pull=False,
        forcerm=False,
@@ -744,14 +744,159 @@
        'The "{}" service specifies a port on the host. If multiple containers '
        'for this service are created on a single host, the port will clash.'.format(name))
def test_parse_proxy_config(self):
    default_proxy_config = {
        'httpProxy': 'http://proxy.mycorp.com:3128',
        'httpsProxy': 'https://user:password@proxy.mycorp.com:3129',
        'ftpProxy': 'http://ftpproxy.mycorp.com:21',
        'noProxy': '*.intra.mycorp.com',
    }
    self.mock_client.base_url = 'http+docker://localunixsocket'
    self.mock_client._general_configs = {
        'proxies': {
            'default': default_proxy_config,
        }
    }
    service = Service('foo', client=self.mock_client)

    assert service._parse_proxy_config() == {
        'HTTP_PROXY': default_proxy_config['httpProxy'],
        'http_proxy': default_proxy_config['httpProxy'],
        'HTTPS_PROXY': default_proxy_config['httpsProxy'],
        'https_proxy': default_proxy_config['httpsProxy'],
        'FTP_PROXY': default_proxy_config['ftpProxy'],
        'ftp_proxy': default_proxy_config['ftpProxy'],
        'NO_PROXY': default_proxy_config['noProxy'],
        'no_proxy': default_proxy_config['noProxy'],
    }
def test_parse_proxy_config_per_host(self):
default_proxy_config = {
'httpProxy': 'http://proxy.mycorp.com:3128',
'httpsProxy': 'https://user:password@proxy.mycorp.com:3129',
'ftpProxy': 'http://ftpproxy.mycorp.com:21',
'noProxy': '*.intra.mycorp.com',
}
host_specific_proxy_config = {
'httpProxy': 'http://proxy.example.com:3128',
'httpsProxy': 'https://user:password@proxy.example.com:3129',
'ftpProxy': 'http://ftpproxy.example.com:21',
'noProxy': '*.intra.example.com'
}
self.mock_client.base_url = 'http+docker://localunixsocket'
self.mock_client._general_configs = {
'proxies': {
'default': default_proxy_config,
'tcp://example.docker.com:2376': host_specific_proxy_config,
}
}
service = Service('foo', client=self.mock_client)
assert service._parse_proxy_config() == {
'HTTP_PROXY': default_proxy_config['httpProxy'],
'http_proxy': default_proxy_config['httpProxy'],
'HTTPS_PROXY': default_proxy_config['httpsProxy'],
'https_proxy': default_proxy_config['httpsProxy'],
'FTP_PROXY': default_proxy_config['ftpProxy'],
'ftp_proxy': default_proxy_config['ftpProxy'],
'NO_PROXY': default_proxy_config['noProxy'],
'no_proxy': default_proxy_config['noProxy'],
}
self.mock_client._original_base_url = 'tcp://example.docker.com:2376'
assert service._parse_proxy_config() == {
'HTTP_PROXY': host_specific_proxy_config['httpProxy'],
'http_proxy': host_specific_proxy_config['httpProxy'],
'HTTPS_PROXY': host_specific_proxy_config['httpsProxy'],
'https_proxy': host_specific_proxy_config['httpsProxy'],
'FTP_PROXY': host_specific_proxy_config['ftpProxy'],
'ftp_proxy': host_specific_proxy_config['ftpProxy'],
'NO_PROXY': host_specific_proxy_config['noProxy'],
'no_proxy': host_specific_proxy_config['noProxy'],
}
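The precedence these two assertions exercise — a daemon-address-specific proxy block overriding the `default` entry, with each value expanded into both upper- and lower-case environment variable names — can be sketched as a small standalone helper. The helper name and structure below are hypothetical; the real logic lives in `Service._parse_proxy_config`:

```python
def resolve_proxy_env(general_configs, docker_host=None):
    """Pick the proxy block matching the daemon address, falling back to
    the 'default' entry, then expand each value into UPPER_CASE and
    lower_case env var names (hypothetical sketch of the lookup)."""
    proxies = general_configs.get('proxies', {})
    config = proxies.get(docker_host) or proxies.get('default') or {}
    key_to_var = {
        'httpProxy': 'HTTP_PROXY',
        'httpsProxy': 'HTTPS_PROXY',
        'ftpProxy': 'FTP_PROXY',
        'noProxy': 'NO_PROXY',
    }
    env = {}
    for key, var in key_to_var.items():
        if key in config:
            env[var] = env[var.lower()] = config[key]
    return env
```

Note that the host-specific block replaces the `default` block wholesale rather than merging with it, which is why the second assertion in the test expects only `host_specific_proxy_config` values.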
    def test_build_service_with_proxy_config(self):
        default_proxy_config = {
            'httpProxy': 'http://proxy.mycorp.com:3128',
            'httpsProxy': 'https://user:password@proxy.example.com:3129',
        }
        buildargs = {
            'HTTPS_PROXY': 'https://rdcf.th08.jp:8911',
            'https_proxy': 'https://rdcf.th08.jp:8911',
        }
        self.mock_client._general_configs = {
            'proxies': {
                'default': default_proxy_config,
            }
        }
        self.mock_client.base_url = 'http+docker://localunixsocket'
        self.mock_client.build.return_value = [
            b'{"stream": "Successfully built 12345"}',
        ]

        service = Service('foo', client=self.mock_client, build={'context': '.', 'args': buildargs})
        service.build()

        assert self.mock_client.build.call_count == 1
        assert self.mock_client.build.call_args[1]['buildargs'] == {
            'HTTP_PROXY': default_proxy_config['httpProxy'],
            'http_proxy': default_proxy_config['httpProxy'],
            'HTTPS_PROXY': buildargs['HTTPS_PROXY'],
            'https_proxy': buildargs['HTTPS_PROXY'],
        }
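The assertion above checks that build args declared on the service win over proxy-derived defaults: `HTTPS_PROXY` comes from `buildargs` while `HTTP_PROXY`, which the service never set, falls through to the proxy config. A minimal sketch of that merge order (hypothetical function name, not Compose's API):

```python
def merge_build_args(proxy_env, declared_buildargs):
    """Proxy-derived variables act only as defaults; any build arg the
    service declares explicitly takes precedence (hypothetical sketch
    of the precedence the test asserts)."""
    merged = dict(proxy_env)          # weakest: values from config.json proxies
    merged.update(declared_buildargs)  # strongest: service's own build args
    return merged
```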
    def test_get_create_options_with_proxy_config(self):
        default_proxy_config = {
            'httpProxy': 'http://proxy.mycorp.com:3128',
            'httpsProxy': 'https://user:password@proxy.mycorp.com:3129',
            'ftpProxy': 'http://ftpproxy.mycorp.com:21',
        }
        self.mock_client._general_configs = {
            'proxies': {
                'default': default_proxy_config,
            }
        }
        self.mock_client.base_url = 'http+docker://localunixsocket'

        override_options = {
            'environment': {
                'FTP_PROXY': 'ftp://xdge.exo.au:21',
                'ftp_proxy': 'ftp://xdge.exo.au:21',
            }
        }
        environment = {
            'HTTPS_PROXY': 'https://rdcf.th08.jp:8911',
            'https_proxy': 'https://rdcf.th08.jp:8911',
        }
        service = Service('foo', client=self.mock_client, environment=environment)

        create_opts = service._get_container_create_options(override_options, 1)
        assert set(create_opts['environment']) == set(format_environment({
            'HTTP_PROXY': default_proxy_config['httpProxy'],
            'http_proxy': default_proxy_config['httpProxy'],
            'HTTPS_PROXY': environment['HTTPS_PROXY'],
            'https_proxy': environment['HTTPS_PROXY'],
            'FTP_PROXY': override_options['environment']['FTP_PROXY'],
            'ftp_proxy': override_options['environment']['FTP_PROXY'],
        }))
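This test pins down a three-level precedence for the container environment: proxy config is weakest (`HTTP_PROXY` survives), the service-level `environment` beats it (`HTTPS_PROXY`), and per-container override options beat both (`FTP_PROXY`). Sketched as a plain dict merge (hypothetical helper name; the real resolution happens inside `Service._get_container_create_options`):

```python
def resolve_container_environment(proxy_env, service_env, override_env):
    """Merge environment sources from lowest to highest precedence,
    matching the ordering the test above asserts (hypothetical sketch)."""
    env = dict(proxy_env)      # 1. proxy settings from config.json
    env.update(service_env)    # 2. the service's `environment` section
    env.update(override_env)   # 3. per-container override options
    return env
```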

class TestServiceNetwork(unittest.TestCase):
    def setUp(self):
        self.mock_client = mock.create_autospec(docker.APIClient)
        self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
        self.mock_client._general_configs = {}
    def test_connect_container_to_networks_short_aliase_exists(self):
        service = Service(
            'db',
            self.mock_client,
            'myproject',
            image='foo',
            networks={'project_default': {}})

@@ -770,8 +915,8 @@ class TestServiceNetwork(object):
            True)

        service.connect_container_to_networks(container)
        assert not self.mock_client.disconnect_container_from_network.call_count
        assert not self.mock_client.connect_container_to_network.call_count
def sort_by_name(dictionary_list):

@@ -816,6 +961,10 @@ class BuildUlimitsTestCase(unittest.TestCase):

class NetTestCase(unittest.TestCase):
    def setUp(self):
        self.mock_client = mock.create_autospec(docker.APIClient)
        self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
        self.mock_client._general_configs = {}

    def test_network_mode(self):
        network_mode = NetworkMode('host')

@@ -833,12 +982,11 @@ class NetTestCase(unittest.TestCase):

    def test_network_mode_service(self):
        container_id = 'bbbb'
        service_name = 'web'
        self.mock_client.containers.return_value = [
            {'Id': container_id, 'Name': container_id, 'Image': 'abcd'},
        ]

        service = Service(name=service_name, client=self.mock_client)
        network_mode = ServiceNetworkMode(service)

        assert network_mode.id == service_name

@@ -847,10 +995,9 @@ class NetTestCase(unittest.TestCase):

    def test_network_mode_service_no_containers(self):
        service_name = 'web'
        self.mock_client.containers.return_value = []

        service = Service(name=service_name, client=self.mock_client)
        network_mode = ServiceNetworkMode(service)

        assert network_mode.id == service_name

@@ -886,6 +1033,7 @@ class ServiceVolumesTest(unittest.TestCase):

    def setUp(self):
        self.mock_client = mock.create_autospec(docker.APIClient)
        self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
        self.mock_client._general_configs = {}

    def test_build_volume_binding(self):
        binding = build_volume_binding(VolumeSpec.parse('/outside:/inside', True))

@@ -1120,6 +1268,8 @@ class ServiceVolumesTest(unittest.TestCase):

class ServiceSecretTest(unittest.TestCase):
    def setUp(self):
        self.mock_client = mock.create_autospec(docker.APIClient)
        self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
        self.mock_client._general_configs = {}

    def test_get_secret_volumes(self):
        secret1 = {
@@ -1,8 +1,9 @@
[tox]
envlist = py27,py36,pre-commit

[testenv]
usedevelop=True
whitelist_externals=mkdir
passenv =
    LD_LIBRARY_PATH
    DOCKER_HOST

@@ -17,6 +18,7 @@ deps =
    -rrequirements.txt
    -rrequirements-dev.txt
commands =
    mkdir -p .coverage-binfiles
    py.test -v \
        --cov=compose \
        --cov-report html \

@@ -35,6 +37,7 @@ commands =

# Coverage configuration
[run]
branch = True
data_file = .coverage-binfiles/.coverage

[report]
show_missing = true