Merge pull request #4292 from docker/bump-1.10.0-rc1

Bump 1.10.0 rc1
Commit 3dc5f91942 by Joffrey F, 2017-01-04 14:53:19 -08:00, committed by GitHub
47 changed files with 1368 additions and 161 deletions


@@ -1,6 +1,46 @@
Change log
==========
1.10.0 (2017-01-18)
-------------------
### New Features
#### Compose file version 3.0
- Introduced version 3.0 of the `docker-compose.yml` specification. This
version must be used with Docker Engine 1.13 or above and is
specifically designed to work with the `docker stack` commands.
- Added support for the `stop_grace_period` option in service definitions.
#### Compose file version 2.1 and up
- Healthcheck configuration can now be done in the service definition using
the `healthcheck` parameter.
- Container dependencies can now be set up to wait on positive healthchecks
when declared using `depends_on`. See the documentation for the updated
syntax.
**Note:** This feature will not be ported to version 3 Compose files.
- Added support for the `sysctls` parameter in service definitions.
- Added support for the `userns_mode` parameter in service definitions.
- Compose now adds identifying labels to networks and volumes it creates.
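A hypothetical v2.1 service definition (illustrative only, not taken from this PR) showing how the additions above fit together:

```python
# Hypothetical service definition combining the 2.1 additions above:
# healthcheck, depends_on with conditions, sysctls, and userns_mode.
web_service = {
    "image": "example/web",
    "healthcheck": {
        "test": ["CMD", "curl", "-f", "http://localhost/"],
        "interval": "10s",
        "timeout": "5s",
        "retries": 3,
    },
    "depends_on": {
        "db": {"condition": "service_healthy"},
        "cache": {"condition": "service_started"},
    },
    "sysctls": {"net.core.somaxconn": "1024"},
    "userns_mode": "host",
}

# Only dependencies declared with 'service_healthy' wait on a
# positive healthcheck; 'service_started' waits for start only.
healthy_deps = [name for name, spec in web_service["depends_on"].items()
                if spec["condition"] == "service_healthy"]
print(healthy_deps)  # ['db']
```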
### Bugfixes
- Colored output now works properly on Windows.
- Fixed a bug where `docker-compose run` would fail to set up link aliases
in interactive mode on Windows.
- Networks created by Compose are now always made attachable
(Compose files v2.1 and up).
1.9.0 (2016-11-16)
------------------
@@ -814,7 +854,7 @@ Fig has been renamed to Docker Compose, or just Compose for short. This has seve
- The command you type is now `docker-compose`, not `fig`.
- You should rename your fig.yml to docker-compose.yml.
- If you're installing via PyPI, the package is now `docker-compose`, so install it with `pip install docker-compose`.
Besides that, there's a lot of new stuff in this release:

Jenkinsfile (vendored)

@@ -2,17 +2,10 @@
def image
def checkDocs = { ->
wrappedNode(label: 'linux') {
deleteDir(); checkout(scm)
documentationChecker("docs")
}
}
def buildImage = { ->
wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
stage("build image") {
checkout(scm)
def imageName = "dockerbuildbot/compose:${gitCommit()}"
image = docker.image(imageName)
try {
@@ -39,7 +32,7 @@ def runTests = { Map settings ->
{ ->
wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
stage("test python=${pythonVersions} / docker=${dockerVersions}") {
checkout(scm)
def storageDriver = sh(script: 'docker info | awk -F \': \' \'$1 == "Storage Driver" { print $2; exit }\'', returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
sh """docker run \\
@@ -62,19 +55,10 @@ def runTests = { Map settings ->
}
}
buildImage()
// TODO: break this out into meaningful "DOCKER_VERSIONS" values instead of all
parallel(
failFast: true,
all_py27: runTests(pythonVersions: "py27", dockerVersions: "all"),
all_py34: runTests(pythonVersions: "py34", dockerVersions: "all"),
)


@@ -6,11 +6,11 @@ Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services
from your configuration. To learn more about all the features of Compose
see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features).
Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#common-use-cases).
Using Compose is basically a three-step process.
@@ -35,7 +35,7 @@ A `docker-compose.yml` looks like this:
image: redis
For more information about the Compose file, see the
[Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file.md)
Compose has commands for managing the whole lifecycle of your application:


@@ -1,4 +1,4 @@
from __future__ import absolute_import
from __future__ import unicode_literals
__version__ = '1.10.0-rc1'


@@ -1,5 +1,8 @@
from __future__ import absolute_import
from __future__ import unicode_literals
import colorama
NAMES = [
'grey',
'red',
@@ -30,6 +33,7 @@ def make_color_fn(code):
return lambda s: ansi_color(code, s)
colorama.init()
for (name, code) in get_pairs():
globals()[name] = make_color_fn(code)
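The change above imports and initializes colorama so the generated ANSI color functions work on Windows (matching the "Colored output now works properly on Windows" changelog entry). A minimal sketch of the generation pattern; `ansi_color` here is an assumed helper based on standard ANSI escapes, since its definition is outside this hunk:

```python
# Sketch of the color-function generation pattern in compose/cli/colors.py.
# ansi_color is an assumption based on standard ANSI escape codes; the real
# module defines its own and relies on colorama.init() for Windows support.
def ansi_color(code, s):
    return "\033[{0}m{1}\033[0m".format(code, s)

def make_color_fn(code):
    # Bind `code` now so each generated function keeps its own color code.
    return lambda s: ansi_color(code, s)

red = make_color_fn(31)
green = make_color_fn(32)
print(repr(red("error")))  # '\x1b[31merror\x1b[0m'
```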


@@ -3,7 +3,7 @@ from __future__ import unicode_literals
import logging
from docker import APIClient
from docker.errors import TLSParameterError
from docker.tls import TLSConfig
from docker.utils import kwargs_from_env
@@ -71,4 +71,4 @@ def docker_client(environment, version=None, tls_config=None, host=None,
kwargs['user_agent'] = generate_user_agent()
return APIClient(**kwargs)


@@ -24,7 +24,6 @@ from ..config import ConfigurationError
from ..config import parse_environment
from ..config.environment import Environment
from ..config.serialize import serialize_config
from ..const import DEFAULT_TIMEOUT
from ..const import IS_WINDOWS_PLATFORM
from ..errors import StreamParseError
from ..progress_stream import StreamOutputError
@@ -726,7 +725,7 @@ class TopLevelCommand(object):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
timeout = timeout_from_opts(options)
for s in options['SERVICE=NUM']:
if '=' not in s:
@@ -760,7 +759,7 @@ class TopLevelCommand(object):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
timeout = timeout_from_opts(options)
self.project.stop(service_names=options['SERVICE'], timeout=timeout)
def restart(self, options):
@@ -773,7 +772,7 @@ class TopLevelCommand(object):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
timeout = timeout_from_opts(options)
containers = self.project.restart(service_names=options['SERVICE'], timeout=timeout)
exit_if(not containers, 'No containers to restart', 1)
@@ -831,7 +830,7 @@ class TopLevelCommand(object):
start_deps = not options['--no-deps']
cascade_stop = options['--abort-on-container-exit']
service_names = options['SERVICE']
timeout = timeout_from_opts(options)
remove_orphans = options['--remove-orphans']
detached = options.get('-d')
@@ -896,6 +895,11 @@ def convergence_strategy_from_opts(options):
return ConvergenceStrategy.changed
def timeout_from_opts(options):
timeout = options.get('--timeout')
return None if timeout is None else int(timeout)
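The new `timeout_from_opts` helper replaces the repeated `int(options.get('--timeout') or DEFAULT_TIMEOUT)` expressions; it deliberately returns `None` when the flag was not given, so callers can apply their own default. Restated runnable:

```python
def timeout_from_opts(options):
    # Mirrors the helper added in this diff: None means "--timeout was
    # not given", letting downstream code pick its own default instead
    # of a value hardcoded at the CLI layer.
    timeout = options.get('--timeout')
    return None if timeout is None else int(timeout)

print(timeout_from_opts({'--timeout': '30'}))  # 30
print(timeout_from_opts({}))                   # None
```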
def image_type_from_opt(flag, value):
if not value:
return ImageType.none
@@ -984,6 +988,7 @@ def run_one_off_container(container_options, project, service, options):
try:
try:
if IS_WINDOWS_PLATFORM:
service.connect_container_to_networks(container)
exit_code = call_docker(["start", "--attach", "--interactive", container.id])
else:
operation = RunOperation(


@@ -15,7 +15,9 @@ from cached_property import cached_property
from ..const import COMPOSEFILE_V1 as V1
from ..const import COMPOSEFILE_V2_0 as V2_0
from ..const import COMPOSEFILE_V2_1 as V2_1
from ..const import COMPOSEFILE_V3_0 as V3_0
from ..utils import build_string_dict
from ..utils import parse_nanoseconds_int
from ..utils import splitdrive
from .environment import env_vars_from_file
from .environment import Environment
@@ -64,6 +66,7 @@ DOCKER_CONFIG_KEYS = [
'extra_hosts',
'group_add',
'hostname',
'healthcheck',
'image',
'ipc',
'labels',
@@ -83,8 +86,10 @@ DOCKER_CONFIG_KEYS = [
'shm_size',
'stdin_open',
'stop_signal',
'sysctls',
'tty',
'user',
'userns_mode',
'volume_driver',
'volumes',
'volumes_from',
@@ -175,7 +180,10 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
if version == '2':
version = V2_0
if version == '3':
version = V3_0
if version not in (V2_0, V2_1, V3_0):
raise ConfigurationError(
'Version in "{}" is unsupported. {}'
.format(self.filename, VERSION_EXPLANATION))
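The version normalization above aliases bare `"2"`/`"3"` strings to their full forms before validating. A standalone sketch (the V1 branch handled elsewhere in the real method is omitted, and ValueError stands in for ConfigurationError):

```python
# Sketch of the version aliasing above; V1 handling is omitted and
# ValueError stands in for compose's ConfigurationError.
V2_0, V2_1, V3_0 = '2.0', '2.1', '3.0'

def normalize_version(version):
    if version == '2':
        version = V2_0
    if version == '3':
        version = V3_0
    if version not in (V2_0, V2_1, V3_0):
        raise ValueError('Unsupported version: {!r}'.format(version))
    return version

print(normalize_version('3'))  # 3.0
```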
@@ -326,6 +334,14 @@ def load(config_details):
for service_dict in service_dicts:
match_named_volumes(service_dict, volumes)
services_using_deploy = [s for s in service_dicts if s.get('deploy')]
if services_using_deploy:
log.warn(
"Some services ({}) use the 'deploy' key, which will be ignored. "
"Compose does not support deploy configuration - use "
"`docker stack deploy` to deploy to a swarm."
.format(", ".join(sorted(s['name'] for s in services_using_deploy))))
return Config(main_file.version, service_dicts, volumes, networks)
@@ -433,7 +449,7 @@ def process_config_file(config_file, environment, service_name=None):
'service',
environment)
if config_file.version in (V2_0, V2_1, V3_0):
processed_config = dict(config_file.config)
processed_config['services'] = services
processed_config['volumes'] = interpolate_config_section(
@@ -446,9 +462,10 @@ def process_config_file(config_file, environment, service_name=None):
config_file.get_networks(),
'network',
environment)
elif config_file.version == V1:
if config_file.version == V1:
processed_config = services
else:
raise Exception("Unsupported version: {}".format(repr(config_file.version)))
config_file = config_file._replace(config=processed_config)
validate_against_config_schema(config_file)
@@ -629,10 +646,53 @@ def process_service(service_config):
if 'extra_hosts' in service_dict:
service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts'])
if 'sysctls' in service_dict:
service_dict['sysctls'] = build_string_dict(parse_sysctls(service_dict['sysctls']))
service_dict = process_depends_on(service_dict)
for field in ['dns', 'dns_search', 'tmpfs']:
if field in service_dict:
service_dict[field] = to_list(service_dict[field])
service_dict = process_healthcheck(service_dict, service_config.name)
return service_dict
def process_depends_on(service_dict):
if 'depends_on' in service_dict and not isinstance(service_dict['depends_on'], dict):
service_dict['depends_on'] = dict([
(svc, {'condition': 'service_started'}) for svc in service_dict['depends_on']
])
return service_dict
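`process_depends_on` normalizes the short list form of `depends_on` into the new mapping-with-conditions form. Restated runnable:

```python
def process_depends_on(service_dict):
    # Mirrors the helper above: a bare service name in the list form
    # becomes the dict form with the default "wait for start" condition.
    if 'depends_on' in service_dict and not isinstance(service_dict['depends_on'], dict):
        service_dict['depends_on'] = dict(
            (svc, {'condition': 'service_started'}) for svc in service_dict['depends_on']
        )
    return service_dict

svc = process_depends_on({'depends_on': ['db', 'redis']})
print(svc['depends_on']['db'])  # {'condition': 'service_started'}
```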
def process_healthcheck(service_dict, service_name):
if 'healthcheck' not in service_dict:
return service_dict
hc = {}
raw = service_dict['healthcheck']
if raw.get('disable'):
if len(raw) > 1:
raise ConfigurationError(
'Service "{}" defines an invalid healthcheck: '
'"disable: true" cannot be combined with other options'
.format(service_name))
hc['test'] = ['NONE']
elif 'test' in raw:
hc['test'] = raw['test']
if 'interval' in raw:
hc['interval'] = parse_nanoseconds_int(raw['interval'])
if 'timeout' in raw:
hc['timeout'] = parse_nanoseconds_int(raw['timeout'])
if 'retries' in raw:
hc['retries'] = raw['retries']
service_dict['healthcheck'] = hc
return service_dict
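A simplified sketch of the healthcheck processing above: it takes the raw healthcheck mapping directly (the real function works on the whole service dict), and `parse_ns` is a minimal stand-in for compose's `parse_nanoseconds_int`, assumed here to handle only `s` and `m` suffixes:

```python
# Simplified sketch of process_healthcheck; parse_ns is a stand-in for
# compose's parse_nanoseconds_int, handling only "Ns"/"Nm" durations.
def parse_ns(value):
    units = {'s': 10**9, 'm': 60 * 10**9}
    return int(value[:-1]) * units[value[-1]]

def process_healthcheck(raw, service_name):
    hc = {}
    if raw.get('disable'):
        # "disable: true" must stand alone, as in the diff above.
        if len(raw) > 1:
            raise ValueError(
                'Service "{}": "disable: true" cannot be combined '
                'with other options'.format(service_name))
        hc['test'] = ['NONE']
    elif 'test' in raw:
        hc['test'] = raw['test']
    for key in ('interval', 'timeout'):
        if key in raw:
            hc[key] = parse_ns(raw[key])  # durations stored as nanoseconds
    if 'retries' in raw:
        hc['retries'] = raw['retries']
    return hc

hc = process_healthcheck({'test': 'curl -f http://localhost/', 'interval': '10s'}, 'web')
print(hc['interval'])  # 10000000000
```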
@@ -757,6 +817,7 @@ def merge_service_dicts(base, override, version):
md.merge_mapping('labels', parse_labels)
md.merge_mapping('ulimits', parse_ulimits)
md.merge_mapping('networks', parse_networks)
md.merge_mapping('sysctls', parse_sysctls)
md.merge_sequence('links', ServiceLink.parse)
for field in ['volumes', 'devices']:
@@ -831,11 +892,11 @@ def merge_environment(base, override):
return env
def split_kv(kvpair):
if '=' in kvpair:
return kvpair.split('=', 1)
else:
return kvpair, ''
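The helper was renamed from `split_label` to `split_kv` because it now also backs `parse_sysctls`. Restated runnable; note it mirrors the original in returning a list for `key=value` input and a tuple for a bare key:

```python
def split_kv(kvpair):
    # "key=value" -> ['key', 'value']; a bare key -> ('key', '').
    if '=' in kvpair:
        return kvpair.split('=', 1)
    else:
        return kvpair, ''

print(split_kv('net.core.somaxconn=1024'))  # ['net.core.somaxconn', '1024']
print(split_kv('com.example.label'))        # ('com.example.label', '')
```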
def parse_dict_or_list(split_func, type_name, arguments):
@@ -856,8 +917,9 @@ def parse_dict_or_list(split_func, type_name, arguments):
parse_build_arguments = functools.partial(parse_dict_or_list, split_env, 'build arguments')
parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment')
parse_labels = functools.partial(parse_dict_or_list, split_kv, 'labels')
parse_networks = functools.partial(parse_dict_or_list, lambda k: (k, None), 'networks')
parse_sysctls = functools.partial(parse_dict_or_list, split_kv, 'sysctls')
def parse_ulimits(ulimits):


@@ -77,7 +77,28 @@
"cpu_shares": {"type": ["number", "string"]},
"cpu_quota": {"type": ["number", "string"]},
"cpuset": {"type": "string"},
"depends_on": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"additionalProperties": false,
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"type": "object",
"additionalProperties": false,
"properties": {
"condition": {
"type": "string",
"enum": ["service_started", "service_healthy"]
}
},
"required": ["condition"]
}
}
}
]
},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
@@ -120,6 +141,7 @@
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"ipc": {"type": "string"},
@@ -193,6 +215,7 @@
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
@@ -217,6 +240,7 @@
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"volume_driver": {"type": "string"},
"volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
@@ -229,6 +253,24 @@
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string"}
}
},
"network": {
"id": "#/definitions/network",
"type": "object",


@@ -0,0 +1,381 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "config_schema_v3.0.json",
"type": "object",
"required": ["version"],
"properties": {
"version": {
"type": "string"
},
"services": {
"id": "#/properties/services",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/network"
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/volume"
}
},
"additionalProperties": false
}
},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
}
]
},
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cgroup_parent": {"type": "string"},
"command": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"container_name": {"type": "string"},
"depends_on": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
"entrypoint": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"ipc": {"type": "string"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number", "null"]}
}
}
},
"additionalProperties": false
},
"mac_address": {"type": "string"},
"network_mode": {"type": "string"},
"networks": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"oneOf": [
{
"type": "object",
"properties": {
"aliases": {"$ref": "#/definitions/list_of_strings"},
"ipv4_address": {"type": "string"},
"ipv6_address": {"type": "string"}
},
"additionalProperties": false
},
{"type": "null"}
]
}
},
"additionalProperties": false
}
]
},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "ports"
},
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_signal": {"type": "string"},
"stop_grace_period": {"type": "string", "format": "duration"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type":"object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"working_dir": {"type": "string"}
},
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string"}
}
},
"deployment": {
"id": "#/definitions/deployment",
"type": ["object", "null"],
"properties": {
"mode": {"type": "string"},
"replicas": {"type": "integer"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"update_config": {
"type": "object",
"properties": {
"parallelism": {"type": "integer"},
"delay": {"type": "string", "format": "duration"},
"failure_action": {"type": "string"},
"monitor": {"type": "string", "format": "duration"},
"max_failure_ratio": {"type": "number"}
},
"additionalProperties": false
},
"resources": {
"type": "object",
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
}
},
"restart_policy": {
"type": "object",
"properties": {
"condition": {"type": "string"},
"delay": {"type": "string", "format": "duration"},
"max_attempts": {"type": "integer"},
"window": {"type": "string", "format": "duration"}
},
"additionalProperties": false
},
"placement": {
"type": "object",
"properties": {
"constraints": {"type": "array", "items": {"type": "string"}}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"resource": {
"id": "#/definitions/resource",
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"}
},
"additionalProperties": false
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"ipam": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"config": {
"type": "array",
"items": {
"type": "object",
"properties": {
"subnet": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "null"]
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",
"anyOf": [
{"required": ["build"]},
{"required": ["image"]}
],
"properties": {
"build": {
"required": ["context"]
}
}
}
}
}
}


@@ -3,8 +3,8 @@ from __future__ import unicode_literals
VERSION_EXPLANATION = (
'You might be seeing this error because you\'re using the wrong Compose file version. '
'Either specify a supported version ("2.0", "2.1", "3.0") and place your '
'service definitions under the `services` key, or omit the `version` key '
'and place your service definitions at the root of the file to use '
'version 1.\nFor more on the Compose file format versions, see '


@@ -6,7 +6,6 @@ import yaml
from compose.config import types
from compose.config.config import V1
from compose.config.config import V2_0
from compose.config.config import V2_1
@@ -34,7 +33,7 @@ def denormalize_config(config):
del net_conf['external_name']
version = config.version
if version == V1:
version = V2_1
return {


@@ -180,11 +180,13 @@ def validate_links(service_config, service_names):
def validate_depends_on(service_config, service_names):
deps = service_config.config.get('depends_on', {})
for dependency in deps.keys():
if dependency not in service_names:
raise ConfigurationError(
"Service '{s.name}' depends on service '{dep}' which is "
"undefined.".format(s=service_config, dep=dependency)
)
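The updated validation iterates the keys of the (now normalized) `depends_on` mapping rather than a plain list. A standalone sketch, with ValueError in place of compose's ConfigurationError:

```python
def validate_depends_on(service_dict, service_names):
    # depends_on is a mapping after normalization, so iterate its keys;
    # the condition values are irrelevant for this check.
    deps = service_dict.get('depends_on', {})
    for dependency in deps:
        if dependency not in service_names:
            raise ValueError(
                "depends on service '{}' which is undefined".format(dependency))

validate_depends_on({'depends_on': {'db': {'condition': 'service_healthy'}}}, ['db', 'web'])
print('ok')
```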
def get_unsupported_config_msg(path, error_key):
@@ -201,7 +203,7 @@ def anglicize_json_type(json_type):
def is_service_dict_schema(schema_id):
return schema_id in ('config_schema_v1.json', '#/properties/services')
def handle_error_for_schema_with_id(error, path):


@@ -11,21 +11,26 @@ LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_NETWORK = 'com.docker.compose.network'
LABEL_VERSION = 'com.docker.compose.version'
LABEL_VOLUME = 'com.docker.compose.volume'
LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'
COMPOSEFILE_V1 = '1'
COMPOSEFILE_V2_0 = '2.0'
COMPOSEFILE_V2_1 = '2.1'
COMPOSEFILE_V3_0 = '3.0'
API_VERSIONS = { API_VERSIONS = {
COMPOSEFILE_V1: '1.21', COMPOSEFILE_V1: '1.21',
COMPOSEFILE_V2_0: '1.22', COMPOSEFILE_V2_0: '1.22',
COMPOSEFILE_V2_1: '1.24', COMPOSEFILE_V2_1: '1.24',
COMPOSEFILE_V3_0: '1.25',
} }
API_VERSION_TO_ENGINE_VERSION = { API_VERSION_TO_ENGINE_VERSION = {
API_VERSIONS[COMPOSEFILE_V1]: '1.9.0', API_VERSIONS[COMPOSEFILE_V1]: '1.9.0',
API_VERSIONS[COMPOSEFILE_V2_0]: '1.10.0', API_VERSIONS[COMPOSEFILE_V2_0]: '1.10.0',
API_VERSIONS[COMPOSEFILE_V2_1]: '1.12.0', API_VERSIONS[COMPOSEFILE_V2_1]: '1.12.0',
API_VERSIONS[COMPOSEFILE_V3_0]: '1.13.0',
} }
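For orientation, the two mappings above chain a Compose file format version to a client API version, and then to the minimum Engine release that supports it. A minimal sketch of that lookup (the dict values are copied from the constants in the diff; `min_engine_version` is an illustrative helper, not part of Compose):

```python
# Values copied from the API_VERSIONS / API_VERSION_TO_ENGINE_VERSION
# constants above. min_engine_version is a hypothetical helper for
# illustration only.
API_VERSIONS = {'1': '1.21', '2.0': '1.22', '2.1': '1.24', '3.0': '1.25'}
API_VERSION_TO_ENGINE_VERSION = {
    '1.21': '1.9.0',
    '1.22': '1.10.0',
    '1.24': '1.12.0',
    '1.25': '1.13.0',
}

def min_engine_version(compose_file_version):
    # chain: file format version -> API version -> minimum Engine release
    return API_VERSION_TO_ENGINE_VERSION[API_VERSIONS[compose_file_version]]

print(min_engine_version('3.0'))  # 1.13.0
```

This is why the changelog entry for version 3.0 states it requires Docker Engine 1.13 or above.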


@@ -10,3 +10,24 @@ class OperationFailedError(Exception):
 class StreamParseError(RuntimeError):
     def __init__(self, reason):
         self.msg = reason
+
+
+class HealthCheckException(Exception):
+    def __init__(self, reason):
+        self.msg = reason
+
+
+class HealthCheckFailed(HealthCheckException):
+    def __init__(self, container_id):
+        super(HealthCheckFailed, self).__init__(
+            'Container "{}" is unhealthy.'.format(container_id)
+        )
+
+
+class NoHealthCheckConfigured(HealthCheckException):
+    def __init__(self, service_name):
+        super(NoHealthCheckConfigured, self).__init__(
+            'Service "{}" is missing a healthcheck configuration'.format(
+                service_name
+            )
+        )


@@ -4,10 +4,14 @@ from __future__ import unicode_literals
 import logging

 from docker.errors import NotFound
-from docker.utils import create_ipam_config
-from docker.utils import create_ipam_pool
+from docker.types import IPAMConfig
+from docker.types import IPAMPool
+from docker.utils import version_gte
+from docker.utils import version_lt

 from .config import ConfigurationError
+from .const import LABEL_NETWORK
+from .const import LABEL_PROJECT

 log = logging.getLogger(__name__)

@@ -71,7 +75,8 @@ class Network(object):
             ipam=self.ipam,
             internal=self.internal,
             enable_ipv6=self.enable_ipv6,
-            labels=self.labels,
+            labels=self._labels,
+            attachable=version_gte(self.client._version, '1.24') or None,
         )

     def remove(self):

@@ -91,15 +96,26 @@ class Network(object):
             return self.external_name
         return '{0}_{1}'.format(self.project, self.name)

+    @property
+    def _labels(self):
+        if version_lt(self.client._version, '1.23'):
+            return None
+        labels = self.labels.copy() if self.labels else {}
+        labels.update({
+            LABEL_PROJECT: self.project,
+            LABEL_NETWORK: self.name,
+        })
+        return labels
+

 def create_ipam_config_from_dict(ipam_dict):
     if not ipam_dict:
         return None

-    return create_ipam_config(
+    return IPAMConfig(
         driver=ipam_dict.get('driver'),
         pool_configs=[
-            create_ipam_pool(
+            IPAMPool(
                 subnet=config.get('subnet'),
                 iprange=config.get('ip_range'),
                 gateway=config.get('gateway'),


@@ -165,13 +165,14 @@ def feed_queue(objects, func, get_deps, results, state):
     for obj in pending:
         deps = get_deps(obj)

-        if any(dep in state.failed for dep in deps):
+        if any(dep[0] in state.failed for dep in deps):
             log.debug('{} has upstream errors - not processing'.format(obj))
             results.put((obj, None, UpstreamError()))
             state.failed.add(obj)
         elif all(
-            dep not in objects or dep in state.finished
-            for dep in deps
+            dep not in objects or (
+                dep in state.finished and (not ready_check or ready_check(dep))
+            ) for dep, ready_check in deps
         ):
             log.debug('Starting producer thread for {}'.format(obj))
             t = Thread(target=producer, args=(obj, func, results))

@@ -248,7 +249,3 @@ def parallel_unpause(containers, options):
 def parallel_kill(containers, options):
     parallel_operation(containers, 'kill', options, 'Killing')
-
-
-def parallel_restart(containers, options):
-    parallel_operation(containers, 'restart', options, 'Restarting')
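The key change in `feed_queue` is that `get_deps` now yields `(dependency, ready_check)` pairs instead of bare dependencies: an object is only scheduled once every dependency is finished *and* its optional readiness predicate passes. A toy illustration of the gating predicate (not Compose's actual scheduler; names are illustrative):

```python
# Toy version of the scheduling condition in feed_queue above: a dependency
# is satisfied once it is outside the working set, or has finished AND its
# optional ready_check (e.g. "is the container healthy?") passes.
def deps_satisfied(deps, objects, finished):
    return all(
        dep not in objects or (
            dep in finished and (not ready_check or ready_check(dep))
        )
        for dep, ready_check in deps
    )

objects = {'db', 'web'}
finished = {'db'}
healthy = {'db': False}  # db has started but is not yet healthy

deps = {('db', lambda s: healthy[s])}
print(deps_satisfied(deps, objects, finished))  # False: waiting on healthcheck
healthy['db'] = True
print(deps_satisfied(deps, objects, finished))  # True: web may start
```

This is the mechanism behind the changelog note that `depends_on` dependencies can now wait on positive healthchecks.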


@@ -32,12 +32,11 @@ def stream_output(output, stream):
             if not image_id:
                 continue

-            if image_id in lines:
-                diff = len(lines) - lines[image_id]
-            else:
+            if image_id not in lines:
                 lines[image_id] = len(lines)
                 stream.write("\n")
-                diff = 0
+
+            diff = len(lines) - lines[image_id]

             # move cursor up `diff` rows
             stream.write("%c[%dA" % (27, diff))
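The bookkeeping being refactored here maps each image id to the terminal row on which its progress line lives, and `diff` is how many rows the cursor must move up to reach it. A toy rendering of the new, unified computation (function name is illustrative):

```python
# Toy version of the refactored cursor bookkeeping in stream_output above:
# lines maps image id -> row index; after registering a new id, diff is
# computed the same way for new and already-known ids.
def cursor_diff(lines, image_id):
    if image_id not in lines:
        lines[image_id] = len(lines)  # claim the next row
        # (the real code also writes a newline here)
    return len(lines) - lines[image_id]

lines = {}
print(cursor_diff(lines, 'abc'))  # 1
print(cursor_diff(lines, 'def'))  # 1
print(cursor_diff(lines, 'abc'))  # 2
```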


@@ -14,7 +14,6 @@ from .config import ConfigurationError
 from .config.config import V1
 from .config.sort_services import get_container_name_from_network_mode
 from .config.sort_services import get_service_name_from_network_mode
-from .const import DEFAULT_TIMEOUT
 from .const import IMAGE_EVENTS
 from .const import LABEL_ONE_OFF
 from .const import LABEL_PROJECT

@@ -228,7 +227,10 @@ class Project(object):
         services = self.get_services(service_names)

         def get_deps(service):
-            return {self.get_service(dep) for dep in service.get_dependency_names()}
+            return {
+                (self.get_service(dep), config)
+                for dep, config in service.get_dependency_configs().items()
+            }

         parallel.parallel_execute(
             services,

@@ -244,13 +246,13 @@ class Project(object):
         def get_deps(container):
             # actually returning inversed dependencies
-            return {other for other in containers
+            return {(other, None) for other in containers
                     if container.service in
                     self.get_service(other.service).get_dependency_names()}

         parallel.parallel_execute(
             containers,
-            operator.methodcaller('stop', **options),
+            self.build_container_operation_with_timeout_func('stop', options),
             operator.attrgetter('name'),
             'Stopping',
             get_deps)

@@ -291,7 +293,12 @@ class Project(object):
     def restart(self, service_names=None, **options):
         containers = self.containers(service_names, stopped=True)
-        parallel.parallel_restart(containers, options)
+
+        parallel.parallel_execute(
+            containers,
+            self.build_container_operation_with_timeout_func('restart', options),
+            operator.attrgetter('name'),
+            'Restarting')
         return containers

     def build(self, service_names=None, no_cache=False, pull=False, force_rm=False):

@@ -365,7 +372,7 @@ class Project(object):
            start_deps=True,
            strategy=ConvergenceStrategy.changed,
            do_build=BuildAction.none,
-           timeout=DEFAULT_TIMEOUT,
+           timeout=None,
            detached=False,
            remove_orphans=False):

@@ -390,7 +397,10 @@ class Project(object):
         )

         def get_deps(service):
-            return {self.get_service(dep) for dep in service.get_dependency_names()}
+            return {
+                (self.get_service(dep), config)
+                for dep, config in service.get_dependency_configs().items()
+            }

         results, errors = parallel.parallel_execute(
             services,

@@ -506,6 +516,14 @@ class Project(object):
                 dep_services.append(service)
         return acc + dep_services

+    def build_container_operation_with_timeout_func(self, operation, options):
+        def container_operation_with_timeout(container):
+            if options.get('timeout') is None:
+                service = self.get_service(container.service)
+                options['timeout'] = service.stop_timeout(None)
+            return getattr(container, operation)(**options)
+        return container_operation_with_timeout
+

 def get_volumes_from(project, service_dict):
     volumes_from = service_dict.pop('volumes_from', None)

@@ -547,9 +565,7 @@ def warn_for_swarm_mode(client):
             "Compose does not use swarm mode to deploy services to multiple nodes in a swarm. "
             "All containers will be scheduled on the current node.\n\n"
             "To deploy your application across the swarm, "
-            "use the bundle feature of the Docker experimental build.\n\n"
-            "More info:\n"
-            "https://docs.docker.com/compose/bundles\n"
+            "use `docker stack deploy`.\n"
         )


@@ -11,7 +11,7 @@ import enum
 import six
 from docker.errors import APIError
 from docker.errors import NotFound
-from docker.utils import LogConfig
+from docker.types import LogConfig
 from docker.utils.ports import build_port_bindings
 from docker.utils.ports import split_port

@@ -28,12 +28,15 @@ from .const import LABEL_PROJECT
 from .const import LABEL_SERVICE
 from .const import LABEL_VERSION
 from .container import Container
+from .errors import HealthCheckFailed
+from .errors import NoHealthCheckConfigured
 from .errors import OperationFailedError
 from .parallel import parallel_execute
 from .parallel import parallel_start
 from .progress_stream import stream_output
 from .progress_stream import StreamOutputError
 from .utils import json_hash
+from .utils import parse_seconds_float

 log = logging.getLogger(__name__)

@@ -63,9 +66,14 @@ DOCKER_START_KEYS = [
     'restart',
     'security_opt',
     'shm_size',
+    'sysctls',
+    'userns_mode',
     'volumes_from',
 ]

+CONDITION_STARTED = 'service_started'
+CONDITION_HEALTHY = 'service_healthy'
+

 class BuildError(Exception):
     def __init__(self, service, reason):

@@ -169,7 +177,7 @@ class Service(object):
             self.start_container_if_stopped(c, **options)
         return containers

-    def scale(self, desired_num, timeout=DEFAULT_TIMEOUT):
+    def scale(self, desired_num, timeout=None):
         """
         Adjusts the number of containers to the specified number and ensures
         they are running.

@@ -196,7 +204,7 @@ class Service(object):
             return container

         def stop_and_remove(container):
-            container.stop(timeout=timeout)
+            container.stop(timeout=self.stop_timeout(timeout))
             container.remove()

         running_containers = self.containers(stopped=False)

@@ -374,7 +382,7 @@ class Service(object):
     def execute_convergence_plan(self,
                                  plan,
-                                 timeout=DEFAULT_TIMEOUT,
+                                 timeout=None,
                                  detached=False,
                                  start=True):
         (action, containers) = plan

@@ -421,7 +429,7 @@ class Service(object):
     def recreate_container(
             self,
             container,
-            timeout=DEFAULT_TIMEOUT,
+            timeout=None,
             attach_logs=False,
             start_new_container=True):
         """Recreate a container.

@@ -432,7 +440,7 @@ class Service(object):
         """
         log.info("Recreating %s" % container.name)

-        container.stop(timeout=timeout)
+        container.stop(timeout=self.stop_timeout(timeout))
         container.rename_to_tmp_name()
         new_container = self.create_container(
             previous_container=container,

@@ -446,6 +454,14 @@ class Service(object):
         container.remove()
         return new_container

+    def stop_timeout(self, timeout):
+        if timeout is not None:
+            return timeout
+        timeout = parse_seconds_float(self.options.get('stop_grace_period'))
+        if timeout is not None:
+            return timeout
+        return DEFAULT_TIMEOUT
+
     def start_container_if_stopped(self, container, attach_logs=False, quiet=False):
         if not container.is_running:
             if not quiet:

@@ -483,10 +499,10 @@ class Service(object):
                 link_local_ips=netdefs.get('link_local_ips', None),
             )

-    def remove_duplicate_containers(self, timeout=DEFAULT_TIMEOUT):
+    def remove_duplicate_containers(self, timeout=None):
         for c in self.duplicate_containers():
             log.info('Removing %s' % c.name)
-            c.stop(timeout=timeout)
+            c.stop(timeout=self.stop_timeout(timeout))
             c.remove()

     def duplicate_containers(self):

@@ -522,10 +538,38 @@ class Service(object):
     def get_dependency_names(self):
         net_name = self.network_mode.service_name
-        return (self.get_linked_service_names() +
-                self.get_volumes_from_names() +
-                ([net_name] if net_name else []) +
-                self.options.get('depends_on', []))
+        return (
+            self.get_linked_service_names() +
+            self.get_volumes_from_names() +
+            ([net_name] if net_name else []) +
+            list(self.options.get('depends_on', {}).keys())
+        )
+
+    def get_dependency_configs(self):
+        net_name = self.network_mode.service_name
+        configs = dict(
+            [(name, None) for name in self.get_linked_service_names()]
+        )
+        configs.update(dict(
+            [(name, None) for name in self.get_volumes_from_names()]
+        ))
+        configs.update({net_name: None} if net_name else {})
+        configs.update(self.options.get('depends_on', {}))
+        for svc, config in self.options.get('depends_on', {}).items():
+            if config['condition'] == CONDITION_STARTED:
+                configs[svc] = lambda s: True
+            elif config['condition'] == CONDITION_HEALTHY:
+                configs[svc] = lambda s: s.is_healthy()
+            else:
+                # The config schema already prevents this, but it might be
+                # bypassed if Compose is called programmatically.
+                raise ValueError(
+                    'depends_on condition "{}" is invalid.'.format(
+                        config['condition']
+                    )
+                )
+
+        return configs

     def get_linked_service_names(self):
         return [service.name for (service, _) in self.links]

@@ -708,10 +752,12 @@ class Service(object):
             cgroup_parent=options.get('cgroup_parent'),
             cpu_quota=options.get('cpu_quota'),
             shm_size=options.get('shm_size'),
+            sysctls=options.get('sysctls'),
             tmpfs=options.get('tmpfs'),
             oom_score_adj=options.get('oom_score_adj'),
             mem_swappiness=options.get('mem_swappiness'),
-            group_add=options.get('group_add')
+            group_add=options.get('group_add'),
+            userns_mode=options.get('userns_mode')
         )

         # TODO: Add as an argument to create_host_config once it's supported

@@ -858,6 +904,24 @@ class Service(object):
             else:
                 log.error(six.text_type(e))

+    def is_healthy(self):
+        """ Check that all containers for this service report healthy.
+            Returns false if at least one healthcheck is pending.
+            If an unhealthy container is detected, raise a HealthCheckFailed
+            exception.
+        """
+        result = True
+        for ctnr in self.containers():
+            ctnr.inspect()
+            status = ctnr.get('State.Health.Status')
+            if status is None:
+                raise NoHealthCheckConfigured(self.name)
+            elif status == 'starting':
+                result = False
+            elif status == 'unhealthy':
+                raise HealthCheckFailed(ctnr.short_id)
+        return result
+

 def short_id_alias_exists(container, network):
     aliases = container.get(
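The new `Service.stop_timeout` establishes a clear precedence for shutdown timeouts: an explicit timeout passed by the caller wins, then the service's `stop_grace_period` from the Compose file, then the default. A standalone sketch of that precedence (`parse_seconds` here is a stand-in for Compose's `parse_seconds_float` helper and only handles plain `"<n>s"` strings):

```python
# Sketch of the timeout precedence in Service.stop_timeout above:
# explicit timeout > stop_grace_period from the service config > default.
DEFAULT_TIMEOUT = 10

def parse_seconds(value):
    # minimal stand-in for compose.utils.parse_seconds_float:
    # only handles plain "<n>s" duration strings
    if value and value.endswith('s'):
        return float(value[:-1])
    return None

def stop_timeout(explicit_timeout, options):
    if explicit_timeout is not None:
        return explicit_timeout
    timeout = parse_seconds(options.get('stop_grace_period'))
    if timeout is not None:
        return timeout
    return DEFAULT_TIMEOUT

print(stop_timeout(None, {'stop_grace_period': '20s'}))  # 20.0
print(stop_timeout(5, {'stop_grace_period': '20s'}))     # 5
print(stop_timeout(None, {}))                            # 10
```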
compose/timeparse.py (new file)
@ -0,0 +1,96 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
timeparse.py
(c) Will Roberts <wildwilhelm@gmail.com> 1 February, 2014
This is a vendored and modified copy of:
github.com/wroberts/pytimeparse @ cc0550d
It has been modified to mimic the behaviour of
https://golang.org/pkg/time/#ParseDuration
'''
# MIT LICENSE
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from __future__ import absolute_import
from __future__ import unicode_literals
import re
HOURS = r'(?P<hours>[\d.]+)h'
MINS = r'(?P<mins>[\d.]+)m'
SECS = r'(?P<secs>[\d.]+)s'
MILLI = r'(?P<milli>[\d.]+)ms'
MICRO = r'(?P<micro>[\d.]+)(?:us|µs)'
NANO = r'(?P<nano>[\d.]+)ns'
def opt(x):
return r'(?:{x})?'.format(x=x)
TIMEFORMAT = r'{HOURS}{MINS}{SECS}{MILLI}{MICRO}{NANO}'.format(
HOURS=opt(HOURS),
MINS=opt(MINS),
SECS=opt(SECS),
MILLI=opt(MILLI),
MICRO=opt(MICRO),
NANO=opt(NANO),
)
MULTIPLIERS = dict([
('hours', 60 * 60),
('mins', 60),
('secs', 1),
('milli', 1.0 / 1000),
('micro', 1.0 / 1000.0 / 1000),
('nano', 1.0 / 1000.0 / 1000.0 / 1000.0),
])
def timeparse(sval):
"""Parse a time expression, returning it as a number of seconds. If
possible, the return value will be an `int`; if this is not
possible, the return will be a `float`. Returns `None` if a time
expression cannot be parsed from the given string.
Arguments:
- `sval`: the string value to parse
>>> timeparse('1m24s')
84
>>> timeparse('1.2 minutes')
72
>>> timeparse('1.2 seconds')
1.2
"""
match = re.match(r'\s*' + TIMEFORMAT + r'\s*$', sval, re.I)
if not match or not match.group(0).strip():
return
mdict = match.groupdict()
return sum(
MULTIPLIERS[k] * cast(v) for (k, v) in mdict.items() if v is not None)
def cast(value):
return int(value, 10) if value.isdigit() else float(value)
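The vendored parser above accepts Go-style duration strings (`1h30m`, `500ms`, `1m24s`) as used by `stop_grace_period`. A condensed, self-contained re-implementation of the same regex approach, for illustration (it covers only the `h`/`m`/`s`/`ms` units; the real module also handles `us`/`µs` and `ns`):

```python
import re

# Condensed version of the TIMEFORMAT regex built in timeparse.py above.
HOURS = r'(?P<hours>[\d.]+)h'
MINS = r'(?P<mins>[\d.]+)m'
SECS = r'(?P<secs>[\d.]+)s'
MILLI = r'(?P<milli>[\d.]+)ms'

def opt(x):
    return r'(?:{x})?'.format(x=x)

TIMEFORMAT = opt(HOURS) + opt(MINS) + opt(SECS) + opt(MILLI)
MULTIPLIERS = {'hours': 3600, 'mins': 60, 'secs': 1, 'milli': 1.0 / 1000}

def timeparse(sval):
    match = re.match(r'\s*' + TIMEFORMAT + r'\s*$', sval, re.I)
    if not match or not match.group(0).strip():
        return None
    return sum(
        MULTIPLIERS[k] * (int(v) if v.isdigit() else float(v))
        for k, v in match.groupdict().items() if v is not None
    )

print(timeparse('1m24s'))  # 84
print(timeparse('1h30m'))  # 5400
print(timeparse('500ms'))  # 0.5
print(timeparse('bogus'))  # None
```

Note the regex backtracks so that `500ms` binds to the milliseconds group rather than `500m` followed by an unmatchable `s`.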


@@ -11,6 +11,7 @@ import ntpath
 import six

 from .errors import StreamParseError
+from .timeparse import timeparse

 json_decoder = json.JSONDecoder()

@@ -107,6 +108,21 @@ def microseconds_from_time_nano(time_nano):
     return int(time_nano % 1000000000 / 1000)

+
+def nanoseconds_from_time_seconds(time_seconds):
+    return time_seconds * 1000000000
+
+
+def parse_seconds_float(value):
+    return timeparse(value or '')
+
+
+def parse_nanoseconds_int(value):
+    parsed = timeparse(value or '')
+    if parsed is None:
+        return None
+    return int(parsed * 1000000000)
+

 def build_string_dict(source_dict):
     return dict((k, str(v if v is not None else '')) for k, v in source_dict.items())


@@ -4,8 +4,11 @@ from __future__ import unicode_literals
 import logging

 from docker.errors import NotFound
+from docker.utils import version_lt

 from .config import ConfigurationError
+from .const import LABEL_PROJECT
+from .const import LABEL_VOLUME

 log = logging.getLogger(__name__)

@@ -23,7 +26,7 @@ class Volume(object):
     def create(self):
         return self.client.create_volume(
-            self.full_name, self.driver, self.driver_opts, labels=self.labels
+            self.full_name, self.driver, self.driver_opts, labels=self._labels
         )

     def remove(self):

@@ -53,6 +56,17 @@ class Volume(object):
             return self.external_name
         return '{0}_{1}'.format(self.project, self.name)

+    @property
+    def _labels(self):
+        if version_lt(self.client._version, '1.23'):
+            return None
+        labels = self.labels.copy() if self.labels else {}
+        labels.update({
+            LABEL_PROJECT: self.project,
+            LABEL_VOLUME: self.name,
+        })
+        return labels
+

 class ProjectVolumes(object):


@@ -32,6 +32,11 @@ exe = EXE(pyz,
         'compose/config/config_schema_v2.1.json',
         'DATA'
     ),
+    (
+        'compose/config/config_schema_v3.0.json',
+        'compose/config/config_schema_v3.0.json',
+        'DATA'
+    ),
     (
         'compose/GITSHA',
         'compose/GITSHA',


@@ -20,18 +20,30 @@ release.
 As part of this script you'll be asked to:

-1. Update the version in `docs/install.md` and `compose/__init__.py`.
-   If the next release will be an RC, append `rcN`, e.g. `1.4.0rc1`.
+1. Update the version in `compose/__init__.py` and `script/run/run.sh`.
+   If the next release will be an RC, append `-rcN`, e.g. `1.4.0-rc1`.

 2. Write release notes in `CHANGES.md`.

-   Almost every feature enhancement should be mentioned, with the most visible/exciting ones first. Use descriptive sentences and give context where appropriate.
+   Almost every feature enhancement should be mentioned, with the most
+   visible/exciting ones first. Use descriptive sentences and give context
+   where appropriate.

-   Bug fixes are worth mentioning if it's likely that they've affected lots of people, or if they were regressions in the previous version.
+   Bug fixes are worth mentioning if it's likely that they've affected lots
+   of people, or if they were regressions in the previous version.

    Improvements to the code are not worth mentioning.

+3. Create a new repository on [bintray](https://bintray.com/docker-compose).
+   The name has to match the name of the branch (e.g. `bump-1.9.0`) and the
+   type should be "Generic". Other fields can be left blank.
+
+4. Check that the `vnext-compose` branch on
+   [the docs repo](https://github.com/docker/docker.github.io/) has
+   documentation for all the new additions in the upcoming release, and create
+   a PR there for what needs to be amended.

 ## When a PR is merged into master that we want in the release

@@ -55,8 +67,8 @@ Check out the bump branch and run the `build-binaries` script

 When prompted build the non-linux binaries and test them.

-1. Download the osx binary from Bintray. Make sure that the latest build has
-   finished, otherwise you'll be downloading an old binary.
+1. Download the osx binary from Bintray. Make sure that the latest Travis
+   build has finished, otherwise you'll be downloading an old binary.

    https://dl.bintray.com/docker-compose/$BRANCH_NAME/

@@ -67,22 +79,24 @@ When prompted build the non-linux binaries and test them.
 3. Draft a release from the tag on GitHub (the script will open the window for
    you)

-   In the "Tag version" dropdown, select the tag you just pushed.
+   The tag will only be present on Github when you run the `push-release`
+   script in step 7, but you can pre-fill it at that point.

-4. Paste in installation instructions and release notes. Here's an example - change the Compose version and Docker version as appropriate:
+4. Paste in installation instructions and release notes. Here's an example -
+   change the Compose version and Docker version as appropriate:

-   Firstly, note that Compose 1.5.0 requires Docker 1.8.0 or later.
+   If you're a Mac or Windows user, the best way to install Compose and keep it up-to-date is **[Docker for Mac and Windows](https://www.docker.com/products/docker)**.

-   Secondly, if you're a Mac user, the **[Docker Toolbox](https://www.docker.com/toolbox)** will install Compose 1.5.0 for you, alongside the latest versions of the Docker Engine, Machine and Kitematic.
+   Note that Compose 1.9.0 requires Docker Engine 1.10.0 or later for version 2 of the Compose File format, and Docker Engine 1.9.1 or later for version 1. Docker for Mac and Windows will automatically install the latest version of Docker Engine for you.

-   Otherwise, you can use the usual commands to install/upgrade. Either download the binary:
+   Alternatively, you can use the usual commands to install or upgrade Compose:

-       curl -L https://github.com/docker/compose/releases/download/1.5.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
-       chmod +x /usr/local/bin/docker-compose
+   ```
+   curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
+   chmod +x /usr/local/bin/docker-compose
+   ```

-   Or install the PyPi package:
-
-       pip install -U docker-compose==1.5.0
+   See the [install docs](https://docs.docker.com/compose/install/) for more install options and instructions.

    Here's what's new:

@@ -99,6 +113,8 @@ When prompted build the non-linux binaries and test them.
     ./script/release/push-release

+8. Merge the bump PR.
+
 8. Publish the release on GitHub.

 9. Check that all the binaries download (following the install instructions) and run.

@@ -107,19 +123,7 @@ When prompted build the non-linux binaries and test them.

 ## If its a stable release (not an RC)

-1. Merge the bump PR.
-
-2. Make sure `origin/release` is updated locally:
-
-       git fetch origin
-
-3. Update the `docs` branch on the upstream repo:
-
-       git push git@github.com:docker/compose.git origin/release:docs
-
-4. Let the docs team know that its been updated so they can publish it.
-
-5. Close the releases milestone.
+1. Close the releases milestone.

 ## If its a minor release (1.x.0), rather than a patch release (1.x.y)


@@ -1,7 +1,8 @@
 PyYAML==3.11
 backports.ssl-match-hostname==3.5.0.1; python_version < '3'
 cached-property==1.2.0
-docker-py==1.10.6
+colorama==0.3.7
+docker==2.0.0
 dockerpty==0.4.1
 docopt==0.6.1
 enum34==1.0.4; python_version < '3.4'


@@ -65,8 +65,8 @@ git config "branch.${BRANCH}.release" $VERSION
 editor=${EDITOR:-vim}

-echo "Update versions in compose/__init__.py, script/run/run.sh"
-# $editor docs/install.md
+echo "Update versions in docs/install.md, compose/__init__.py, script/run/run.sh"
+$editor docs/install.md
 $editor compose/__init__.py
 $editor script/run/run.sh


@@ -54,7 +54,7 @@ git push $GITHUB_REPO $VERSION
 echo "Uploading the docker image"
 docker push docker/compose:$VERSION

-echo "Uploading sdist to pypi"
+echo "Uploading sdist to PyPI"
 pandoc -f markdown -t rst README.md -o README.rst
 sed -i -e 's/logo.png?raw=true/https:\/\/github.com\/docker\/compose\/raw\/master\/logo.png?raw=true/' README.rst
 ./script/build/write-git-sha


@@ -15,7 +15,7 @@
 set -e

-VERSION="1.9.0"
+VERSION="1.10.0-rc1"
 IMAGE="docker/compose:$VERSION"

@@ -35,7 +35,7 @@ if [ "$(pwd)" != '/' ]; then
     VOLUMES="-v $(pwd):$(pwd)"
 fi

 if [ -n "$COMPOSE_FILE" ]; then
-    compose_dir=$(dirname $COMPOSE_FILE)
+    compose_dir=$(realpath $(dirname $COMPOSE_FILE))
 fi
 # TODO: also check --file argument
 if [ -n "$compose_dir" ]; then


@@ -29,12 +29,13 @@ def find_version(*file_paths):

 install_requires = [
     'cached-property >= 1.2.0, < 2',
+    'colorama >= 0.3.7, < 0.4',
     'docopt >= 0.6.1, < 0.7',
     'PyYAML >= 3.10, < 4',
     'requests >= 2.6.1, != 2.11.0, < 2.12',
     'texttable >= 0.8.1, < 0.9',
     'websocket-client >= 0.32.0, < 1.0',
-    'docker-py >= 1.10.6, < 2.0',
+    'docker >= 2.0.0, < 3.0',
     'dockerpty >= 0.4.1, < 0.5',
     'six >= 1.3.0, < 2',
     'jsonschema >= 2.5.1, < 3',


@ -21,11 +21,13 @@ from .. import mock
from compose.cli.command import get_project from compose.cli.command import get_project
from compose.container import Container from compose.container import Container
from compose.project import OneOffFilter from compose.project import OneOffFilter
from compose.utils import nanoseconds_from_time_seconds
from tests.integration.testcases import DockerClientTestCase from tests.integration.testcases import DockerClientTestCase
from tests.integration.testcases import get_links from tests.integration.testcases import get_links
from tests.integration.testcases import pull_busybox from tests.integration.testcases import pull_busybox
from tests.integration.testcases import v2_1_only from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_only from tests.integration.testcases import v2_only
from tests.integration.testcases import v3_only
ProcessResult = namedtuple('ProcessResult', 'stdout stderr') ProcessResult = namedtuple('ProcessResult', 'stdout stderr')
@@ -285,6 +287,62 @@ class CLITestCase(DockerClientTestCase):
             'volumes': {},
         }

+    @v3_only()
+    def test_config_v3(self):
+        self.base_dir = 'tests/fixtures/v3-full'
+        result = self.dispatch(['config'])
+        assert yaml.load(result.stdout) == {
+            'version': '3.0',
+            'networks': {},
+            'volumes': {},
+            'services': {
+                'web': {
+                    'image': 'busybox',
+                    'deploy': {
+                        'mode': 'replicated',
+                        'replicas': 6,
+                        'labels': ['FOO=BAR'],
+                        'update_config': {
+                            'parallelism': 3,
+                            'delay': '10s',
+                            'failure_action': 'continue',
+                            'monitor': '60s',
+                            'max_failure_ratio': 0.3,
+                        },
+                        'resources': {
+                            'limits': {
+                                'cpus': '0.001',
+                                'memory': '50M',
+                            },
+                            'reservations': {
+                                'cpus': '0.0001',
+                                'memory': '20M',
+                            },
+                        },
+                        'restart_policy': {
+                            'condition': 'on_failure',
+                            'delay': '5s',
+                            'max_attempts': 3,
+                            'window': '120s',
+                        },
+                        'placement': {
+                            'constraints': ['node=foo'],
+                        },
+                    },
+                    'healthcheck': {
+                        'test': 'cat /etc/passwd',
+                        'interval': 10000000000,
+                        'timeout': 1000000000,
+                        'retries': 5,
+                    },
+                    'stop_grace_period': '20s',
+                },
+            },
+        }
+
     def test_ps(self):
         self.project.get_service('simple').create_container()
         result = self.dispatch(['ps'])
@@ -792,8 +850,8 @@ class CLITestCase(DockerClientTestCase):
         ]
         assert [n['Name'] for n in networks] == [network_with_label]
-        assert networks[0]['Labels'] == {'label_key': 'label_val'}
+        assert 'label_key' in networks[0]['Labels']
+        assert networks[0]['Labels']['label_key'] == 'label_val'

     @v2_1_only()
     def test_up_with_volume_labels(self):
@@ -812,8 +870,8 @@ class CLITestCase(DockerClientTestCase):
         ]
         assert [v['Name'] for v in volumes] == [volume_with_label]
-        assert volumes[0]['Labels'] == {'label_key': 'label_val'}
+        assert 'label_key' in volumes[0]['Labels']
+        assert volumes[0]['Labels']['label_key'] == 'label_val'

     @v2_only()
     def test_up_no_services(self):
@@ -870,6 +928,50 @@ class CLITestCase(DockerClientTestCase):
         assert foo_container.get('HostConfig.NetworkMode') == \
             'container:{}'.format(bar_container.id)

+    @v3_only()
+    def test_up_with_healthcheck(self):
+        def wait_on_health_status(container, status):
+            def condition():
+                container.inspect()
+                return container.get('State.Health.Status') == status
+
+            return wait_on_condition(condition, delay=0.5)
+
+        self.base_dir = 'tests/fixtures/healthcheck'
+        self.dispatch(['up', '-d'], None)
+
+        passes = self.project.get_service('passes')
+        passes_container = passes.containers()[0]
+        assert passes_container.get('Config.Healthcheck') == {
+            "Test": ["CMD-SHELL", "/bin/true"],
+            "Interval": nanoseconds_from_time_seconds(1),
+            "Timeout": nanoseconds_from_time_seconds(30 * 60),
+            "Retries": 1,
+        }
+
+        wait_on_health_status(passes_container, 'healthy')
+
+        fails = self.project.get_service('fails')
+        fails_container = fails.containers()[0]
+        assert fails_container.get('Config.Healthcheck') == {
+            "Test": ["CMD", "/bin/false"],
+            "Interval": nanoseconds_from_time_seconds(2.5),
+            "Retries": 2,
+        }
+
+        wait_on_health_status(fails_container, 'unhealthy')
+
+        disabled = self.project.get_service('disabled')
+        disabled_container = disabled.containers()[0]
+        assert disabled_container.get('Config.Healthcheck') == {
+            "Test": ["NONE"],
+        }
+
+        assert 'Health' not in disabled_container.get('State')
+
     def test_up_with_no_deps(self):
         self.base_dir = 'tests/fixtures/links-composefile'
         self.dispatch(['up', '-d', '--no-deps', 'web'], None)
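The healthcheck assertions above compare against `nanoseconds_from_time_seconds` because Docker's inspect output reports healthcheck durations in nanoseconds. A minimal sketch of that conversion (the real helper lives in `compose.utils`; this body is an assumption for illustration):

```python
def nanoseconds_from_time_seconds(time_seconds):
    # Docker reports healthcheck Interval/Timeout in nanoseconds, so parsed
    # second values are scaled before being compared against inspect data.
    return int(time_seconds * 1000000000)

# The fixture's "30m" timeout expressed in nanoseconds:
print(nanoseconds_from_time_seconds(30 * 60))
```

This is why the `config` test above expects `interval: 10s` to round-trip as the integer `10000000000`.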
@@ -0,0 +1,24 @@
+version: "3"
+services:
+  passes:
+    image: busybox
+    command: top
+    healthcheck:
+      test: "/bin/true"
+      interval: 1s
+      timeout: 30m
+      retries: 1
+
+  fails:
+    image: busybox
+    command: top
+    healthcheck:
+      test: ["CMD", "/bin/false"]
+      interval: 2.5s
+      retries: 2
+
+  disabled:
+    image: busybox
+    command: top
+    healthcheck:
+      disable: true
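The fixture above writes `test` three ways: a plain string for `passes`, an exec-form list for `fails`, and `disable: true` for `disabled`. The CLI test earlier asserts these surface to the engine as `CMD-SHELL`, `CMD`, and the `NONE` sentinel respectively. A hypothetical helper sketching that normalization (the function name and placement are assumptions, not Compose's actual code):

```python
def normalize_healthcheck_test(healthcheck):
    # String form runs through the shell -> ["CMD-SHELL", <string>];
    # list form is passed to the engine as-is (e.g. ["CMD", ...]);
    # disable: true becomes the sentinel ["NONE"].
    if healthcheck.get('disable'):
        return ['NONE']
    test = healthcheck['test']
    if isinstance(test, str):
        return ['CMD-SHELL', test]
    return list(test)
```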
@@ -0,0 +1,37 @@
+version: "3"
+services:
+  web:
+    image: busybox
+
+    deploy:
+      mode: replicated
+      replicas: 6
+      labels: [FOO=BAR]
+      update_config:
+        parallelism: 3
+        delay: 10s
+        failure_action: continue
+        monitor: 60s
+        max_failure_ratio: 0.3
+      resources:
+        limits:
+          cpus: '0.001'
+          memory: 50M
+        reservations:
+          cpus: '0.0001'
+          memory: 20M
+      restart_policy:
+        condition: on_failure
+        delay: 5s
+        max_attempts: 3
+        window: 120s
+      placement:
+        constraints: [node=foo]
+
+    healthcheck:
+      test: cat /etc/passwd
+      interval: 10s
+      timeout: 1s
+      retries: 5
+
+    stop_grace_period: 20s
@@ -0,0 +1,17 @@
+from __future__ import absolute_import
+from __future__ import unicode_literals
+
+from .testcases import DockerClientTestCase
+from compose.const import LABEL_NETWORK
+from compose.const import LABEL_PROJECT
+from compose.network import Network
+
+
+class NetworkTest(DockerClientTestCase):
+    def test_network_default_labels(self):
+        net = Network(self.client, 'composetest', 'foonet')
+        net.ensure()
+        net_data = net.inspect()
+        labels = net_data['Labels']
+        assert labels[LABEL_NETWORK] == net.name
+        assert labels[LABEL_PROJECT] == net.project
@@ -19,6 +19,8 @@ from compose.config.types import VolumeSpec
 from compose.const import LABEL_PROJECT
 from compose.const import LABEL_SERVICE
 from compose.container import Container
+from compose.errors import HealthCheckFailed
+from compose.errors import NoHealthCheckConfigured
 from compose.project import Project
 from compose.project import ProjectError
 from compose.service import ConvergenceStrategy
@@ -942,8 +944,8 @@ class ProjectTest(DockerClientTestCase):
         ]
         assert [n['Name'] for n in networks] == ['composetest_{}'.format(network_name)]
-        assert networks[0]['Labels'] == {'label_key': 'label_val'}
+        assert 'label_key' in networks[0]['Labels']
+        assert networks[0]['Labels']['label_key'] == 'label_val'

     @v2_only()
     def test_project_up_volumes(self):
@@ -1009,7 +1011,8 @@ class ProjectTest(DockerClientTestCase):
         assert [v['Name'] for v in volumes] == ['composetest_{}'.format(volume_name)]
-        assert volumes[0]['Labels'] == {'label_key': 'label_val'}
+        assert 'label_key' in volumes[0]['Labels']
+        assert volumes[0]['Labels']['label_key'] == 'label_val'

     @v2_only()
     def test_project_up_logging_with_multiple_files(self):
@@ -1374,3 +1377,115 @@ class ProjectTest(DockerClientTestCase):
             ctnr for ctnr in project._labeled_containers()
             if ctnr.labels.get(LABEL_SERVICE) == 'service1'
         ]) == 0
+
+    @v2_1_only()
+    def test_project_up_healthy_dependency(self):
+        config_dict = {
+            'version': '2.1',
+            'services': {
+                'svc1': {
+                    'image': 'busybox:latest',
+                    'command': 'top',
+                    'healthcheck': {
+                        'test': 'exit 0',
+                        'retries': 1,
+                        'timeout': '10s',
+                        'interval': '0.1s'
+                    },
+                },
+                'svc2': {
+                    'image': 'busybox:latest',
+                    'command': 'top',
+                    'depends_on': {
+                        'svc1': {'condition': 'service_healthy'},
+                    }
+                }
+            }
+        }
+        config_data = build_config(config_dict)
+        project = Project.from_config(
+            name='composetest', config_data=config_data, client=self.client
+        )
+        project.up()
+        containers = project.containers()
+        assert len(containers) == 2
+
+        svc1 = project.get_service('svc1')
+        svc2 = project.get_service('svc2')
+        assert 'svc1' in svc2.get_dependency_names()
+        assert svc1.is_healthy()
+
+    @v2_1_only()
+    def test_project_up_unhealthy_dependency(self):
+        config_dict = {
+            'version': '2.1',
+            'services': {
+                'svc1': {
+                    'image': 'busybox:latest',
+                    'command': 'top',
+                    'healthcheck': {
+                        'test': 'exit 1',
+                        'retries': 1,
+                        'timeout': '10s',
+                        'interval': '0.1s'
+                    },
+                },
+                'svc2': {
+                    'image': 'busybox:latest',
+                    'command': 'top',
+                    'depends_on': {
+                        'svc1': {'condition': 'service_healthy'},
+                    }
+                }
+            }
+        }
+        config_data = build_config(config_dict)
+        project = Project.from_config(
+            name='composetest', config_data=config_data, client=self.client
+        )
+        with pytest.raises(HealthCheckFailed):
+            project.up()
+        containers = project.containers()
+        assert len(containers) == 1
+
+        svc1 = project.get_service('svc1')
+        svc2 = project.get_service('svc2')
+        assert 'svc1' in svc2.get_dependency_names()
+        with pytest.raises(HealthCheckFailed):
+            svc1.is_healthy()
+
+    @v2_1_only()
+    def test_project_up_no_healthcheck_dependency(self):
+        config_dict = {
+            'version': '2.1',
+            'services': {
+                'svc1': {
+                    'image': 'busybox:latest',
+                    'command': 'top',
+                    'healthcheck': {
+                        'disable': True
+                    },
+                },
+                'svc2': {
+                    'image': 'busybox:latest',
+                    'command': 'top',
+                    'depends_on': {
+                        'svc1': {'condition': 'service_healthy'},
+                    }
+                }
+            }
+        }
+        config_data = build_config(config_dict)
+        project = Project.from_config(
+            name='composetest', config_data=config_data, client=self.client
+        )
+        with pytest.raises(NoHealthCheckConfigured):
+            project.up()
+        containers = project.containers()
+        assert len(containers) == 1
+
+        svc1 = project.get_service('svc1')
+        svc2 = project.get_service('svc2')
+        assert 'svc1' in svc2.get_dependency_names()
+        with pytest.raises(NoHealthCheckConfigured):
+            svc1.is_healthy()
@@ -30,6 +30,7 @@ from compose.service import ConvergencePlan
 from compose.service import ConvergenceStrategy
 from compose.service import NetworkMode
 from compose.service import Service
+from tests.integration.testcases import v2_1_only
 from tests.integration.testcases import v2_only

@@ -842,6 +843,18 @@ class ServiceTest(DockerClientTestCase):
         container = create_and_start_container(service)
         self.assertEqual(container.get('HostConfig.PidMode'), 'host')

+    @v2_1_only()
+    def test_userns_mode_none_defined(self):
+        service = self.create_service('web', userns_mode=None)
+        container = create_and_start_container(service)
+        self.assertEqual(container.get('HostConfig.UsernsMode'), '')
+
+    @v2_1_only()
+    def test_userns_mode_host(self):
+        service = self.create_service('web', userns_mode='host')
+        container = create_and_start_container(service)
+        self.assertEqual(container.get('HostConfig.UsernsMode'), 'host')
+
     def test_dns_no_value(self):
         service = self.create_service('web')
         container = create_and_start_container(service)
@@ -45,11 +45,11 @@ def engine_max_version():
     return V2_1


-def v2_only():
+def build_version_required_decorator(ignored_versions):
     def decorator(f):
         @functools.wraps(f)
         def wrapper(self, *args, **kwargs):
-            if engine_max_version() == V1:
+            if engine_max_version() in ignored_versions:
                 skip("Engine version is too low")
                 return
             return f(self, *args, **kwargs)

@@ -58,17 +58,16 @@ def v2_only():
     return decorator


-def v2_1_only():
-    def decorator(f):
-        @functools.wraps(f)
-        def wrapper(self, *args, **kwargs):
-            if engine_max_version() in (V1, V2_0):
-                skip('Engine version is too low')
-                return
-            return f(self, *args, **kwargs)
-        return wrapper
-    return decorator
+def v2_only():
+    return build_version_required_decorator((V1,))
+
+
+def v2_1_only():
+    return build_version_required_decorator((V1, V2_0))
+
+
+def v3_only():
+    return build_version_required_decorator((V1, V2_0, V2_1))


 class DockerClientTestCase(unittest.TestCase):
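The refactor above collapses the near-identical `v2_only`/`v2_1_only` decorators into one factory and adds `v3_only` as a one-liner. A self-contained sketch of the same pattern; here `engine_max_version` and `skip` are passed in as parameters purely for illustration, whereas the real code closes over module-level helpers:

```python
import functools

def build_version_required_decorator(ignored_versions, engine_max_version, skip):
    """Skip a test method when the engine's best supported schema version
    falls in ignored_versions; otherwise run it unchanged."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            if engine_max_version() in ignored_versions:
                skip("Engine version is too low")
                return
            return f(self, *args, **kwargs)
        return wrapper
    return decorator
```

Each version gate then reduces to a single call with the tuple of versions it must reject, which is the whole point of the diff.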
@@ -4,6 +4,8 @@ from __future__ import unicode_literals
 from docker.errors import DockerException

 from .testcases import DockerClientTestCase
+from compose.const import LABEL_PROJECT
+from compose.const import LABEL_VOLUME
 from compose.volume import Volume

@@ -94,3 +96,11 @@ class VolumeTest(DockerClientTestCase):
         assert vol.exists() is False
         vol.create()
         assert vol.exists() is True
+
+    def test_volume_default_labels(self):
+        vol = self.create_volume('volume01')
+        vol.create()
+        vol_data = vol.inspect()
+        labels = vol_data['Labels']
+        assert labels[LABEL_VOLUME] == vol.name
+        assert labels[LABEL_PROJECT] == vol.project
@@ -15,7 +15,7 @@ from compose.config.config import Config
 def mock_service():
     return mock.create_autospec(
         service.Service,
-        client=mock.create_autospec(docker.Client),
+        client=mock.create_autospec(docker.APIClient),
         options={})
@@ -97,7 +97,7 @@ class CLITestCase(unittest.TestCase):
     @mock.patch('compose.cli.main.RunOperation', autospec=True)
     @mock.patch('compose.cli.main.PseudoTerminal', autospec=True)
     def test_run_interactive_passes_logs_false(self, mock_pseudo_terminal, mock_run_operation):
-        mock_client = mock.create_autospec(docker.Client)
+        mock_client = mock.create_autospec(docker.APIClient)
         project = Project.from_config(
             name='composetest',
             client=mock_client,

@@ -128,7 +128,7 @@ class CLITestCase(unittest.TestCase):
         assert call_kwargs['logs'] is False

     def test_run_service_with_restart_always(self):
-        mock_client = mock.create_autospec(docker.Client)
+        mock_client = mock.create_autospec(docker.APIClient)
         project = Project.from_config(
             name='composetest',
from compose.config.config import V1 from compose.config.config import V1
from compose.config.config import V2_0 from compose.config.config import V2_0
from compose.config.config import V2_1 from compose.config.config import V2_1
from compose.config.config import V3_0
from compose.config.environment import Environment from compose.config.environment import Environment
from compose.config.errors import ConfigurationError from compose.config.errors import ConfigurationError
from compose.config.errors import VERSION_EXPLANATION from compose.config.errors import VERSION_EXPLANATION
from compose.config.types import VolumeSpec from compose.config.types import VolumeSpec
from compose.const import IS_WINDOWS_PLATFORM from compose.const import IS_WINDOWS_PLATFORM
from compose.utils import nanoseconds_from_time_seconds
from tests import mock from tests import mock
from tests import unittest from tests import unittest
@ -156,9 +158,14 @@ class ConfigTest(unittest.TestCase):
for version in ['2', '2.0']: for version in ['2', '2.0']:
cfg = config.load(build_config_details({'version': version})) cfg = config.load(build_config_details({'version': version}))
assert cfg.version == V2_0 assert cfg.version == V2_0
cfg = config.load(build_config_details({'version': '2.1'})) cfg = config.load(build_config_details({'version': '2.1'}))
assert cfg.version == V2_1 assert cfg.version == V2_1
for version in ['3', '3.0']:
cfg = config.load(build_config_details({'version': version}))
assert cfg.version == V3_0
def test_v1_file_version(self): def test_v1_file_version(self):
cfg = config.load(build_config_details({'web': {'image': 'busybox'}})) cfg = config.load(build_config_details({'web': {'image': 'busybox'}}))
assert cfg.version == V1 assert cfg.version == V1
@ -913,7 +920,10 @@ class ConfigTest(unittest.TestCase):
'build': {'context': os.path.abspath('/')}, 'build': {'context': os.path.abspath('/')},
'image': 'example/web', 'image': 'example/web',
'volumes': [VolumeSpec.parse('/home/user/project:/code')], 'volumes': [VolumeSpec.parse('/home/user/project:/code')],
'depends_on': ['db', 'other'], 'depends_on': {
'db': {'condition': 'service_started'},
'other': {'condition': 'service_started'},
},
}, },
{ {
'name': 'db', 'name': 'db',
@ -3048,7 +3058,9 @@ class ExtendsTest(unittest.TestCase):
image: example image: example
""") """)
services = load_from_filename(str(tmpdir.join('docker-compose.yml'))) services = load_from_filename(str(tmpdir.join('docker-compose.yml')))
assert service_sort(services)[2]['depends_on'] == ['other'] assert service_sort(services)[2]['depends_on'] == {
'other': {'condition': 'service_started'}
}
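The updated assertions above show that, from schema 2.1 on, the shorthand list `depends_on: [db, other]` is normalized into a mapping with an explicit condition per dependency. A hypothetical sketch of that normalization (the helper name is an assumption; Compose performs this inside its config loading):

```python
def normalize_depends_on(depends_on):
    # List shorthand gets the default condition; an explicit mapping
    # (e.g. {'svc1': {'condition': 'service_healthy'}}) passes through as-is.
    if isinstance(depends_on, list):
        return {dep: {'condition': 'service_started'} for dep in depends_on}
    return depends_on
```

This is what lets `service_started` and `service_healthy` dependencies share one code path in the integration tests earlier in the diff.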
     @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')

@@ -3165,6 +3177,54 @@ class BuildPathTest(unittest.TestCase):
             assert 'build path' in exc.exconly()


+class HealthcheckTest(unittest.TestCase):
+    def test_healthcheck(self):
+        service_dict = make_service_dict(
+            'test',
+            {'healthcheck': {
+                'test': ['CMD', 'true'],
+                'interval': '1s',
+                'timeout': '1m',
+                'retries': 3,
+            }},
+            '.',
+        )
+
+        assert service_dict['healthcheck'] == {
+            'test': ['CMD', 'true'],
+            'interval': nanoseconds_from_time_seconds(1),
+            'timeout': nanoseconds_from_time_seconds(60),
+            'retries': 3,
+        }
+
+    def test_disable(self):
+        service_dict = make_service_dict(
+            'test',
+            {'healthcheck': {
+                'disable': True,
+            }},
+            '.',
+        )
+
+        assert service_dict['healthcheck'] == {
+            'test': ['NONE'],
+        }
+
+    def test_disable_with_other_config_is_invalid(self):
+        with pytest.raises(ConfigurationError) as excinfo:
+            make_service_dict(
+                'invalid-healthcheck',
+                {'healthcheck': {
+                    'disable': True,
+                    'interval': '1s',
+                }},
+                '.',
+            )
+
+        assert 'invalid-healthcheck' in excinfo.exconly()
+        assert 'disable' in excinfo.exconly()
+
+
 class GetDefaultConfigFilesTestCase(unittest.TestCase):
     files = [
@@ -98,7 +98,7 @@ class ContainerTest(unittest.TestCase):
         self.assertEqual(container.name_without_project, "custom_name_of_container")

     def test_inspect_if_not_inspected(self):
-        mock_client = mock.create_autospec(docker.Client)
+        mock_client = mock.create_autospec(docker.APIClient)
         container = Container(mock_client, dict(Id="the_id"))
         container.inspect_if_not_inspected()
@@ -25,7 +25,7 @@ deps = {
 def get_deps(obj):
-    return deps[obj]
+    return [(dep, None) for dep in deps[obj]]


 def test_parallel_execute():
@@ -19,7 +19,7 @@ from compose.service import Service
 class ProjectTest(unittest.TestCase):
     def setUp(self):
-        self.mock_client = mock.create_autospec(docker.Client)
+        self.mock_client = mock.create_autospec(docker.APIClient)

     def test_from_config(self):
         config = Config(
@@ -34,7 +34,7 @@ from compose.service import warn_on_masked_volume
 class ServiceTest(unittest.TestCase):
     def setUp(self):
-        self.mock_client = mock.create_autospec(docker.Client)
+        self.mock_client = mock.create_autospec(docker.APIClient)

     def test_containers(self):
         service = Service('db', self.mock_client, 'myproject', image='foo')

@@ -666,7 +666,7 @@ class ServiceTest(unittest.TestCase):
 class TestServiceNetwork(object):

     def test_connect_container_to_networks_short_aliase_exists(self):
-        mock_client = mock.create_autospec(docker.Client)
+        mock_client = mock.create_autospec(docker.APIClient)
         service = Service(
             'db',
             mock_client,

@@ -751,7 +751,7 @@ class NetTestCase(unittest.TestCase):
     def test_network_mode_service(self):
         container_id = 'bbbb'
         service_name = 'web'
-        mock_client = mock.create_autospec(docker.Client)
+        mock_client = mock.create_autospec(docker.APIClient)
         mock_client.containers.return_value = [
             {'Id': container_id, 'Name': container_id, 'Image': 'abcd'},
         ]

@@ -765,7 +765,7 @@ class NetTestCase(unittest.TestCase):
     def test_network_mode_service_no_containers(self):
         service_name = 'web'
-        mock_client = mock.create_autospec(docker.Client)
+        mock_client = mock.create_autospec(docker.APIClient)
         mock_client.containers.return_value = []

         service = Service(name=service_name, client=mock_client)

@@ -783,7 +783,7 @@ def build_mount(destination, source, mode='rw'):
 class ServiceVolumesTest(unittest.TestCase):
     def setUp(self):
-        self.mock_client = mock.create_autospec(docker.Client)
+        self.mock_client = mock.create_autospec(docker.APIClient)

     def test_build_volume_binding(self):
         binding = build_volume_binding(VolumeSpec.parse('/outside:/inside', True))
@@ -0,0 +1,56 @@
+from __future__ import absolute_import
+from __future__ import unicode_literals
+
+from compose import timeparse
+
+
+def test_milli():
+    assert timeparse.timeparse('5ms') == 0.005
+
+
+def test_milli_float():
+    assert timeparse.timeparse('50.5ms') == 0.0505
+
+
+def test_second_milli():
+    assert timeparse.timeparse('200s5ms') == 200.005
+
+
+def test_second_milli_micro():
+    assert timeparse.timeparse('200s5ms10us') == 200.00501
+
+
+def test_second():
+    assert timeparse.timeparse('200s') == 200
+
+
+def test_second_as_float():
+    assert timeparse.timeparse('20.5s') == 20.5
+
+
+def test_minute():
+    assert timeparse.timeparse('32m') == 1920
+
+
+def test_hour_minute():
+    assert timeparse.timeparse('2h32m') == 9120
+
+
+def test_minute_as_float():
+    assert timeparse.timeparse('1.5m') == 90
+
+
+def test_hour_minute_second():
+    assert timeparse.timeparse('5h34m56s') == 20096
+
+
+def test_invalid_with_space():
+    assert timeparse.timeparse('5h 34m 56s') is None
+
+
+def test_invalid_with_comma():
+    assert timeparse.timeparse('5h,34m,56s') is None
+
+
+def test_invalid_with_empty_string():
+    assert timeparse.timeparse('') is None
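The cases above pin down the duration grammar: concatenated unit suffixes (`h`, `m`, `s`, `ms`, `us`) in descending order, floats allowed per component, and `None` for anything containing separators. A regex-based sketch consistent with those cases (an illustrative reimplementation, not Compose's actual `compose/timeparse.py`):

```python
import re

# Seconds per supported suffix.
MULTIPLIERS = {'h': 3600, 'm': 60, 's': 1, 'ms': 0.001, 'us': 0.000001}

# Units must appear in descending order; each component is optional.
PATTERN = re.compile(
    r'^(?:(?P<h>\d+(?:\.\d+)?)h)?'
    r'(?:(?P<m>\d+(?:\.\d+)?)m)?'
    r'(?:(?P<s>\d+(?:\.\d+)?)s)?'
    r'(?:(?P<ms>\d+(?:\.\d+)?)ms)?'
    r'(?:(?P<us>\d+(?:\.\d+)?)us)?$'
)

def timeparse(value):
    # Empty input matches the all-optional pattern, so reject it explicitly;
    # spaces or commas make the anchored pattern fail, returning None.
    if not value:
        return None
    match = PATTERN.match(value)
    if not match:
        return None
    return sum(
        MULTIPLIERS[unit] * float(amount)
        for unit, amount in match.groupdict().items()
        if amount is not None
    )
```

Note the regex relies on backtracking: for `50.5ms` the `m` group first consumes `50.5m`, fails to complete the match, and the engine retries with the `ms` group instead.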
@@ -10,7 +10,7 @@ from tests import mock
 @pytest.fixture
 def mock_client():
-    return mock.create_autospec(docker.Client)
+    return mock.create_autospec(docker.APIClient)


 class TestVolume(object):