mirror of https://github.com/docker/compose.git
commit c1b7d6c6ad

CHANGELOG.md
@@ -1,6 +1,61 @@
 Change log
 ==========
 
+1.6.1 (2016-02-23)
+------------------
+
+Bug Fixes
+
+- Fixed a bug where recreating a container multiple times would cause the
+  new container to be started without the previous volumes.
+
+- Fixed a bug where Compose would set the value of unset environment variables
+  to an empty string, instead of a key without a value.
+
+- Provide a better error message when Compose requires a more recent version
+  of the Docker API.
+
+- Add a missing config field `network.aliases` which allows setting a network
+  scoped alias for a service.
+
+- Fixed a bug where `run` would not start services listed in `depends_on`.
+
+- Fixed a bug where `networks` and `network_mode` were not merged when using
+  extends or multiple Compose files.
+
+- Fixed a bug with service aliases where the short container id alias only
+  contained 10 characters, instead of the 12 characters used in previous
+  versions.
+
+- Added a missing log message when creating a new named volume.
+
+- Fixed a bug where `build.args` was not merged when using `extends` or
+  multiple Compose files.
+
+- Fixed some bugs with config validation when null values or incorrect types
+  were used instead of a mapping.
+
+- Fixed a bug where a `build` section without a `context` would show a stack
+  trace instead of a helpful validation message.
+
+- Improved compatibility with Swarm by only setting a container affinity to
+  the previous instance of a service's container when the service uses an
+  anonymous container volume. Previously the affinity was always set on all
+  containers.
+
+- Fixed the validation of some `driver_opts`, which would cause an error if a
+  number was used instead of a string.
+
+- Some improvements to the `run.sh` script used by the Compose container
+  install option.
+
+- Fixed a bug with `up --abort-on-container-exit` where Compose would exit,
+  but would not stop other containers.
+
+- Corrected the warning message that is printed when a boolean value is used
+  as a value in a mapping.
+
+
 1.6.0 (2016-01-15)
 ------------------
 
@@ -14,7 +69,7 @@ Major Features:
     1.6 exactly as they do today.
 
     Check the upgrade guide for full details:
-    https://docs.docker.com/compose/compose-file/upgrading
+    https://docs.docker.com/compose/compose-file#upgrading
 
 - Support for networking has exited experimental status and is the recommended
   way to enable communication between containers.
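
The unset-variable fix above corresponds to the `resolve_env_var` change further down in this diff. As a minimal sketch of the new behavior (plain Python, not Compose's public API):

    import os

    def resolve_env_var(key, val):
        if val is not None:
            return key, val
        elif key in os.environ:
            return key, os.environ[key]
        else:
            return key, None  # 1.6.0 returned (key, '') here

    # With SECRET unset in the host environment:
    print(resolve_env_var('SECRET', None))
    # ('SECRET', None) -- later serialized as a bare 'SECRET' key
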
@@ -6,11 +6,11 @@ Compose is a tool for defining and running multi-container Docker applications.
 With Compose, you use a Compose file to configure your application's services.
 Then, using a single command, you create and start all the services
 from your configuration. To learn more about all the features of Compose
-see [the list of features](docs/index.md#features).
+see [the list of features](https://github.com/docker/compose/blob/release/docs/overview.md#features).
 
 Compose is great for development, testing, and staging environments, as well as
 CI workflows. You can learn more about each case in
-[Common Use Cases](docs/index.md#common-use-cases).
+[Common Use Cases](https://github.com/docker/compose/blob/release/docs/overview.md#common-use-cases).
 
 Using Compose is basically a three-step process.
 
@@ -34,7 +34,7 @@ A `docker-compose.yml` looks like this:
     image: redis
 
 For more information about the Compose file, see the
-[Compose file reference](docs/compose-file.md)
+[Compose file reference](https://github.com/docker/compose/blob/release/docs/compose-file.md)
 
 Compose has commands for managing the whole lifecycle of your application:
SWARM.md
@@ -1,39 +1 @@
-Docker Compose/Swarm integration
-================================
-
-Eventually, Compose and Swarm aim to have full integration, meaning you can point a Compose app at a Swarm cluster and have it all just work as if you were using a single Docker host.
-
-However, integration is currently incomplete: Compose can create containers on a Swarm cluster, but the majority of Compose apps won't work out of the box unless all containers are scheduled on one host, because links between containers do not work across hosts.
-
-Docker networking is [getting overhauled](https://github.com/docker/libnetwork) in such a way that it'll fit the multi-host model much better. For now, linked containers are automatically scheduled on the same host.
-
-Building
---------
-
-Swarm can build an image from a Dockerfile just like a single-host Docker instance can, but the resulting image will only live on a single node and won't be distributed to other nodes.
-
-If you want to use Compose to scale the service in question to multiple nodes, you'll have to build it yourself, push it to a registry (e.g. the Docker Hub) and reference it from `docker-compose.yml`:
-
-    $ docker build -t myusername/web .
-    $ docker push myusername/web
-
-    $ cat docker-compose.yml
-    web:
-      image: myusername/web
-
-    $ docker-compose up -d
-    $ docker-compose scale web=3
-
-Scheduling
-----------
-
-Swarm offers a rich set of scheduling and affinity hints, enabling you to control where containers are located. They are specified via container environment variables, so you can use Compose's `environment` option to set them.
-
-    environment:
-      # Schedule containers on a node that has the 'storage' label set to 'ssd'
-      - "constraint:storage==ssd"
-
-      # Schedule containers where the 'redis' image is already pulled
-      - "affinity:image==redis"
-
-For the full set of available filters and expressions, see the [Swarm documentation](https://docs.docker.com/swarm/scheduler/filter/).
+This file has moved to: https://docs.docker.com/compose/swarm/
@@ -1,4 +1,4 @@
 from __future__ import absolute_import
 from __future__ import unicode_literals
 
-__version__ = '1.6.0'
+__version__ = '1.6.1'
@@ -19,6 +19,7 @@ from ..config import config
 from ..config import ConfigurationError
 from ..config import parse_environment
 from ..config.serialize import serialize_config
+from ..const import API_VERSION_TO_ENGINE_VERSION
 from ..const import DEFAULT_TIMEOUT
 from ..const import HTTP_TIMEOUT
 from ..const import IS_WINDOWS_PLATFORM
@@ -64,7 +65,7 @@ def main():
         log.error("No such command: %s\n\n%s", e.command, commands)
         sys.exit(1)
     except APIError as e:
-        log.error(e.explanation)
+        log_api_error(e)
         sys.exit(1)
     except BuildError as e:
         log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
@@ -84,6 +85,22 @@ def main():
         sys.exit(1)
 
 
+def log_api_error(e):
+    if 'client is newer than server' in e.explanation:
+        # we need JSON formatted errors. In the meantime...
+        # TODO: fix this by refactoring project dispatch
+        # http://github.com/docker/compose/pull/2832#commitcomment-15923800
+        client_version = e.explanation.split('client API version: ')[1].split(',')[0]
+        log.error(
+            "The engine version is lesser than the minimum required by "
+            "compose. Your current project requires a Docker Engine of "
+            "version {version} or superior.".format(
+                version=API_VERSION_TO_ENGINE_VERSION[client_version]
+            ))
+    else:
+        log.error(e.explanation)
+
+
 def setup_logging():
     root_logger = logging.getLogger()
     root_logger.addHandler(console_handler)
@@ -645,6 +662,10 @@ class TopLevelCommand(DocoptCommand):
             print("Attaching to", list_containers(log_printer.containers))
         log_printer.run()
 
+        if cascade_stop:
+            print("Aborting on container exit...")
+            project.stop(service_names=service_names, timeout=timeout)
+
     def version(self, project, options):
         """
         Show version informations
@@ -686,7 +707,7 @@ def image_type_from_opt(flag, value):
 
 def run_one_off_container(container_options, project, service, options):
     if not options['--no-deps']:
-        deps = service.get_linked_service_names()
+        deps = service.get_dependency_names()
         if deps:
             project.up(
                 service_names=deps,
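
To illustrate how the `log_api_error` helper above maps a client API version to the required engine version, here is a standalone sketch using the `API_VERSION_TO_ENGINE_VERSION` table added later in this diff; the error string is a representative docker-py message, not verbatim daemon output:

    API_VERSION_TO_ENGINE_VERSION = {'1.21': '1.9.0', '1.22': '1.10.0'}

    explanation = ("client is newer than server (client API version: 1.22, "
                   "server API version: 1.21)")

    # Same parsing as log_api_error above
    client_version = explanation.split('client API version: ')[1].split(',')[0]
    print(API_VERSION_TO_ENGINE_VERSION[client_version])  # 1.10.0
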
@@ -16,6 +16,7 @@ from cached_property import cached_property
 
 from ..const import COMPOSEFILE_V1 as V1
 from ..const import COMPOSEFILE_V2_0 as V2_0
+from ..utils import build_string_dict
 from .errors import CircularReference
 from .errors import ComposeFileNotFound
 from .errors import ConfigurationError
@@ -32,11 +33,11 @@ from .types import VolumeSpec
 from .validation import match_named_volumes
 from .validation import validate_against_fields_schema
 from .validation import validate_against_service_schema
+from .validation import validate_config_section
 from .validation import validate_depends_on
 from .validation import validate_extends_file_path
 from .validation import validate_network_mode
 from .validation import validate_top_level_object
-from .validation import validate_top_level_service_objects
 from .validation import validate_ulimits
 
 
@@ -87,6 +88,7 @@ ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
     'container_name',
     'dockerfile',
     'logging',
+    'network_mode',
 ]
 
 DOCKER_VALID_URL_PREFIXES = (
@@ -290,8 +292,12 @@ def load(config_details):
     config_details = config_details._replace(config_files=processed_files)
 
     main_file = config_details.config_files[0]
-    volumes = load_mapping(config_details.config_files, 'get_volumes', 'Volume')
-    networks = load_mapping(config_details.config_files, 'get_networks', 'Network')
+    volumes = load_mapping(
+        config_details.config_files, 'get_volumes', 'Volume'
+    )
+    networks = load_mapping(
+        config_details.config_files, 'get_networks', 'Network'
+    )
     service_dicts = load_services(
         config_details.working_dir,
         main_file,
@@ -331,6 +337,11 @@ def load_mapping(config_files, get_func, entity_type):
 
             mapping[name] = config
 
+            if 'driver_opts' in config:
+                config['driver_opts'] = build_string_dict(
+                    config['driver_opts']
+                )
+
     return mapping
 
 
@@ -376,22 +387,31 @@ def load_services(working_dir, config_file, service_configs):
     return build_services(service_config)
 
 
-def process_config_file(config_file, service_name=None):
-    service_dicts = config_file.get_service_dicts()
-    validate_top_level_service_objects(config_file.filename, service_dicts)
+def interpolate_config_section(filename, config, section):
+    validate_config_section(filename, config, section)
+    return interpolate_environment_variables(config, section)
 
-    interpolated_config = interpolate_environment_variables(service_dicts, 'service')
+
+def process_config_file(config_file, service_name=None):
+    services = interpolate_config_section(
+        config_file.filename,
+        config_file.get_service_dicts(),
+        'service')
 
     if config_file.version == V2_0:
         processed_config = dict(config_file.config)
-        processed_config['services'] = services = interpolated_config
-        processed_config['volumes'] = interpolate_environment_variables(
-            config_file.get_volumes(), 'volume')
-        processed_config['networks'] = interpolate_environment_variables(
-            config_file.get_networks(), 'network')
+        processed_config['services'] = services
+        processed_config['volumes'] = interpolate_config_section(
+            config_file.filename,
+            config_file.get_volumes(),
+            'volume')
+        processed_config['networks'] = interpolate_config_section(
+            config_file.filename,
+            config_file.get_networks(),
+            'network')
 
     if config_file.version == V1:
-        processed_config = services = interpolated_config
+        processed_config = services
 
     config_file = config_file._replace(config=processed_config)
     validate_against_fields_schema(config_file)
@@ -600,6 +620,9 @@ def finalize_service(service_config, service_names, version):
     else:
         service_dict['network_mode'] = network_mode
 
+    if 'networks' in service_dict:
+        service_dict['networks'] = parse_networks(service_dict['networks'])
+
     if 'restart' in service_dict:
         service_dict['restart'] = parse_restart_spec(service_dict['restart'])
 
@@ -689,6 +712,7 @@ def merge_service_dicts(base, override, version):
     md.merge_mapping('environment', parse_environment)
     md.merge_mapping('labels', parse_labels)
     md.merge_mapping('ulimits', parse_ulimits)
+    md.merge_mapping('networks', parse_networks)
     md.merge_sequence('links', ServiceLink.parse)
 
     for field in ['volumes', 'devices']:
@@ -711,29 +735,24 @@ def merge_service_dicts(base, override, version):
 
     if version == V1:
         legacy_v1_merge_image_or_build(md, base, override)
-    else:
-        merge_build(md, base, override)
+    elif md.needs_merge('build'):
+        md['build'] = merge_build(md, base, override)
 
     return dict(md)
 
 
 def merge_build(output, base, override):
-    build = {}
+    def to_dict(service):
+        build_config = service.get('build', {})
+        if isinstance(build_config, six.string_types):
+            return {'context': build_config}
+        return build_config
 
-    if 'build' in base:
-        if isinstance(base['build'], six.string_types):
-            build['context'] = base['build']
-        else:
-            build.update(base['build'])
-
-    if 'build' in override:
-        if isinstance(override['build'], six.string_types):
-            build['context'] = override['build']
-        else:
-            build.update(override['build'])
-
-    if build:
-        output['build'] = build
+    md = MergeDict(to_dict(base), to_dict(override))
+    md.merge_scalar('context')
+    md.merge_scalar('dockerfile')
+    md.merge_mapping('args', parse_build_arguments)
+    return dict(md)
 
 
 def legacy_v1_merge_image_or_build(output, base, override):
@@ -790,6 +809,7 @@ def parse_dict_or_list(split_func, type_name, arguments):
 parse_build_arguments = functools.partial(parse_dict_or_list, split_env, 'build arguments')
 parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment')
 parse_labels = functools.partial(parse_dict_or_list, split_label, 'labels')
+parse_networks = functools.partial(parse_dict_or_list, lambda k: (k, None), 'networks')
 
 
 def parse_ulimits(ulimits):
@@ -806,7 +826,7 @@ def resolve_env_var(key, val):
     elif key in os.environ:
         return key, os.environ[key]
     else:
-        return key, ''
+        return key, None
 
 
 def env_vars_from_file(filename):
@@ -853,7 +873,7 @@ def normalize_build(service_dict, working_dir):
         else:
             build.update(service_dict['build'])
             if 'args' in build:
-                build['args'] = resolve_build_args(build)
+                build['args'] = build_string_dict(resolve_build_args(build))
 
         service_dict['build'] = build
 
@@ -876,6 +896,9 @@ def validate_paths(service_dict):
             build_path = build
         elif isinstance(build, dict) and 'context' in build:
             build_path = build['context']
+        else:
+            # We have a build section but no context, so nothing to validate
+            return
 
         if (
             not is_url(build_path) and
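
As a hedged illustration of the `merge_build` rewrite above: both the string and mapping forms of `build` are normalized to a mapping before merging. `MergeDict` is a Compose-internal helper, so this sketch merges with a plain `dict` instead:

    def to_dict(service):
        build_config = service.get('build', {})
        if isinstance(build_config, str):  # six.string_types in the real code
            return {'context': build_config}
        return build_config

    base = {'build': './app'}
    override = {'build': {'dockerfile': 'Dockerfile-alt'}}
    print(dict(to_dict(base), **to_dict(override)))
    # {'context': './app', 'dockerfile': 'Dockerfile-alt'}
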
@@ -21,7 +21,7 @@ def interpolate_environment_variables(config, section):
     )
 
     return dict(
-        (name, process_item(name, config_dict))
+        (name, process_item(name, config_dict or {}))
         for name, config_dict in config.items()
     )
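
The `or {}` guard above is what lets a section entry with a null value (for example a bare `data:` key under `volumes:`) pass through interpolation. A minimal sketch:

    config = {'data': None}  # YAML `volumes: {data:}` parses the value as None

    # Before the fix, processing None raised; now the null collapses to {}
    print(dict((name, (config_dict or {})) for name, config_dict in config.items()))
    # {'data': {}}
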
@@ -23,7 +23,20 @@
             "properties": {
                 "context": {"type": "string"},
                 "dockerfile": {"type": "string"},
-                "args": {"$ref": "#/definitions/list_or_dict"}
+                "args": {
+                    "oneOf": [
+                        {"$ref": "#/definitions/list_of_strings"},
+                        {
+                            "type": "object",
+                            "patternProperties": {
+                                "^.+$": {
+                                    "type": ["string", "number"]
+                                }
+                            },
+                            "additionalProperties": false
+                        }
+                    ]
+                }
             },
             "additionalProperties": false
         }
@@ -107,11 +120,28 @@
         "network_mode": {"type": "string"},
 
         "networks": {
-            "type": "array",
-            "items": {"type": "string"},
-            "uniqueItems": true
+            "oneOf": [
+                {"$ref": "#/definitions/list_of_strings"},
+                {
+                    "type": "object",
+                    "patternProperties": {
+                        "^[a-zA-Z0-9._-]+$": {
+                            "oneOf": [
+                                {
+                                    "type": "object",
+                                    "properties": {
+                                        "aliases": {"$ref": "#/definitions/list_of_strings"}
+                                    },
+                                    "additionalProperties": false
+                                },
+                                {"type": "null"}
+                            ]
+                        }
+                    },
+                    "additionalProperties": false
+                }
+            ]
         },
 
         "pid": {"type": ["string", "null"]},
 
         "ports": {
@@ -195,7 +225,12 @@
             "anyOf": [
                 {"required": ["build"]},
                 {"required": ["image"]}
-            ]
+            ],
+            "properties": {
+                "build": {
+                    "required": ["context"]
+                }
+            }
         }
     }
 }
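
A quick way to see what the new `networks` schema accepts, assuming the `jsonschema` package (which Compose uses for validation) and with the `list_of_strings` reference inlined to keep the sketch self-contained:

    from jsonschema import Draft4Validator

    networks_schema = {
        "oneOf": [
            # Inlined stand-in for the list_of_strings $ref
            {"type": "array", "items": {"type": "string"}, "uniqueItems": True},
            {
                "type": "object",
                "patternProperties": {
                    "^[a-zA-Z0-9._-]+$": {
                        "oneOf": [
                            {
                                "type": "object",
                                "properties": {
                                    "aliases": {"type": "array", "items": {"type": "string"}}
                                },
                                "additionalProperties": False,
                            },
                            {"type": "null"},
                        ]
                    }
                },
                "additionalProperties": False,
            },
        ]
    }

    validator = Draft4Validator(networks_schema)
    validator.validate(["front", "back"])                             # list form
    validator.validate({"front": {"aliases": ["db"]}, "back": None})  # mapping form
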
@@ -64,16 +64,16 @@ def format_expose(instance):
 
 @FormatChecker.cls_checks(format="bool-value-in-mapping")
 def format_boolean_in_environment(instance):
-    """
-    Check if there is a boolean in the environment and display a warning.
+    """Check if there is a boolean in the mapping sections and display a warning.
     Always return True here so the validation won't raise an error.
     """
     if isinstance(instance, bool):
         log.warn(
-            "There is a boolean value in the 'environment' key.\n"
-            "Environment variables can only be strings.\n"
-            "Please add quotes to any boolean values to make them string "
-            "(eg, 'True', 'yes', 'N').\n"
+            "There is a boolean value in the 'environment', 'labels', or "
+            "'extra_hosts' field of a service.\n"
+            "These sections only support string values.\n"
+            "Please add quotes to any boolean values to make them strings "
+            "(eg, 'True', 'false', 'yes', 'N', 'on', 'Off').\n"
            "This warning will become an error in a future release. \r\n"
         )
     return True
@@ -91,29 +91,49 @@ def match_named_volumes(service_dict, project_volumes):
     )
 
 
-def validate_top_level_service_objects(filename, service_dicts):
-    """Perform some high level validation of the service name and value.
+def python_type_to_yaml_type(type_):
+    type_name = type(type_).__name__
+    return {
+        'dict': 'mapping',
+        'list': 'array',
+        'int': 'number',
+        'float': 'number',
+        'bool': 'boolean',
+        'unicode': 'string',
+        'str': 'string',
+        'bytes': 'string',
+    }.get(type_name, type_name)
 
-    This validation must happen before interpolation, which must happen
-    before the rest of validation, which is why it's separate from the
-    rest of the service validation.
+
+def validate_config_section(filename, config, section):
+    """Validate the structure of a configuration section. This must be done
+    before interpolation so it's separate from schema validation.
     """
-    for service_name, service_dict in service_dicts.items():
-        if not isinstance(service_name, six.string_types):
-            raise ConfigurationError(
-                "In file '{}' service name: {} needs to be a string, eg '{}'".format(
-                    filename,
-                    service_name,
-                    service_name))
+    if not isinstance(config, dict):
+        raise ConfigurationError(
+            "In file '{filename}', {section} must be a mapping, not "
+            "{type}.".format(
+                filename=filename,
+                section=section,
+                type=anglicize_json_type(python_type_to_yaml_type(config))))
 
-        if not isinstance(service_dict, dict):
+    for key, value in config.items():
+        if not isinstance(key, six.string_types):
             raise ConfigurationError(
-                "In file '{}' service '{}' doesn\'t have any configuration options. "
-                "All top level keys in your docker-compose.yml must map "
-                "to a dictionary of configuration options.".format(
-                    filename, service_name
-                )
-            )
+                "In file '{filename}', the {section} name {name} must be a "
+                "quoted string, i.e. '{name}'.".format(
+                    filename=filename,
+                    section=section,
+                    name=key))
+
+        if not isinstance(value, (dict, type(None))):
+            raise ConfigurationError(
+                "In file '{filename}', {section} '{name}' must be a mapping not "
+                "{type}.".format(
+                    filename=filename,
+                    section=section,
+                    name=key,
+                    type=anglicize_json_type(python_type_to_yaml_type(value))))
 
 
 def validate_top_level_object(config_file):
@@ -182,10 +202,10 @@ def get_unsupported_config_msg(path, error_key):
     return msg
 
 
-def anglicize_validator(validator):
-    if validator in ["array", "object"]:
-        return 'an ' + validator
-    return 'a ' + validator
+def anglicize_json_type(json_type):
+    if json_type.startswith(('a', 'e', 'i', 'o', 'u')):
+        return 'an ' + json_type
+    return 'a ' + json_type
 
 
 def is_service_dict_schema(schema_id):
@@ -253,10 +273,9 @@ def handle_generic_service_error(error, path):
         msg_format = "{path} contains an invalid type, it should be {msg}"
         error_msg = _parse_valid_types_from_validator(error.validator_value)
 
-    # TODO: no test case for this branch, there are no config options
-    # which exercise this branch
     elif error.validator == 'required':
-        msg_format = "{path} is invalid, {msg}"
         error_msg = ", ".join(error.validator_value)
+        msg_format = "{path} is invalid, {msg} is required."
 
     elif error.validator == 'dependencies':
         config_key = list(error.validator_value.keys())[0]
@@ -294,14 +313,14 @@ def _parse_valid_types_from_validator(validator):
     a valid type. Parse the valid types and prefix with the correct article.
     """
     if not isinstance(validator, list):
-        return anglicize_validator(validator)
+        return anglicize_json_type(validator)
 
     if len(validator) == 1:
-        return anglicize_validator(validator[0])
+        return anglicize_json_type(validator[0])
 
     return "{}, or {}".format(
-        ", ".join([anglicize_validator(validator[0])] + validator[1:-1]),
-        anglicize_validator(validator[-1]))
+        ", ".join([anglicize_json_type(validator[0])] + validator[1:-1]),
+        anglicize_json_type(validator[-1]))
 
 
 def _parse_oneof_validator(error):
@@ -313,6 +332,10 @@ def _parse_oneof_validator(error):
     types = []
     for context in error.context:
 
+        if context.validator == 'oneOf':
+            _, error_msg = _parse_oneof_validator(context)
+            return path_string(context.path), error_msg
+
         if context.validator == 'required':
             return (None, context.message)
 
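
The two helpers above combine to produce the wording used by the new `validate_config_section` error messages; run standalone:

    def python_type_to_yaml_type(type_):
        type_name = type(type_).__name__
        return {
            'dict': 'mapping', 'list': 'array', 'int': 'number',
            'float': 'number', 'bool': 'boolean', 'unicode': 'string',
            'str': 'string', 'bytes': 'string',
        }.get(type_name, type_name)

    def anglicize_json_type(json_type):
        if json_type.startswith(('a', 'e', 'i', 'o', 'u')):
            return 'an ' + json_type
        return 'a ' + json_type

    print(anglicize_json_type(python_type_to_yaml_type([])))    # 'an array'
    print(anglicize_json_type(python_type_to_yaml_type(True)))  # 'a boolean'
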
@@ -22,3 +22,8 @@ API_VERSIONS = {
     COMPOSEFILE_V1: '1.21',
     COMPOSEFILE_V2_0: '1.22',
 }
+
+API_VERSION_TO_ENGINE_VERSION = {
+    API_VERSIONS[COMPOSEFILE_V1]: '1.9.0',
+    API_VERSIONS[COMPOSEFILE_V2_0]: '1.10.0'
+}
@@ -60,7 +60,7 @@ class Container(object):
 
     @property
     def short_id(self):
-        return self.id[:10]
+        return self.id[:12]
 
     @property
     def name(self):
@@ -134,7 +134,11 @@ class Container(object):
 
     @property
     def environment(self):
-        return dict(var.split("=", 1) for var in self.get('Config.Env') or [])
+        def parse_env(var):
+            if '=' in var:
+                return var.split("=", 1)
+            return var, None
+        return dict(parse_env(var) for var in self.get('Config.Env') or [])
 
     @property
     def exit_code(self):
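
The `parse_env` change above, exercised standalone: entries in `Config.Env` without an `=` (now possible, since unset variables pass through as bare keys) map to `None` instead of raising:

    def parse_env(var):
        if '=' in var:
            return var.split("=", 1)
        return var, None

    env = ['PATH=/usr/bin', 'TERM']
    print(dict(parse_env(v) for v in env))
    # {'PATH': '/usr/bin', 'TERM': None}
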
@@ -159,18 +159,26 @@ class ProjectNetworks(object):
             network.ensure()
 
 
-def get_network_names_for_service(service_dict):
+def get_network_aliases_for_service(service_dict):
     if 'network_mode' in service_dict:
-        return []
-    return service_dict.get('networks', ['default'])
+        return {}
+    networks = service_dict.get('networks', {'default': None})
+    return dict(
+        (net, (config or {}).get('aliases', []))
+        for net, config in networks.items()
+    )
+
+
+def get_network_names_for_service(service_dict):
+    return get_network_aliases_for_service(service_dict).keys()
 
 
 def get_networks(service_dict, network_definitions):
-    networks = []
-    for name in get_network_names_for_service(service_dict):
+    networks = {}
+    for name, aliases in get_network_aliases_for_service(service_dict).items():
         network = network_definitions.get(name)
         if network:
-            networks.append(network.full_name)
+            networks[network.full_name] = aliases
         else:
             raise ConfigurationError(
                 'Service "{}" uses an undefined network "{}"'
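
The new aliases lookup above, applied to a hypothetical service dict in the post-parse form (`networks` as a mapping of network name to config):

    def get_network_aliases_for_service(service_dict):
        if 'network_mode' in service_dict:
            return {}
        networks = service_dict.get('networks', {'default': None})
        return dict(
            (net, (config or {}).get('aliases', []))
            for net, config in networks.items()
        )

    svc = {'networks': {'front': {'aliases': ['forward_facing']}, 'back': None}}
    print(get_network_aliases_for_service(svc))
    # {'front': ['forward_facing'], 'back': []}
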
@@ -69,11 +69,13 @@ class Project(object):
             if use_networking:
                 service_networks = get_networks(service_dict, networks)
             else:
-                service_networks = []
+                service_networks = {}
 
             service_dict.pop('networks', None)
             links = project.get_links(service_dict)
-            network_mode = project.get_network_mode(service_dict, service_networks)
+            network_mode = project.get_network_mode(
+                service_dict, list(service_networks.keys())
+            )
             volumes_from = get_volumes_from(project, service_dict)
 
             if config_data.version != V1:
@@ -123,7 +123,7 @@ class Service(object):
         self.links = links or []
         self.volumes_from = volumes_from or []
         self.network_mode = network_mode or NetworkMode(None)
-        self.networks = networks or []
+        self.networks = networks or {}
         self.options = options
 
     def containers(self, stopped=False, one_off=False, filters={}):
@@ -431,14 +431,14 @@ class Service(object):
     def connect_container_to_networks(self, container):
         connected_networks = container.get('NetworkSettings.Networks')
 
-        for network in self.networks:
+        for network, aliases in self.networks.items():
             if network in connected_networks:
                 self.client.disconnect_container_from_network(
                     container.id, network)
 
             self.client.connect_container_to_network(
                 container.id, network,
-                aliases=self._get_aliases(container),
+                aliases=list(self._get_aliases(container).union(aliases)),
                 links=self._get_links(False),
             )
 
@@ -472,7 +472,7 @@ class Service(object):
             'image_id': self.image()['Id'],
             'links': self.get_link_names(),
             'net': self.network_mode.id,
-            'networks': self.networks,
+            'networks': list(self.networks.keys()),
             'volumes_from': [
                 (v.source.name, v.mode)
                 for v in self.volumes_from if isinstance(v.source, Service)
@@ -513,9 +513,9 @@ class Service(object):
 
     def _get_aliases(self, container):
         if container.labels.get(LABEL_ONE_OFF) == "True":
-            return []
+            return set()
 
-        return [self.name, container.short_id]
+        return {self.name, container.short_id}
 
     def _get_links(self, link_to_self):
         links = {}
@@ -591,20 +591,19 @@ class Service(object):
             ports.append(port)
         container_options['ports'] = ports
 
-        override_options['binds'] = merge_volume_bindings(
-            container_options.get('volumes') or [],
-            previous_container)
-
-        if 'volumes' in container_options:
-            container_options['volumes'] = dict(
-                (v.internal, {}) for v in container_options['volumes'])
-
         container_options['environment'] = merge_environment(
             self.options.get('environment'),
             override_options.get('environment'))
 
-        if previous_container:
-            container_options['environment']['affinity:container'] = ('=' + previous_container.id)
+        binds, affinity = merge_volume_bindings(
+            container_options.get('volumes') or [],
+            previous_container)
+        override_options['binds'] = binds
+        container_options['environment'].update(affinity)
+
+        if 'volumes' in container_options:
+            container_options['volumes'] = dict(
+                (v.internal, {}) for v in container_options['volumes'])
 
         container_options['image'] = self.image_name
 
@@ -622,6 +621,8 @@ class Service(object):
             override_options,
             one_off=one_off)
 
+        container_options['environment'] = format_environment(
+            container_options['environment'])
         return container_options
 
     def _get_container_host_config(self, override_options, one_off=False):
@@ -875,18 +876,23 @@ def merge_volume_bindings(volumes, previous_container):
     """Return a list of volume bindings for a container. Container data volumes
     are replaced by those from the previous container.
     """
+    affinity = {}
+
     volume_bindings = dict(
         build_volume_binding(volume)
         for volume in volumes
         if volume.external)
 
     if previous_container:
-        data_volumes = get_container_data_volumes(previous_container, volumes)
-        warn_on_masked_volume(volumes, data_volumes, previous_container.service)
+        old_volumes = get_container_data_volumes(previous_container, volumes)
+        warn_on_masked_volume(volumes, old_volumes, previous_container.service)
         volume_bindings.update(
-            build_volume_binding(volume) for volume in data_volumes)
+            build_volume_binding(volume) for volume in old_volumes)
+
+        if old_volumes:
+            affinity = {'affinity:container': '=' + previous_container.id}
 
-    return list(volume_bindings.values())
+    return list(volume_bindings.values()), affinity
 
 
 def get_container_data_volumes(container, volumes_option):
@@ -923,7 +929,7 @@ def get_container_data_volumes(container, volumes_option):
             continue
 
         # Copy existing volume from old container
-        volume = volume._replace(external=mount['Source'])
+        volume = volume._replace(external=mount['Name'])
         volumes.append(volume)
 
     return volumes
@@ -1014,3 +1020,12 @@ def get_log_config(logging_dict):
         type=log_driver,
         config=log_options
     )
+
+
+# TODO: remove once fix is available in docker-py
+def format_environment(environment):
+    def format_env(key, value):
+        if value is None:
+            return key
+        return '{key}={value}'.format(key=key, value=value)
+    return [format_env(*item) for item in environment.items()]
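
The `format_environment` helper above converts the merged environment back into docker-py's list form, with `None` values becoming bare keys; standalone:

    def format_environment(environment):
        def format_env(key, value):
            if value is None:
                return key
            return '{key}={value}'.format(key=key, value=value)
        return [format_env(*item) for item in environment.items()]

    print(format_environment({'FOO': 'bar', 'SECRET': None}))
    # ['FOO=bar', 'SECRET']  (ordering may vary by Python version)
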
@@ -92,3 +92,7 @@ def json_hash(obj):
 
 def microseconds_from_time_nano(time_nano):
     return int(time_nano % 1000000000 / 1000)
+
+
+def build_string_dict(source_dict):
+    return dict((k, str(v)) for k, v in source_dict.items())
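
`build_string_dict` above is what makes numeric `driver_opts` and `build.args` values acceptable, by coercing them to strings before they reach the string-typed engine API; standalone:

    def build_string_dict(source_dict):
        return dict((k, str(v)) for k, v in source_dict.items())

    print(build_string_dict({'size': 10, 'mode': '0755'}))
    # {'size': '10', 'mode': '0755'}
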
@@ -69,7 +69,8 @@ class ProjectVolumes(object):
                 name=vol_name,
                 driver=data.get('driver'),
                 driver_opts=data.get('driver_opts'),
-                external_name=data.get('external_name'))
+                external_name=data.get('external_name')
+            )
             for vol_name, data in config_volumes.items()
         }
         return cls(volumes)
@@ -96,6 +97,11 @@ class ProjectVolumes(object):
                         )
                     )
                     continue
+                log.info(
+                    'Creating volume "{0}" with {1} driver'.format(
+                        volume.full_name, volume.driver or 'default'
+                    )
+                )
                 volume.create()
             except NotFound:
                 raise ConfigurationError(
@@ -33,7 +33,7 @@ def migrate(content):
 
     services = {name: data.pop(name) for name in data.keys()}
 
-    data['version'] = 2
+    data['version'] = "2"
     data['services'] = services
     create_volumes_section(data)
 
@@ -155,7 +155,7 @@ def parse_opts(args):
 
 
 def main(args):
-    logging.basicConfig(format='\033[33m%(levelname)s:\033[37m %(message)s\n')
+    logging.basicConfig(format='\033[33m%(levelname)s:\033[37m %(message)s\033[0m\n')
 
     opts = parse_opts(args)
 
@@ -5,9 +5,10 @@ RUN svn checkout https://github.com/docker/docker/trunk/docs /docs/content/engine
 RUN svn checkout https://github.com/docker/swarm/trunk/docs /docs/content/swarm
 RUN svn checkout https://github.com/docker/machine/trunk/docs /docs/content/machine
 RUN svn checkout https://github.com/docker/distribution/trunk/docs /docs/content/registry
-RUN svn checkout https://github.com/kitematic/kitematic/trunk/docs /docs/content/kitematic
-RUN svn checkout https://github.com/docker/tutorials/trunk/docs /docs/content/tutorials
-RUN svn checkout https://github.com/docker/opensource/trunk/docs /docs/content
+RUN svn checkout https://github.com/docker/notary/trunk/docs /docs/content/notary
+RUN svn checkout https://github.com/docker/kitematic/trunk/docs /docs/content/kitematic
+RUN svn checkout https://github.com/docker/toolbox/trunk/docs /docs/content/toolbox
+RUN svn checkout https://github.com/docker/opensource/trunk/docs /docs/content/project
 
 ENV PROJECT=compose
 # To get the git info for this repo
@@ -453,7 +453,7 @@ id.
 
 ### network_mode
 
-> [Version 2 file format](#version-1) only. In version 1, use [net](#net).
+> [Version 2 file format](#version-2) only. In version 1, use [net](#net).
 
 Network mode. Use the same values as the docker client `--net` parameter, plus
 the special form `service:[service name]`.
@@ -475,6 +475,54 @@ Networks to join, referencing entries under the
       - some-network
       - other-network
 
+#### aliases
+
+Aliases (alternative hostnames) for this service on the network. Other containers on the same network can use either the service name or this alias to connect to one of the service's containers.
+
+Since `aliases` is network-scoped, the same service can have different aliases on different networks.
+
+> **Note**: A network-wide alias can be shared by multiple containers, and even by multiple services. If it is, then exactly which container the name will resolve to is not guaranteed.
+
+The general format is shown here.
+
+    networks:
+      some-network:
+        aliases:
+          - alias1
+          - alias3
+      other-network:
+        aliases:
+          - alias2
+
+In the example below, three services are provided (`web`, `worker`, and `db`), along with two networks (`new` and `legacy`). The `db` service is reachable at the hostname `db` or `database` on the `new` network, and at `db` or `mysql` on the `legacy` network.
+
+    version: 2
+
+    services:
+      web:
+        build: ./web
+        networks:
+          - new
+
+      worker:
+        build: ./worker
+        networks:
+          - legacy
+
+      db:
+        image: mysql
+        networks:
+          new:
+            aliases:
+              - database
+          legacy:
+            aliases:
+              - mysql
+
+    networks:
+      new:
+      legacy:
+
 ### pid
 
     pid: "host"
@@ -534,10 +582,11 @@ limit as an integer or soft/hard limits as a mapping.
 ### volumes, volume\_driver
 
 Mount paths or named volumes, optionally specifying a path on the host machine
-(`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`). Named volumes can
-be specified with the
-[top-level `volumes` key](#volume-configuration-reference), but this is
-optional - the Docker Engine will create the volume if it doesn't exist.
+(`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`).
+For [version 2 files](#version-2), named volumes need to be specified with the
+[top-level `volumes` key](#volume-configuration-reference).
+When using [version 1](#version-1), the Docker Engine will create the named
+volume automatically if it doesn't exist.
 
 You can mount a relative path on the host, which will expand relative to
 the directory of the Compose configuration file being used. Relative paths
@@ -559,11 +608,16 @@ should always begin with `.` or `..`.
       # Named volume
       - datavolume:/var/lib/mysql
 
-If you use a volume name (instead of a volume path), you may also specify
-a `volume_driver`.
+If you do not use a host path, you may specify a `volume_driver`.
 
     volume_driver: mydriver
 
+Note that for [version 2 files](#version-2), this driver
+will not apply to named volumes (you should use the `driver` option when
+[declaring the volume](#volume-configuration-reference) instead).
+For [version 1](#version-1), both named volumes and container volumes will
+use the specified driver.
+
 > Note: No path expansion will be done if you have also specified a
 > `volume_driver`.
 
@@ -625,7 +679,7 @@ While it is possible to declare volumes on the fly as part of the service
 declaration, this section allows you to create named volumes that can be
 reused across multiple services (without relying on `volumes_from`), and are
 easily retrieved and inspected using the docker command line or API.
-See the [docker volume](http://docs.docker.com/reference/commandline/volume/)
+See the [docker volume](/engine/reference/commandline/volume_create.md)
 subcommand documentation for more information.
 
 ### driver
@@ -761,14 +815,14 @@ service's containers to it.
       networks:
         - default
 
-    networks
+    networks:
       outside:
         external: true
 
 You can also specify the name of the network separately from the name used to
 refer to it within the Compose file:
 
-    networks
+    networks:
       outside:
         external:
           name: actual-name-of-network
@@ -72,6 +72,8 @@ and a `docker-compose.yml` file.
 
 9. Add the following configuration to the file.
 
+       version: '2'
+       services:
        db:
          image: postgres
        web:
@@ -81,7 +83,7 @@ and a `docker-compose.yml` file.
          - .:/code
        ports:
          - "8000:8000"
-       links:
+       depends_on:
          - db
 
    This file defines two services: The `db` service and the `web` service.
@@ -129,7 +131,7 @@ In this step, you create a Django starter project by building the image from the
 
 In this section, you set up the database connection for Django.
 
-1. In your project dirctory, edit the `composeexample/settings.py` file.
+1. In your project directory, edit the `composeexample/settings.py` file.
 
 2. Replace the `DATABASES = ...` with the following:
 
@@ -95,13 +95,16 @@ Define a set of services using `docker-compose.yml`:
 
 1. Create a file called docker-compose.yml in your project directory and add
    the following:
 
+       version: '2'
+       services:
        web:
          build: .
          ports:
           - "5000:5000"
          volumes:
           - .:/code
-        links:
+        depends_on:
           - redis
        redis:
          image: redis
@@ -39,7 +39,7 @@ which the release page specifies, in your terminal.
 
 The following is an example command illustrating the format:
 
-    curl -L https://github.com/docker/compose/releases/download/1.6.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
+    curl -L https://github.com/docker/compose/releases/download/1.6.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
 
 If you have problems installing with `curl`, see
 [Alternative Install Options](#alternative-install-options).
@@ -54,7 +54,7 @@ which the release page specifies, in your terminal.
 7. Test the installation.
 
        $ docker-compose --version
-       docker-compose version: 1.6.0
+       docker-compose version: 1.6.1
 
 
 ## Alternative install options
@@ -77,7 +77,7 @@ to get started.
 Compose can also be run inside a container, from a small bash script wrapper.
 To install compose as a container run:
 
-    $ curl -L https://github.com/docker/compose/releases/download/1.6.0/run.sh > /usr/local/bin/docker-compose
+    $ curl -L https://github.com/docker/compose/releases/download/1.6.1/run.sh > /usr/local/bin/docker-compose
     $ chmod +x /usr/local/bin/docker-compose
 
 ## Master builds
@@ -76,7 +76,9 @@ See the [links reference](compose-file.md#links) for more information.
 
 ## Multi-host networking
 
-When deploying a Compose application to a Swarm cluster, you can make use of the built-in `overlay` driver to enable multi-host communication between containers with no changes to application code. Consult the [Getting started with multi-host networking](/engine/userguide/networking/get-started-overlay.md) to see how to set up the overlay driver, and then specify `driver: overlay` in your networking config (see the sections below for how to do this).
+When [deploying a Compose application to a Swarm cluster](swarm.md), you can make use of the built-in `overlay` driver to enable multi-host communication between containers with no changes to your Compose file or application code.
+
+Consult the [Getting started with multi-host networking](/engine/userguide/networking/get-started-overlay.md) to see how to set up a Swarm cluster. The cluster will use the `overlay` driver by default, but you can specify it explicitly if you prefer - see below for how to do this.
 
 ## Specifying custom networks
 
@@ -105,11 +107,11 @@ Here's an example Compose file defining two custom networks. The `proxy` service
 
     networks:
       front:
-        # Use the overlay driver for multi-host communication
-        driver: overlay
+        # Use a custom driver
+        driver: custom-driver-1
       back:
         # Use a custom driver which takes special options
-        driver: my-custom-driver
+        driver: custom-driver-2
         driver_opts:
           foo: "1"
           bar: "2"
@@ -135,8 +137,8 @@ Instead of (or as well as) specifying your own networks, you can also change the
 
     networks:
       default:
-        # Use the overlay driver for multi-host communication
-        driver: overlay
+        # Use a custom driver
+        driver: custom-driver-1
 
 ## Using a pre-existing network
 
@@ -60,7 +60,7 @@ recreating any services which `web` depends on.
 You can use Compose to deploy an app to a remote Docker host by setting the
 `DOCKER_HOST`, `DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` environment variables
 appropriately. For tasks like this,
-[Docker Machine](https://docs.docker.com/machine/) makes managing local and
+[Docker Machine](/machine/overview) makes managing local and
 remote Docker hosts very easy, and is recommended even if you're not deploying
 remotely.
 
@@ -69,14 +69,12 @@ commands will work with no further configuration.
 
 ### Running Compose on a Swarm cluster
 
-[Docker Swarm](https://docs.docker.com/swarm/), a Docker-native clustering
+[Docker Swarm](/swarm/overview), a Docker-native clustering
 system, exposes the same API as a single Docker host, which means you can use
 Compose against a Swarm instance and run your apps across multiple hosts.
 
-Compose/Swarm integration is still in the experimental stage, and Swarm is still
-in beta, but if you'd like to explore and experiment, check out the <a
-href="https://github.com/docker/compose/blob/master/SWARM.md">integration
-guide</a>.
+Compose/Swarm integration is still in the experimental stage, but if you'd like
+to explore and experiment, check out the [integration guide](swarm.md).
 
 ## Compose documentation
 
@@ -43,6 +43,8 @@ You'll need an empty `Gemfile.lock` in order to build our `Dockerfile`.
 
 Finally, `docker-compose.yml` is where the magic happens. This file describes the services that comprise your app (a database and a web app), how to get each one's Docker image (the database just runs on a pre-made PostgreSQL image, and the web app is built from the current directory), and the configuration needed to link them together and expose the web app's port.
 
+    version: '2'
+    services:
     db:
       image: postgres
     web:
@@ -52,7 +54,7 @@ Finally, `docker-compose.yml` is where the magic happens. This file describes th
         - .:/myapp
       ports:
         - "3000:3000"
-      links:
+      depends_on:
         - db
 
 ### Build the project
docs/swarm.md (new file)
@@ -0,0 +1,184 @@
+<!--[metadata]>
++++
+title = "Using Compose with Swarm"
+description = "How to use Compose and Swarm together to deploy apps to multi-host clusters"
+keywords = ["documentation, docs, docker, compose, orchestration, containers, swarm"]
+[menu.main]
+parent="workw_compose"
++++
+<![end-metadata]-->
+
+
+# Using Compose with Swarm
+
+Docker Compose and [Docker Swarm](/swarm/overview) aim to have full integration, meaning
+you can point a Compose app at a Swarm cluster and have it all just work as if
+you were using a single Docker host.
+
+The actual extent of integration depends on which version of the [Compose file
+format](compose-file.md#versioning) you are using:
+
+1. If you're using version 1 along with `links`, your app will work, but Swarm
+   will schedule all containers on one host, because links between containers
+   do not work across hosts with the old networking system.
+
+2. If you're using version 2, your app should work with no changes:
+
+    - subject to the [limitations](#limitations) described below,
+
+    - as long as the Swarm cluster is configured to use the [overlay
+      driver](/engine/userguide/networking/dockernetworks.md#an-overlay-network),
+      or a custom driver which supports multi-host networking.
+
+Read the [Getting started with multi-host
+networking](/engine/userguide/networking/get-started-overlay.md) to see how to
+set up a Swarm cluster with [Docker Machine](/machine/overview) and the overlay driver.
+Once you've got it running, deploying your app to it should be as simple as:
+
+    $ eval "$(docker-machine env --swarm <name of swarm master machine>)"
+    $ docker-compose up
+
+
+## Limitations
+
+### Building images
+
+Swarm can build an image from a Dockerfile just like a single-host Docker
+instance can, but the resulting image will only live on a single node and won't
+be distributed to other nodes.
+
+If you want to use Compose to scale the service in question to multiple nodes,
+you'll have to build it yourself, push it to a registry (e.g. the Docker Hub)
+and reference it from `docker-compose.yml`:
+
+    $ docker build -t myusername/web .
+    $ docker push myusername/web
+
+    $ cat docker-compose.yml
+    web:
+      image: myusername/web
+
+    $ docker-compose up -d
+    $ docker-compose scale web=3
+
+### Multiple dependencies
+
+If a service has multiple dependencies of the type which force co-scheduling
+(see [Automatic scheduling](#automatic-scheduling) below), it's possible that
+Swarm will schedule the dependencies on different nodes, making the dependent
+service impossible to schedule. For example, here `foo` needs to be co-scheduled
+with `bar` and `baz`:
+
+    version: "2"
+    services:
+      foo:
+        image: foo
+        volumes_from: ["bar"]
+        network_mode: "service:baz"
+      bar:
+        image: bar
+      baz:
+        image: baz
+
+The problem is that Swarm might first schedule `bar` and `baz` on different
+nodes (since they're not dependent on one another), making it impossible to
+pick an appropriate node for `foo`.
+
+To work around this, use [manual scheduling](#manual-scheduling) to ensure that
+all three services end up on the same node:
+
+    version: "2"
+    services:
+      foo:
+        image: foo
+        volumes_from: ["bar"]
+        network_mode: "service:baz"
+        environment:
+          - "constraint:node==node-1"
+      bar:
+        image: bar
+        environment:
+          - "constraint:node==node-1"
+      baz:
+        image: baz
+        environment:
+          - "constraint:node==node-1"
+
+### Host ports and recreating containers
+
+If a service maps a port from the host, e.g. `80:8000`, then you may get an
+error like this when running `docker-compose up` on it after the first time:
+
+    docker: Error response from daemon: unable to find a node that satisfies
+    container==6ab2dfe36615ae786ef3fc35d641a260e3ea9663d6e69c5b70ce0ca6cb373c02.
+
+The usual cause of this error is that the container has a volume (defined either
+in its image or in the Compose file) without an explicit mapping, and so in
+order to preserve its data, Compose has directed Swarm to schedule the new
+container on the same node as the old container. This results in a port clash.
+
+There are two viable workarounds for this problem:
+
+-   Specify a named volume, and use a volume driver which is capable of mounting
+    the volume into the container regardless of what node it's scheduled on.
+
+    Compose does not give Swarm any specific scheduling instructions if a
+    service uses only named volumes.
+
+        version: "2"
+
+        services:
+          web:
+            build: .
+            ports:
+              - "80:8000"
+            volumes:
+              - web-logs:/var/log/web
+
+        volumes:
+          web-logs:
+            driver: custom-volume-driver
+
+-   Remove the old container before creating the new one. You will lose any data
+    in the volume.
+
+        $ docker-compose stop web
+        $ docker-compose rm -f web
+        $ docker-compose up web
+
+
+## Scheduling containers
+
+### Automatic scheduling
+
+Some configuration options will result in containers being automatically
+scheduled on the same Swarm node to ensure that they work correctly. These are:
+
+-   `network_mode: "service:..."` and `network_mode: "container:..."` (and
+    `net: "container:..."` in the version 1 file format).
+
+-   `volumes_from`
+
+-   `links`
+
+### Manual scheduling
+
+Swarm offers a rich set of scheduling and affinity hints, enabling you to
+control where containers are located. They are specified via container
+environment variables, so you can use Compose's `environment` option to set
+them.
+
+    # Schedule containers on a specific node
+    environment:
+      - "constraint:node==node-1"
+
+    # Schedule containers on a node that has the 'storage' label set to 'ssd'
+    environment:
+      - "constraint:storage==ssd"
+
+    # Schedule containers where the 'redis' image is already pulled
+    environment:
+      - "affinity:image==redis"
+
+For the full set of available filters and expressions, see the [Swarm
+documentation](/swarm/scheduler/filter.md).
@@ -41,12 +41,14 @@ and WordPress.
 Next you'll create a `docker-compose.yml` file that will start your web service
 and a separate MySQL instance:
 
+    version: '2'
+    services:
     web:
       build: .
       command: php -S 0.0.0.0:8000 -t /code
       ports:
         - "8000:8000"
-      links:
+      depends_on:
         - db
       volumes:
         - .:/code
@@ -1 +1 @@
-pyinstaller==3.0
+pyinstaller==3.1.1
@@ -1,6 +1,6 @@
 PyYAML==3.11
 cached-property==1.2.0
-docker-py==1.7.0
+docker-py==1.7.1
 dockerpty==0.4.1
 docopt==0.6.1
 enum34==1.0.4
@@ -15,7 +15,7 @@
 
 set -e
 
-VERSION="1.6.0"
+VERSION="1.6.1"
 IMAGE="docker/compose:$VERSION"
 
 
@@ -31,7 +31,9 @@ fi
 
 
 # Setup volume mounts for compose config and context
-VOLUMES="-v $(pwd):$(pwd)"
+if [ "$(pwd)" != '/' ]; then
+    VOLUMES="-v $(pwd):$(pwd)"
+fi
 if [ -n "$COMPOSE_FILE" ]; then
     compose_dir=$(dirname $COMPOSE_FILE)
 fi
@@ -45,9 +47,10 @@ fi
 
 # Only allocate tty if we detect one
 if [ -t 1 ]; then
-    DOCKER_RUN_OPTIONS="-ti"
-else
-    DOCKER_RUN_OPTIONS="-i"
+    DOCKER_RUN_OPTIONS="-t"
+fi
+if [ -t 0 ]; then
+    DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -i"
 fi
 
-exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w $(pwd) $IMAGE $@
+exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"
@ -159,7 +159,7 @@ class CLITestCase(DockerClientTestCase):
|
||||
'-f', 'tests/fixtures/invalid-composefile/invalid.yml',
|
||||
'config', '-q'
|
||||
], returncode=1)
|
||||
assert "'notaservice' doesn't have any configuration" in result.stderr
|
||||
assert "'notaservice' must be a mapping" in result.stderr
|
||||
|
||||
# TODO: this shouldn't be v2-dependent
|
||||
@v2_only()
|
||||
@ -185,7 +185,7 @@ class CLITestCase(DockerClientTestCase):
|
||||
'build': {
|
||||
'context': os.path.abspath(self.base_dir),
|
||||
},
|
||||
'networks': ['front', 'default'],
|
||||
'networks': {'front': None, 'default': None},
|
||||
'volumes_from': ['service:other:rw'],
|
||||
},
|
||||
'other': {
|
||||
@ -445,6 +445,34 @@ class CLITestCase(DockerClientTestCase):

        assert networks[0]['Options']['com.docker.network.bridge.enable_icc'] == 'false'

+    @v2_only()
+    def test_up_with_network_aliases(self):
+        filename = 'network-aliases.yml'
+        self.base_dir = 'tests/fixtures/networks'
+        self.dispatch(['-f', filename, 'up', '-d'], None)
+        back_name = '{}_back'.format(self.project.name)
+        front_name = '{}_front'.format(self.project.name)
+
+        networks = [
+            n for n in self.client.networks()
+            if n['Name'].startswith('{}_'.format(self.project.name))
+        ]
+
+        # Two networks were created: back and front
+        assert sorted(n['Name'] for n in networks) == [back_name, front_name]
+        web_container = self.project.get_service('web').containers()[0]
+
+        back_aliases = web_container.get(
+            'NetworkSettings.Networks.{}.Aliases'.format(back_name)
+        )
+        assert 'web' in back_aliases
+        front_aliases = web_container.get(
+            'NetworkSettings.Networks.{}.Aliases'.format(front_name)
+        )
+        assert 'web' in front_aliases
+        assert 'forward_facing' in front_aliases
+        assert 'ahead' in front_aliases
+
    @v2_only()
    def test_up_with_networks(self):
        self.base_dir = 'tests/fixtures/networks'
@ -718,6 +746,12 @@ class CLITestCase(DockerClientTestCase):
        os.kill(proc.pid, signal.SIGTERM)
        wait_on_condition(ContainerCountCondition(self.project, 0))

+    def test_up_handles_abort_on_container_exit(self):
+        start_process(self.base_dir, ['up', '--abort-on-container-exit'])
+        wait_on_condition(ContainerCountCondition(self.project, 2))
+        self.project.stop(['simple'])
+        wait_on_condition(ContainerCountCondition(self.project, 0))
+
    def test_run_service_without_links(self):
        self.base_dir = 'tests/fixtures/links-composefile'
        self.dispatch(['run', 'console', '/bin/true'])
@ -738,6 +772,15 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(len(db.containers()), 1)
        self.assertEqual(len(console.containers()), 0)

+    @v2_only()
+    def test_run_service_with_dependencies(self):
+        self.base_dir = 'tests/fixtures/v2-dependencies'
+        self.dispatch(['run', 'web', '/bin/true'], None)
+        db = self.project.get_service('db')
+        console = self.project.get_service('console')
+        self.assertEqual(len(db.containers()), 1)
+        self.assertEqual(len(console.containers()), 0)
+
    def test_run_with_no_deps(self):
        self.base_dir = 'tests/fixtures/links-composefile'
        self.dispatch(['run', '--no-deps', 'web', '/bin/true'])
16
tests/fixtures/networks/network-aliases.yml
vendored
Normal file
@ -0,0 +1,16 @@
+version: "2"
+
+services:
+  web:
+    image: busybox
+    command: top
+    networks:
+      front:
+        aliases:
+          - forward_facing
+          - ahead
+      back:
+
+networks:
+  front: {}
+  back: {}
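Once this fixture is up, the aliases appear in the container's network settings, which is what the new acceptance test above asserts. A quick manual check with docker-py (the version pinned earlier in this commit); the container and network names below assume a project named `networks` and are illustrative, not part of the commit:

from docker import Client  # docker-py, pinned to 1.7.1 in this commit

client = Client(base_url='unix://var/run/docker.sock')
# Assumed names: project 'networks', service 'web', network 'front'.
settings = client.inspect_container('networks_web_1')['NetworkSettings']
aliases = settings['Networks']['networks_front']['Aliases']
assert {'web', 'forward_facing', 'ahead'} <= set(aliases)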
13
tests/fixtures/v2-dependencies/docker-compose.yml
vendored
Normal file
@ -0,0 +1,13 @@
+version: "2.0"
+services:
+  db:
+    image: busybox:latest
+    command: top
+  web:
+    image: busybox:latest
+    command: top
+    depends_on:
+      - db
+  console:
+    image: busybox:latest
+    command: top
@ -565,7 +565,7 @@ class ProjectTest(DockerClientTestCase):
                'name': 'web',
                'image': 'busybox:latest',
                'command': 'top',
-                'networks': ['foo', 'bar', 'baz'],
+                'networks': {'foo': None, 'bar': None, 'baz': None},
            }],
            volumes={},
            networks={

@ -598,7 +598,7 @@ class ProjectTest(DockerClientTestCase):
            services=[{
                'name': 'web',
                'image': 'busybox:latest',
-                'networks': ['front'],
+                'networks': {'front': None},
            }],
            volumes={},
            networks={
@ -266,6 +266,30 @@ class ServiceTest(DockerClientTestCase):
            self.client.inspect_container,
            old_container.id)

+    def test_execute_convergence_plan_recreate_twice(self):
+        service = self.create_service(
+            'db',
+            volumes=[VolumeSpec.parse('/etc')],
+            entrypoint=['top'],
+            command=['-d', '1'])
+
+        orig_container = service.create_container()
+        service.start_container(orig_container)
+
+        orig_container.inspect()  # reload volume data
+        volume_path = orig_container.get_mount('/etc')['Source']
+
+        # Do this twice to reproduce the bug
+        for _ in range(2):
+            new_container, = service.execute_convergence_plan(
+                ConvergencePlan('recreate', [orig_container]))
+
+            assert new_container.get_mount('/etc')['Source'] == volume_path
+            assert ('affinity:container==%s' % orig_container.id in
+                    new_container.get('Config.Env'))
+
+            orig_container = new_container
+
    def test_execute_convergence_plan_when_containers_are_stopped(self):
        service = self.create_service(
            'db',
@ -885,7 +909,7 @@ class ServiceTest(DockerClientTestCase):
            'FILE_DEF': 'F1',
            'FILE_DEF_EMPTY': '',
            'ENV_DEF': 'E3',
-            'NO_DEF': ''
+            'NO_DEF': None
        }.items():
            self.assertEqual(env[k], v)
@ -138,9 +138,10 @@ class CLITestCase(unittest.TestCase):
        })

        _, _, call_kwargs = mock_client.create_container.mock_calls[0]
-        self.assertEqual(
-            call_kwargs['environment'],
-            {'FOO': 'ONE', 'BAR': 'NEW', 'OTHER': u'bär'})
+        assert (
+            sorted(call_kwargs['environment']) ==
+            sorted(['FOO=ONE', 'BAR=NEW', 'OTHER=bär'])
+        )

    def test_run_service_with_restart_always(self):
        command = TopLevelCommand()
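The environment changes above all hinge on one distinction: a variable that resolved to `None` is passed to Docker as a bare key without a value, while an explicit empty string still serializes as `KEY=`. A sketch of that serialization rule; the helper name is hypothetical, not Compose's actual function:

def format_env(key, value):
    # None means "declared but unset": emit the bare key (the
    # changelog's "key without a value") rather than forcing ''.
    if value is None:
        return key
    return '{}={}'.format(key, value)

assert format_env('NO_DEF', None) == 'NO_DEF'
assert format_env('FILE_DEF_EMPTY', '') == 'FILE_DEF_EMPTY='
assert format_env('BAR', 'NEW') == 'BAR=NEW'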
@ -231,6 +231,57 @@ class ConfigTest(unittest.TestCase):
        assert volumes['simple'] == {}
        assert volumes['other'] == {}

+    def test_named_volume_numeric_driver_opt(self):
+        config_details = build_config_details({
+            'version': '2',
+            'services': {
+                'simple': {'image': 'busybox'}
+            },
+            'volumes': {
+                'simple': {'driver_opts': {'size': 42}},
+            }
+        })
+        cfg = config.load(config_details)
+        assert cfg.volumes['simple']['driver_opts']['size'] == '42'
+
+    def test_volume_invalid_driver_opt(self):
+        config_details = build_config_details({
+            'version': '2',
+            'services': {
+                'simple': {'image': 'busybox'}
+            },
+            'volumes': {
+                'simple': {'driver_opts': {'size': True}},
+            }
+        })
+        with pytest.raises(ConfigurationError) as exc:
+            config.load(config_details)
+        assert 'driver_opts.size contains an invalid type' in exc.exconly()
+
+    def test_named_volume_invalid_type_list(self):
+        config_details = build_config_details({
+            'version': '2',
+            'services': {
+                'simple': {'image': 'busybox'}
+            },
+            'volumes': []
+        })
+        with pytest.raises(ConfigurationError) as exc:
+            config.load(config_details)
+        assert "volume must be a mapping, not an array" in exc.exconly()
+
+    def test_networks_invalid_type_list(self):
+        config_details = build_config_details({
+            'version': '2',
+            'services': {
+                'simple': {'image': 'busybox'}
+            },
+            'networks': []
+        })
+        with pytest.raises(ConfigurationError) as exc:
+            config.load(config_details)
+        assert "network must be a mapping, not an array" in exc.exconly()
+
    def test_load_service_with_name_version(self):
        with mock.patch('compose.config.config.log') as mock_logging:
            config_data = config.load(
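The two `driver_opts` tests above pin down an asymmetry: numeric values are coerced to the strings the Docker API expects, while booleans are rejected outright. The rule in isolation, with a hypothetical helper name rather than Compose's real validator:

def normalize_driver_opt(value):
    # Check bool before int: in Python, True is an instance of int,
    # but booleans are invalid driver_opts values.
    if isinstance(value, bool):
        raise ValueError('driver_opts contains an invalid type')
    if isinstance(value, (int, float)):
        return str(value)
    return value

assert normalize_driver_opt(42) == '42'
assert normalize_driver_opt('42') == '42'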
@ -341,8 +392,28 @@
            'filename.yml')
        with pytest.raises(ConfigurationError) as exc:
            config.load(config_details)
-        error_msg = "service 'web' doesn't have any configuration options"
-        assert error_msg in exc.exconly()
+        assert "service 'web' must be a mapping not a string." in exc.exconly()
+
+    def test_load_with_empty_build_args(self):
+        config_details = build_config_details(
+            {
+                'version': '2',
+                'services': {
+                    'web': {
+                        'build': {
+                            'context': '.',
+                            'args': None,
+                        },
+                    },
+                },
+            }
+        )
+        with pytest.raises(ConfigurationError) as exc:
+            config.load(config_details)
+        assert (
+            "services.web.build.args contains an invalid type, it should be an "
+            "array, or an object" in exc.exconly()
+        )

    def test_config_integer_service_name_raise_validation_error(self):
        with pytest.raises(ConfigurationError) as excinfo:
@ -354,8 +425,10 @@
            )
        )

-        assert "In file 'filename.yml' service name: 1 needs to be a string, eg '1'" \
-            in excinfo.exconly()
+        assert (
+            "In file 'filename.yml', the service name 1 must be a quoted string, i.e. '1'" in
+            excinfo.exconly()
+        )

    def test_config_integer_service_name_raise_validation_error_v2(self):
        with pytest.raises(ConfigurationError) as excinfo:

@ -370,8 +443,10 @@
            )
        )

-        assert "In file 'filename.yml' service name: 1 needs to be a string, eg '1'" \
-            in excinfo.exconly()
+        assert (
+            "In file 'filename.yml', the service name 1 must be a quoted string, i.e. '1'." in
+            excinfo.exconly()
+        )

    def test_load_with_multiple_files_v1(self):
        base_file = config.ConfigFile(
@ -505,7 +580,7 @@

        with pytest.raises(ConfigurationError) as exc:
            config.load(details)
-        assert "service 'bogus' doesn't have any configuration" in exc.exconly()
+        assert "service 'bogus' must be a mapping not a string." in exc.exconly()
        assert "In file 'override.yaml'" in exc.exconly()

    def test_load_sorts_in_dependency_order(self):
@ -594,6 +669,70 @@
        self.assertTrue('context' in service[0]['build'])
        self.assertEqual(service[0]['build']['dockerfile'], 'Dockerfile-alt')

+    def test_load_with_buildargs(self):
+        service = config.load(
+            build_config_details(
+                {
+                    'version': '2',
+                    'services': {
+                        'web': {
+                            'build': {
+                                'context': '.',
+                                'dockerfile': 'Dockerfile-alt',
+                                'args': {
+                                    'opt1': 42,
+                                    'opt2': 'foobar'
+                                }
+                            }
+                        }
+                    }
+                },
+                'tests/fixtures/extends',
+                'filename.yml'
+            )
+        ).services[0]
+        assert 'args' in service['build']
+        assert 'opt1' in service['build']['args']
+        assert isinstance(service['build']['args']['opt1'], str)
+        assert service['build']['args']['opt1'] == '42'
+        assert service['build']['args']['opt2'] == 'foobar'
+
+    def test_load_with_multiple_files_mismatched_networks_format(self):
+        base_file = config.ConfigFile(
+            'base.yaml',
+            {
+                'version': '2',
+                'services': {
+                    'web': {
+                        'image': 'example/web',
+                        'networks': {
+                            'foobar': {'aliases': ['foo', 'bar']}
+                        }
+                    }
+                },
+                'networks': {'foobar': {}, 'baz': {}}
+            }
+        )
+
+        override_file = config.ConfigFile(
+            'override.yaml',
+            {
+                'version': '2',
+                'services': {
+                    'web': {
+                        'networks': ['baz']
+                    }
+                }
+            }
+        )
+
+        details = config.ConfigDetails('.', [base_file, override_file])
+        web_service = config.load(details).services[0]
+        assert web_service['networks'] == {
+            'foobar': {'aliases': ['foo', 'bar']},
+            'baz': None
+        }
+
    def test_load_with_multiple_files_v2(self):
        base_file = config.ConfigFile(
            'base.yaml',
@ -961,7 +1100,7 @@

    @mock.patch('compose.config.validation.log')
    def test_logs_warning_for_boolean_in_environment(self, mock_logging):
-        expected_warning_msg = "There is a boolean value in the 'environment' key."
+        expected_warning_msg = "There is a boolean value in the 'environment'"
        config.load(
            build_config_details(
                {'web': {
@ -1079,6 +1218,39 @@
            'extends': {'service': 'foo'}
        }

+    def test_merge_build_args(self):
+        base = {
+            'build': {
+                'context': '.',
+                'args': {
+                    'ONE': '1',
+                    'TWO': '2',
+                },
+            }
+        }
+        override = {
+            'build': {
+                'args': {
+                    'TWO': 'dos',
+                    'THREE': '3',
+                },
+            }
+        }
+        actual = config.merge_service_dicts(
+            base,
+            override,
+            DEFAULT_VERSION)
+        assert actual == {
+            'build': {
+                'context': '.',
+                'args': {
+                    'ONE': '1',
+                    'TWO': 'dos',
+                    'THREE': '3',
+                },
+            }
+        }
+
    def test_external_volume_config(self):
        config_details = build_config_details({
            'version': '2',
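`test_merge_build_args` above expects plain per-key override semantics: keys from both files survive, and the override file wins on conflicts. That is an ordinary mapping update, sketched here independently of Compose's real merge code:

def merge_mappings(base, override):
    # Keep every key from both sides; the override value wins where
    # both files define the same key.
    merged = dict(base)
    merged.update(override)
    return merged

assert merge_mappings(
    {'ONE': '1', 'TWO': '2'},
    {'TWO': 'dos', 'THREE': '3'},
) == {'ONE': '1', 'TWO': 'dos', 'THREE': '3'}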
@ -1136,6 +1308,17 @@
            config.load(config_details)
        assert "Service 'one' depends on service 'three'" in exc.exconly()

+    def test_load_dockerfile_without_context(self):
+        config_details = build_config_details({
+            'version': '2',
+            'services': {
+                'one': {'build': {'dockerfile': 'Dockerfile.foo'}},
+            },
+        })
+        with pytest.raises(ConfigurationError) as exc:
+            config.load(config_details)
+        assert 'one.build is invalid, context is required.' in exc.exconly()
+

class NetworkModeTest(unittest.TestCase):
    def test_network_mode_standard(self):
@ -1506,57 +1689,54 @@ class VolumeConfigTest(unittest.TestCase):


class MergePathMappingTest(object):
-    def config_name(self):
-        return ""
+    config_name = ""

    def test_empty(self):
        service_dict = config.merge_service_dicts({}, {}, DEFAULT_VERSION)
-        assert self.config_name() not in service_dict
+        assert self.config_name not in service_dict

    def test_no_override(self):
        service_dict = config.merge_service_dicts(
-            {self.config_name(): ['/foo:/code', '/data']},
+            {self.config_name: ['/foo:/code', '/data']},
            {},
            DEFAULT_VERSION)
-        assert set(service_dict[self.config_name()]) == set(['/foo:/code', '/data'])
+        assert set(service_dict[self.config_name]) == set(['/foo:/code', '/data'])

    def test_no_base(self):
        service_dict = config.merge_service_dicts(
            {},
-            {self.config_name(): ['/bar:/code']},
+            {self.config_name: ['/bar:/code']},
            DEFAULT_VERSION)
-        assert set(service_dict[self.config_name()]) == set(['/bar:/code'])
+        assert set(service_dict[self.config_name]) == set(['/bar:/code'])

    def test_override_explicit_path(self):
        service_dict = config.merge_service_dicts(
-            {self.config_name(): ['/foo:/code', '/data']},
-            {self.config_name(): ['/bar:/code']},
+            {self.config_name: ['/foo:/code', '/data']},
+            {self.config_name: ['/bar:/code']},
            DEFAULT_VERSION)
-        assert set(service_dict[self.config_name()]) == set(['/bar:/code', '/data'])
+        assert set(service_dict[self.config_name]) == set(['/bar:/code', '/data'])

    def test_add_explicit_path(self):
        service_dict = config.merge_service_dicts(
-            {self.config_name(): ['/foo:/code', '/data']},
-            {self.config_name(): ['/bar:/code', '/quux:/data']},
+            {self.config_name: ['/foo:/code', '/data']},
+            {self.config_name: ['/bar:/code', '/quux:/data']},
            DEFAULT_VERSION)
-        assert set(service_dict[self.config_name()]) == set(['/bar:/code', '/quux:/data'])
+        assert set(service_dict[self.config_name]) == set(['/bar:/code', '/quux:/data'])

    def test_remove_explicit_path(self):
        service_dict = config.merge_service_dicts(
-            {self.config_name(): ['/foo:/code', '/quux:/data']},
-            {self.config_name(): ['/bar:/code', '/data']},
+            {self.config_name: ['/foo:/code', '/quux:/data']},
+            {self.config_name: ['/bar:/code', '/data']},
            DEFAULT_VERSION)
-        assert set(service_dict[self.config_name()]) == set(['/bar:/code', '/data'])
+        assert set(service_dict[self.config_name]) == set(['/bar:/code', '/data'])


class MergeVolumesTest(unittest.TestCase, MergePathMappingTest):
-    def config_name(self):
-        return 'volumes'
+    config_name = 'volumes'


class MergeDevicesTest(unittest.TestCase, MergePathMappingTest):
-    def config_name(self):
-        return 'devices'
+    config_name = 'devices'


class BuildOrImageMergeTest(unittest.TestCase):
@ -1594,30 +1774,49 @@ class BuildOrImageMergeTest(unittest.TestCase):
    )


-class MergeListsTest(unittest.TestCase):
+class MergeListsTest(object):
+    config_name = ""
+    base_config = []
+    override_config = []
+
+    def merged_config(self):
+        return set(self.base_config) | set(self.override_config)
+
    def test_empty(self):
-        assert 'ports' not in config.merge_service_dicts({}, {}, DEFAULT_VERSION)
+        assert self.config_name not in config.merge_service_dicts({}, {}, DEFAULT_VERSION)

    def test_no_override(self):
        service_dict = config.merge_service_dicts(
-            {'ports': ['10:8000', '9000']},
+            {self.config_name: self.base_config},
            {},
            DEFAULT_VERSION)
-        assert set(service_dict['ports']) == set(['10:8000', '9000'])
+        assert set(service_dict[self.config_name]) == set(self.base_config)

    def test_no_base(self):
        service_dict = config.merge_service_dicts(
            {},
-            {'ports': ['10:8000', '9000']},
+            {self.config_name: self.base_config},
            DEFAULT_VERSION)
-        assert set(service_dict['ports']) == set(['10:8000', '9000'])
+        assert set(service_dict[self.config_name]) == set(self.base_config)

    def test_add_item(self):
        service_dict = config.merge_service_dicts(
-            {'ports': ['10:8000', '9000']},
-            {'ports': ['20:8000']},
+            {self.config_name: self.base_config},
+            {self.config_name: self.override_config},
            DEFAULT_VERSION)
-        assert set(service_dict['ports']) == set(['10:8000', '9000', '20:8000'])
+        assert set(service_dict[self.config_name]) == set(self.merged_config())
+
+
+class MergePortsTest(unittest.TestCase, MergeListsTest):
+    config_name = 'ports'
+    base_config = ['10:8000', '9000']
+    override_config = ['20:8000']
+
+
+class MergeNetworksTest(unittest.TestCase, MergeListsTest):
+    config_name = 'networks'
+    base_config = ['frontend', 'backend']
+    override_config = ['monitoring']


class MergeStringsOrListsTest(unittest.TestCase):
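With the mixin refactor above, covering another union-merged list key is a three-attribute subclass. A hypothetical example, assuming (purely for illustration, this diff does not establish it) that a `dns` key merges with the same semantics:

import unittest

class MergeDNSTest(unittest.TestCase, MergeListsTest):
    # Reuses the MergeListsTest mixin defined in the hunk above.
    config_name = 'dns'
    base_config = ['8.8.8.8']
    override_config = ['9.9.9.9']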
@ -1776,7 +1975,7 @@ class EnvTest(unittest.TestCase):
        }
        self.assertEqual(
            resolve_environment(service_dict),
-            {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''},
+            {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': None},
        )

    def test_resolve_environment_from_env_file(self):
@ -1817,7 +2016,7 @@
                'FILE_DEF': u'bär',
                'FILE_DEF_EMPTY': '',
                'ENV_DEF': 'E3',
-                'NO_DEF': ''
+                'NO_DEF': None
            },
        )
@ -1836,7 +2035,7 @@
        }
        self.assertEqual(
            resolve_build_args(build),
-            {'arg1': 'value1', 'empty_arg': '', 'env_arg': 'value2', 'no_env': ''},
+            {'arg1': 'value1', 'empty_arg': '', 'env_arg': 'value2', 'no_env': None},
        )

    @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
@ -12,8 +12,9 @@ from compose.container import get_container_name
class ContainerTest(unittest.TestCase):

    def setUp(self):
+        self.container_id = "abcabcabcbabc12345"
        self.container_dict = {
-            "Id": "abc",
+            "Id": self.container_id,
            "Image": "busybox:latest",
            "Command": "top",
            "Created": 1387384730,
@ -41,19 +42,22 @@ class ContainerTest(unittest.TestCase):
        self.assertEqual(
            container.dictionary,
            {
-                "Id": "abc",
+                "Id": self.container_id,
                "Image": "busybox:latest",
                "Name": "/composetest_db_1",
            })

    def test_from_ps_prefixed(self):
-        self.container_dict['Names'] = ['/swarm-host-1' + n for n in self.container_dict['Names']]
+        self.container_dict['Names'] = [
+            '/swarm-host-1' + n for n in self.container_dict['Names']
+        ]

-        container = Container.from_ps(None,
+        container = Container.from_ps(
+            None,
            self.container_dict,
            has_been_inspected=True)
        self.assertEqual(container.dictionary, {
-            "Id": "abc",
+            "Id": self.container_id,
            "Image": "busybox:latest",
            "Name": "/composetest_db_1",
        })

@ -142,6 +146,10 @@ class ContainerTest(unittest.TestCase):
        self.assertEqual(container.get('HostConfig.VolumesFrom'), ["volume_id"])
        self.assertEqual(container.get('Foo.Bar.DoesNotExist'), None)

+    def test_short_id(self):
+        container = Container(None, self.container_dict, has_been_inspected=True)
+        assert container.short_id == self.container_id[:12]


class GetContainerNameTestCase(unittest.TestCase):
@ -438,7 +438,7 @@ class ProjectTest(unittest.TestCase):
                {
                    'name': 'foo',
                    'image': 'busybox:latest',
-                    'networks': ['custom']
+                    'networks': {'custom': None}
                },
            ],
            networks={'custom': {}},
@ -267,13 +267,52 @@ class ServiceTest(unittest.TestCase):
        self.assertEqual(
            opts['labels'][LABEL_CONFIG_HASH],
            'f8bfa1058ad1f4231372a0b1639f0dfdb574dafff4e8d7938049ae993f7cf1fc')
-        self.assertEqual(
-            opts['environment'],
-            {
-                'affinity:container': '=ababab',
-                'also': 'real',
-            }
-        )
+        assert opts['environment'] == ['also=real']

+    def test_get_container_create_options_sets_affinity_with_binds(self):
+        service = Service(
+            'foo',
+            image='foo',
+            client=self.mock_client,
+        )
+        self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
+        prev_container = mock.Mock(
+            id='ababab',
+            image_config={'ContainerConfig': {'Volumes': ['/data']}})
+
+        def container_get(key):
+            return {
+                'Mounts': [
+                    {
+                        'Destination': '/data',
+                        'Source': '/some/path',
+                        'Name': 'abab1234',
+                    },
+                ]
+            }.get(key, None)
+
+        prev_container.get.side_effect = container_get
+
+        opts = service._get_container_create_options(
+            {},
+            1,
+            previous_container=prev_container)
+
+        assert opts['environment'] == ['affinity:container==ababab']
+
+    def test_get_container_create_options_no_affinity_without_binds(self):
+        service = Service('foo', image='foo', client=self.mock_client)
+        self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
+        prev_container = mock.Mock(
+            id='ababab',
+            image_config={'ContainerConfig': {}})
+        prev_container.get.return_value = None
+
+        opts = service._get_container_create_options(
+            {},
+            1,
+            previous_container=prev_container)
+        assert opts['environment'] == []
+
    def test_get_container_not_found(self):
        self.mock_client.containers.return_value = []
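Taken together, the affinity tests above encode the 1.6.1 swarm fix: the `affinity:container==<id>` scheduling hint is only emitted when the previous container has volume data worth staying close to. Roughly, and with a hypothetical helper name:

def affinity_env(previous_container):
    # Pin the replacement to the old container's node only when there
    # are mounts to re-attach; otherwise leave the scheduler free.
    if previous_container is not None and previous_container.get('Mounts'):
        return ['affinity:container==%s' % previous_container.id]
    return []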
@ -650,6 +689,7 @@ class ServiceVolumesTest(unittest.TestCase):
            '/host/volume:/host/volume:ro',
            '/new/volume',
            '/existing/volume',
+            'named:/named/vol',
        ]]

        self.mock_client.inspect_image.return_value = {
@ -691,8 +731,8 @@
        }, has_been_inspected=True)

        expected = [
-            VolumeSpec.parse('/var/lib/docker/aaaaaaaa:/existing/volume:rw'),
-            VolumeSpec.parse('/var/lib/docker/cccccccc:/mnt/image/data:rw'),
+            VolumeSpec.parse('existingvolume:/existing/volume:rw'),
+            VolumeSpec.parse('imagedata:/mnt/image/data:rw'),
        ]

        volumes = get_container_data_volumes(container, options)
@ -710,7 +750,8 @@
            'ContainerConfig': {'Volumes': {}}
        }

-        intermediate_container = Container(self.mock_client, {
+        previous_container = Container(self.mock_client, {
            'Id': 'cdefab',
            'Image': 'ababab',
            'Mounts': [{
                'Source': '/var/lib/docker/aaaaaaaa',

@ -724,11 +765,12 @@ class ServiceVolumesTest(unittest.TestCase):
        expected = [
            '/host/volume:/host/volume:ro',
            '/host/rw/volume:/host/rw/volume:rw',
-            '/var/lib/docker/aaaaaaaa:/existing/volume:rw',
+            'existingvolume:/existing/volume:rw',
        ]

-        binds = merge_volume_bindings(options, intermediate_container)
-        self.assertEqual(set(binds), set(expected))
+        binds, affinity = merge_volume_bindings(options, previous_container)
+        assert sorted(binds) == sorted(expected)
+        assert affinity == {'affinity:container': '=cdefab'}

    def test_mount_same_host_path_to_two_volumes(self):
        service = Service(
@ -761,13 +803,14 @@
            ]),
        )

-    def test_different_host_path_in_container_json(self):
+    def test_get_container_create_options_with_different_host_path_in_container_json(self):
        service = Service(
            'web',
            image='busybox',
            volumes=[VolumeSpec.parse('/host/path:/data')],
            client=self.mock_client,
        )
+        volume_name = 'abcdefff1234'

        self.mock_client.inspect_image.return_value = {
            'Id': 'ababab',

@ -788,7 +831,7 @@ class ServiceVolumesTest(unittest.TestCase):
                    'Mode': '',
                    'RW': True,
                    'Driver': 'local',
-                    'Name': 'abcdefff1234'
+                    'Name': volume_name,
                },
            ]
        }
@ -799,9 +842,9 @@
            previous_container=Container(self.mock_client, {'Id': '123123123'}),
        )

-        self.assertEqual(
-            self.mock_client.create_host_config.call_args[1]['binds'],
-            ['/mnt/sda1/host/path:/data:rw'],
+        assert (
+            self.mock_client.create_host_config.call_args[1]['binds'] ==
+            ['{}:/data:rw'.format(volume_name)]
        )

    def test_warn_on_masked_volume_no_warning_when_no_container_volumes(self):