Mirror of https://github.com/docker/compose.git (synced 2025-07-22 05:04:27 +02:00)

Commit ed549155b3

CHANGES.md (23 changes)
@@ -1,6 +1,29 @@
Change log
==========

1.2.0 (2015-04-16)
------------------

- `docker-compose.yml` now supports an `extends` option, which enables a service to inherit configuration from another service in another configuration file. This is really good for sharing common configuration between apps, or for configuring the same app for different environments. Here's the [documentation](https://github.com/docker/compose/blob/master/docs/yml.md#extends).

- When using Compose with a Swarm cluster, containers that depend on one another will be co-scheduled on the same node. This means that most Compose apps will now work out of the box, as long as they don't use `build`.

- Repeated invocations of `docker-compose up` when using Compose with a Swarm cluster now work reliably.

- Directories passed to `build`, filenames passed to `env_file` and volume host paths passed to `volumes` are now treated as relative to the *directory of the configuration file*, not the directory that `docker-compose` is being run in. In the majority of cases, those are the same, but if you use the `-f|--file` argument to specify a configuration file in another directory, **this is a breaking change**.

- A service can now share another service's network namespace with `net: container:<service>`.

- `volumes_from` and `net: container:<service>` entries are taken into account when resolving dependencies, so `docker-compose up <service>` will correctly start all dependencies of `<service>`.

- `docker-compose run` now accepts a `--user` argument to specify a user to run the command as, just like `docker run`.

- The `up`, `stop` and `restart` commands now accept a `--timeout` (or `-t`) argument to specify how long to wait when attempting to gracefully stop containers, just like `docker stop`.

- `docker-compose rm` now accepts `-f` as a shorthand for `--force`, just like `docker rm`.

Thanks, @abesto, @albers, @alunduil, @dnephin, @funkyfuture, @gilclark, @IanVS, @KingsleyKelly, @knutwalker, @thaJeztah and @vmalloc!
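The merge semantics the new `extends` option implies can be sketched in Python. This is a simplified, hypothetical model of `merge_service_dicts` from the `compose/config.py` added in this commit, covering only the environment mapping and a few list-valued keys; it is not the full implementation:

```python
def merge_service_dicts(base, override):
    """Simplified merge: the extending service wins for scalar keys,
    environments are dict-merged, and list keys are concatenated."""
    merged = dict(base)

    # environment: dict-merge, with the extending service taking precedence
    if 'environment' in base or 'environment' in override:
        env = dict(base.get('environment', {}))
        env.update(override.get('environment', {}))
        merged['environment'] = env

    # list-valued keys are concatenated rather than replaced
    list_keys = ('ports', 'expose', 'external_links')
    for key in list_keys:
        if key in base or key in override:
            merged[key] = base.get(key, []) + override.get(key, [])

    # every other key: the extending service simply overrides
    for key, value in override.items():
        if key != 'environment' and key not in list_keys:
            merged[key] = value

    return merged


base = {'image': 'busybox', 'environment': {'DEBUG': '0'}, 'ports': ['8000']}
override = {'environment': {'DEBUG': '1'}, 'ports': ['8001']}
result = merge_service_dicts(base, override)
```

Here `result` keeps the base `image`, takes the overriding `DEBUG=1`, and accumulates both port mappings.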
1.1.0 (2015-02-25)
------------------
@@ -24,8 +24,21 @@ that should get you started.

## Running the test suite

Use the test script to run linting checks and then the full test suite:

    $ script/test

Tests are run against a Docker daemon inside a container, so that we can test against multiple Docker versions. By default they'll run against only the latest Docker version - set the `DOCKER_VERSIONS` environment variable to "all" to run against all supported versions:

    $ DOCKER_VERSIONS=all script/test

Arguments to `script/test` are passed through to the `nosetests` executable, so you can specify a test directory, file, module, class or method:

    $ script/test tests/unit
    $ script/test tests/unit/cli_test.py
    $ script/test tests.integration.service_test
    $ script/test tests.integration.service_test:ServiceTest.test_containers

## Building binaries

Linux:
Dockerfile (28 changes)

@@ -1,5 +1,31 @@
FROM debian:wheezy
RUN apt-get update -qq && apt-get install -qy python python-pip python-dev git && apt-get clean

RUN set -ex; \
    apt-get update -qq; \
    apt-get install -y \
        python \
        python-pip \
        python-dev \
        git \
        apt-transport-https \
        ca-certificates \
        curl \
        lxc \
        iptables \
    ; \
    rm -rf /var/lib/apt/lists/*

ENV ALL_DOCKER_VERSIONS 1.3.3 1.4.1 1.5.0

RUN set -ex; \
    for v in ${ALL_DOCKER_VERSIONS}; do \
        curl https://get.docker.com/builds/Linux/x86_64/docker-$v -o /usr/local/bin/docker-$v; \
        chmod +x /usr/local/bin/docker-$v; \
    done

# Set the default Docker to be run
RUN ln -s /usr/local/bin/docker-1.3.3 /usr/local/bin/docker

RUN useradd -d /home/user -m -s /bin/bash user
WORKDIR /code/
@@ -1,4 +1,3 @@
Aanand Prasad <aanand.prasad@gmail.com> (@aanand)
Ben Firshman <ben@firshman.co.uk> (@bfirsh)
Chris Corbyn <chris@w3style.co.uk> (@d11wtq)
Daniel Nephin <dnephin@gmail.com> (@dnephin)
@@ -4,6 +4,7 @@ include requirements.txt
include requirements-dev.txt
include tox.ini
include *.md
include contrib/completion/bash/docker-compose
recursive-include tests *
global-exclude *.pyc
global-exclude *.pyo
@@ -1,8 +1,6 @@
Docker Compose
==============

[](https://app.wercker.com/project/bykey/d5dbac3907301c3d5ce735e2d5e95a5b)

[](http://jenkins.dockerproject.com/job/Compose%20Master/)
*(Previously known as Fig)*

Compose is a tool for defining and running complex applications with Docker.
@@ -53,4 +51,5 @@ Compose has commands for managing the whole lifecycle of your application:
Installation and documentation
------------------------------

Full documentation is available on [Fig's website](http://www.fig.sh/).
- Full documentation is available on [Docker's website](http://docs.docker.com/compose/).
- Hop into #docker-compose on Freenode if you have any questions.
@@ -1,4 +1,4 @@
from __future__ import unicode_literals
from .service import Service  # noqa:flake8

__version__ = '1.1.0'
__version__ = '1.2.0'
@@ -4,9 +4,9 @@ from requests.exceptions import ConnectionError, SSLError
import logging
import os
import re
import yaml
import six

from .. import config
from ..project import Project
from ..service import ConfigError
from .docopt_command import DocoptCommand
@@ -25,7 +25,7 @@ class Command(DocoptCommand):
    def dispatch(self, *args, **kwargs):
        try:
            super(Command, self).dispatch(*args, **kwargs)
        except SSLError, e:
        except SSLError as e:
            raise errors.UserError('SSL error: %s' % e)
        except ConnectionError:
            if call_silently(['which', 'docker']) != 0:
@@ -69,18 +69,11 @@ class Command(DocoptCommand):
            return verbose_proxy.VerboseProxy('docker', client)
        return client

    def get_config(self, config_path):
        try:
            with open(config_path, 'r') as fh:
                return yaml.safe_load(fh)
        except IOError as e:
            raise errors.UserError(six.text_type(e))

    def get_project(self, config_path, project_name=None, verbose=False):
        try:
            return Project.from_config(
            return Project.from_dicts(
                self.get_project_name(config_path, project_name),
                self.get_config(config_path),
                config.load(config_path),
                self.get_client(verbose=verbose))
        except ConfigError as e:
            raise errors.UserError(six.text_type(e))
@@ -32,4 +32,4 @@ def docker_client():
    )

    timeout = int(os.environ.get('DOCKER_CLIENT_TIMEOUT', 60))
    return Client(base_url=base_url, tls=tls_config, version='1.14', timeout=timeout)
    return Client(base_url=base_url, tls=tls_config, version='1.15', timeout=timeout)
@@ -46,7 +46,7 @@ class LogPrinter(object):
            if monochrome:
                color_fn = no_color
            else:
                color_fn = color_fns.next()
                color_fn = next(color_fns)
            generators.append(self._make_log_generator(container, color_fn))

        return generators
@@ -1,26 +1,26 @@
from __future__ import print_function
from __future__ import unicode_literals
from inspect import getdoc
from operator import attrgetter
import logging
import sys
import re
import signal
from operator import attrgetter
import sys

from inspect import getdoc
from docker.errors import APIError
import dockerpty

from .. import __version__
from ..project import NoSuchService, ConfigurationError
from ..service import BuildError, CannotBeScaledError
from ..config import parse_environment
from .command import Command
from .docopt_command import NoSuchCommand
from .errors import UserError
from .formatter import Formatter
from .log_printer import LogPrinter
from .utils import yesno

from docker.errors import APIError
from .errors import UserError
from .docopt_command import NoSuchCommand

log = logging.getLogger(__name__)

@@ -238,8 +238,8 @@ class TopLevelCommand(Command):
        Usage: rm [options] [SERVICE...]

        Options:
            --force       Don't ask to confirm removal
            -v            Remove volumes associated with containers
            -f, --force   Don't ask to confirm removal
            -v            Remove volumes associated with containers
        """
        all_containers = project.containers(service_names=options['SERVICE'], stopped=True)
        stopped_containers = [c for c in all_containers if not c.is_running]
@@ -276,6 +276,7 @@ class TopLevelCommand(Command):
                                  new container name.
            --entrypoint CMD      Override the entrypoint of the image.
            -e KEY=VAL            Set an environment variable (can be used multiple times)
            -u, --user=""         Run as specified username or uid
            --no-deps             Don't start linked services.
            --rm                  Remove container after run. Ignored in detached mode.
            --service-ports       Run command with the service's ports enabled and mapped
@@ -293,7 +294,7 @@ class TopLevelCommand(Command):
        if len(deps) > 0:
            project.up(
                service_names=deps,
                start_links=True,
                start_deps=True,
                recreate=False,
                insecure_registry=insecure_registry,
                detach=options['-d']
@@ -316,28 +317,31 @@ class TopLevelCommand(Command):
        }

        if options['-e']:
            for option in options['-e']:
                if 'environment' not in service.options:
                    service.options['environment'] = {}
                k, v = option.split('=', 1)
                service.options['environment'][k] = v
        # Merge environment from config with -e command line
        container_options['environment'] = dict(
            parse_environment(service.options.get('environment')),
            **parse_environment(options['-e']))

        if options['--entrypoint']:
            container_options['entrypoint'] = options.get('--entrypoint')

        if options['--user']:
            container_options['user'] = options.get('--user')

        if not options['--service-ports']:
            container_options['ports'] = []

        container = service.create_container(
            one_off=True,
            insecure_registry=insecure_registry,
            **container_options
        )

        service_ports = None
        if options['--service-ports']:
            service_ports = service.options['ports']
        if options['-d']:
            service.start_container(container, ports=service_ports, one_off=True)
            service.start_container(container)
            print(container.name)
        else:
            service.start_container(container, ports=service_ports, one_off=True)
            service.start_container(container)
            dockerpty.start(project.client, container.id, interactive=not options['-T'])
            exit_code = container.wait()
            if options['--rm']:
@@ -389,17 +393,29 @@ class TopLevelCommand(Command):

        They can be started again with `docker-compose start`.

        Usage: stop [SERVICE...]
        Usage: stop [options] [SERVICE...]

        Options:
            -t, --timeout TIMEOUT      Specify a shutdown timeout in seconds.
                                       (default: 10)
        """
        project.stop(service_names=options['SERVICE'])
        timeout = options.get('--timeout')
        params = {} if timeout is None else {'timeout': int(timeout)}
        project.stop(service_names=options['SERVICE'], **params)

    def restart(self, project, options):
        """
        Restart running containers.

        Usage: restart [SERVICE...]
        Usage: restart [options] [SERVICE...]

        Options:
            -t, --timeout TIMEOUT      Specify a shutdown timeout in seconds.
                                       (default: 10)
        """
        project.restart(service_names=options['SERVICE'])
        timeout = options.get('--timeout')
        params = {} if timeout is None else {'timeout': int(timeout)}
        project.restart(service_names=options['SERVICE'], **params)

    def up(self, project, options):
        """
@@ -418,30 +434,33 @@ class TopLevelCommand(Command):
        Usage: up [options] [SERVICE...]

        Options:
            --allow-insecure-ssl   Allow insecure connections to the docker
                                   registry
            -d                     Detached mode: Run containers in the background,
                                   print new container names.
            --no-color             Produce monochrome output.
            --no-deps              Don't start linked services.
            --no-recreate          If containers already exist, don't recreate them.
            --no-build             Don't build an image, even if it's missing
            --allow-insecure-ssl   Allow insecure connections to the docker
                                   registry
            -d                     Detached mode: Run containers in the background,
                                   print new container names.
            --no-color             Produce monochrome output.
            --no-deps              Don't start linked services.
            --no-recreate          If containers already exist, don't recreate them.
            --no-build             Don't build an image, even if it's missing
            -t, --timeout TIMEOUT  When attached, use this timeout in seconds
                                   for the shutdown. (default: 10)

        """
        insecure_registry = options['--allow-insecure-ssl']
        detached = options['-d']

        monochrome = options['--no-color']

        start_links = not options['--no-deps']
        start_deps = not options['--no-deps']
        recreate = not options['--no-recreate']
        service_names = options['SERVICE']

        project.up(
            service_names=service_names,
            start_links=start_links,
            start_deps=start_deps,
            recreate=recreate,
            insecure_registry=insecure_registry,
            detach=options['-d'],
            detach=detached,
            do_build=not options['--no-build'],
        )

@@ -460,7 +479,9 @@ class TopLevelCommand(Command):
        signal.signal(signal.SIGINT, handler)

        print("Gracefully stopping... (press Ctrl+C again to force)")
        project.stop(service_names=service_names)
        timeout = options.get('--timeout')
        params = {} if timeout is None else {'timeout': int(timeout)}
        project.stop(service_names=service_names, **params)


def list_containers(containers):
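`stop`, `restart`, and the Ctrl+C handler above all build their keyword arguments the same way, so that `project.stop()`'s own default applies when `--timeout` was not given. As a standalone sketch (the helper name is hypothetical; the commit repeats the two lines inline):

```python
def timeout_params(options):
    # Build kwargs only when --timeout was supplied on the command line,
    # so the callee's default timeout (10s) applies otherwise.
    timeout = options.get('--timeout')
    return {} if timeout is None else {'timeout': int(timeout)}


# docopt leaves the value as None when the flag is absent, and as a
# string when present; both shapes are handled.
params = timeout_params({'--timeout': '30'})
```

The empty-dict case matters: passing `timeout=None` explicitly would override the callee's default, while `**{}` passes nothing at all.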
compose/config.py (432 lines, new file)

@@ -0,0 +1,432 @@
import os
import yaml
import six


DOCKER_CONFIG_KEYS = [
    'cap_add',
    'cap_drop',
    'cpu_shares',
    'command',
    'detach',
    'dns',
    'dns_search',
    'domainname',
    'entrypoint',
    'env_file',
    'environment',
    'hostname',
    'image',
    'links',
    'mem_limit',
    'net',
    'ports',
    'privileged',
    'restart',
    'stdin_open',
    'tty',
    'user',
    'volumes',
    'volumes_from',
    'working_dir',
]

ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
    'build',
    'expose',
    'external_links',
    'name',
]

DOCKER_CONFIG_HINTS = {
    'cpu_share' : 'cpu_shares',
    'link'      : 'links',
    'port'      : 'ports',
    'privilege' : 'privileged',
    'priviliged': 'privileged',
    'privilige' : 'privileged',
    'volume'    : 'volumes',
    'workdir'   : 'working_dir',
}


def load(filename):
    working_dir = os.path.dirname(filename)
    return from_dictionary(load_yaml(filename), working_dir=working_dir, filename=filename)


def from_dictionary(dictionary, working_dir=None, filename=None):
    service_dicts = []

    for service_name, service_dict in list(dictionary.items()):
        if not isinstance(service_dict, dict):
            raise ConfigurationError('Service "%s" doesn\'t have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.' % service_name)
        loader = ServiceLoader(working_dir=working_dir, filename=filename)
        service_dict = loader.make_service_dict(service_name, service_dict)
        service_dicts.append(service_dict)

    return service_dicts


def make_service_dict(name, service_dict, working_dir=None):
    return ServiceLoader(working_dir=working_dir).make_service_dict(name, service_dict)


class ServiceLoader(object):
    def __init__(self, working_dir, filename=None, already_seen=None):
        self.working_dir = working_dir
        self.filename = filename
        self.already_seen = already_seen or []

    def make_service_dict(self, name, service_dict):
        if self.signature(name) in self.already_seen:
            raise CircularReference(self.already_seen)

        service_dict = service_dict.copy()
        service_dict['name'] = name
        service_dict = resolve_environment(service_dict, working_dir=self.working_dir)
        service_dict = self.resolve_extends(service_dict)
        return process_container_options(service_dict, working_dir=self.working_dir)

    def resolve_extends(self, service_dict):
        if 'extends' not in service_dict:
            return service_dict

        extends_options = process_extends_options(service_dict['name'], service_dict['extends'])

        if self.working_dir is None:
            raise Exception("No working_dir passed to ServiceLoader()")

        other_config_path = expand_path(self.working_dir, extends_options['file'])
        other_working_dir = os.path.dirname(other_config_path)
        other_already_seen = self.already_seen + [self.signature(service_dict['name'])]
        other_loader = ServiceLoader(
            working_dir=other_working_dir,
            filename=other_config_path,
            already_seen=other_already_seen,
        )

        other_config = load_yaml(other_config_path)
        other_service_dict = other_config[extends_options['service']]
        other_service_dict = other_loader.make_service_dict(
            service_dict['name'],
            other_service_dict,
        )
        validate_extended_service_dict(
            other_service_dict,
            filename=other_config_path,
            service=extends_options['service'],
        )

        return merge_service_dicts(other_service_dict, service_dict)

    def signature(self, name):
        return (self.filename, name)


def process_extends_options(service_name, extends_options):
    error_prefix = "Invalid 'extends' configuration for %s:" % service_name

    if not isinstance(extends_options, dict):
        raise ConfigurationError("%s must be a dictionary" % error_prefix)

    if 'service' not in extends_options:
        raise ConfigurationError(
            "%s you need to specify a service, e.g. 'service: web'" % error_prefix
        )

    for k, _ in extends_options.items():
        if k not in ['file', 'service']:
            raise ConfigurationError(
                "%s unsupported configuration option '%s'" % (error_prefix, k)
            )

    return extends_options


def validate_extended_service_dict(service_dict, filename, service):
    error_prefix = "Cannot extend service '%s' in %s:" % (service, filename)

    if 'links' in service_dict:
        raise ConfigurationError("%s services with 'links' cannot be extended" % error_prefix)

    if 'volumes_from' in service_dict:
        raise ConfigurationError("%s services with 'volumes_from' cannot be extended" % error_prefix)

    if 'net' in service_dict:
        if get_service_name_from_net(service_dict['net']) is not None:
            raise ConfigurationError("%s services with 'net: container' cannot be extended" % error_prefix)


def process_container_options(service_dict, working_dir=None):
    for k in service_dict:
        if k not in ALLOWED_KEYS:
            msg = "Unsupported config option for %s service: '%s'" % (service_dict['name'], k)
            if k in DOCKER_CONFIG_HINTS:
                msg += " (did you mean '%s'?)" % DOCKER_CONFIG_HINTS[k]
            raise ConfigurationError(msg)

    service_dict = service_dict.copy()

    if 'volumes' in service_dict:
        service_dict['volumes'] = resolve_host_paths(service_dict['volumes'], working_dir=working_dir)

    if 'build' in service_dict:
        service_dict['build'] = resolve_build_path(service_dict['build'], working_dir=working_dir)

    return service_dict


def merge_service_dicts(base, override):
    d = base.copy()

    if 'environment' in base or 'environment' in override:
        d['environment'] = merge_environment(
            base.get('environment'),
            override.get('environment'),
        )

    if 'volumes' in base or 'volumes' in override:
        d['volumes'] = merge_volumes(
            base.get('volumes'),
            override.get('volumes'),
        )

    if 'image' in override and 'build' in d:
        del d['build']

    if 'build' in override and 'image' in d:
        del d['image']

    list_keys = ['ports', 'expose', 'external_links']

    for key in list_keys:
        if key in base or key in override:
            d[key] = base.get(key, []) + override.get(key, [])

    list_or_string_keys = ['dns', 'dns_search']

    for key in list_or_string_keys:
        if key in base or key in override:
            d[key] = to_list(base.get(key)) + to_list(override.get(key))

    already_merged_keys = ['environment', 'volumes'] + list_keys + list_or_string_keys

    for k in set(ALLOWED_KEYS) - set(already_merged_keys):
        if k in override:
            d[k] = override[k]

    return d


def merge_environment(base, override):
    env = parse_environment(base)
    env.update(parse_environment(override))
    return env


def parse_links(links):
    return dict(parse_link(l) for l in links)


def parse_link(link):
    if ':' in link:
        source, alias = link.split(':', 1)
        return (alias, source)
    else:
        return (link, link)


def get_env_files(options, working_dir=None):
    if 'env_file' not in options:
        return {}

    if working_dir is None:
        raise Exception("No working_dir passed to get_env_files()")

    env_files = options.get('env_file', [])
    if not isinstance(env_files, list):
        env_files = [env_files]

    return [expand_path(working_dir, path) for path in env_files]


def resolve_environment(service_dict, working_dir=None):
    service_dict = service_dict.copy()

    if 'environment' not in service_dict and 'env_file' not in service_dict:
        return service_dict

    env = {}

    if 'env_file' in service_dict:
        for f in get_env_files(service_dict, working_dir=working_dir):
            env.update(env_vars_from_file(f))
        del service_dict['env_file']

    env.update(parse_environment(service_dict.get('environment')))
    env = dict(resolve_env_var(k, v) for k, v in six.iteritems(env))

    service_dict['environment'] = env
    return service_dict


def parse_environment(environment):
    if not environment:
        return {}

    if isinstance(environment, list):
        return dict(split_env(e) for e in environment)

    if isinstance(environment, dict):
        return environment

    raise ConfigurationError(
        "environment \"%s\" must be a list or mapping," %
        environment
    )


def split_env(env):
    if '=' in env:
        return env.split('=', 1)
    else:
        return env, None


def resolve_env_var(key, val):
    if val is not None:
        return key, val
    elif key in os.environ:
        return key, os.environ[key]
    else:
        return key, ''


def env_vars_from_file(filename):
    """
    Read in a line delimited file of environment variables.
    """
    if not os.path.exists(filename):
        raise ConfigurationError("Couldn't find env file: %s" % filename)
    env = {}
    for line in open(filename, 'r'):
        line = line.strip()
        if line and not line.startswith('#'):
            k, v = split_env(line)
            env[k] = v
    return env


def resolve_host_paths(volumes, working_dir=None):
    if working_dir is None:
        raise Exception("No working_dir passed to resolve_host_paths()")

    return [resolve_host_path(v, working_dir) for v in volumes]


def resolve_host_path(volume, working_dir):
    container_path, host_path = split_volume(volume)
    if host_path is not None:
        host_path = os.path.expanduser(host_path)
        host_path = os.path.expandvars(host_path)
        return "%s:%s" % (expand_path(working_dir, host_path), container_path)
    else:
        return container_path


def resolve_build_path(build_path, working_dir=None):
    if working_dir is None:
        raise Exception("No working_dir passed to resolve_build_path")

    _path = expand_path(working_dir, build_path)
    if not os.path.exists(_path) or not os.access(_path, os.R_OK):
        raise ConfigurationError("build path %s either does not exist or is not accessible." % _path)
    else:
        return _path


def merge_volumes(base, override):
    d = dict_from_volumes(base)
    d.update(dict_from_volumes(override))
    return volumes_from_dict(d)


def dict_from_volumes(volumes):
    if volumes:
        return dict(split_volume(v) for v in volumes)
    else:
        return {}


def volumes_from_dict(d):
    return [join_volume(v) for v in d.items()]


def split_volume(string):
    if ':' in string:
        (host, container) = string.split(':', 1)
        return (container, host)
    else:
        return (string, None)


def join_volume(pair):
    (container, host) = pair
    if host is None:
        return container
    else:
        return ":".join((host, container))


def expand_path(working_dir, path):
    return os.path.abspath(os.path.join(working_dir, path))


def to_list(value):
    if value is None:
        return []
    elif isinstance(value, six.string_types):
        return [value]
    else:
        return value


def get_service_name_from_net(net_config):
    if not net_config:
        return

    if not net_config.startswith('container:'):
        return

    _, net_name = net_config.split(':', 1)
    return net_name


def load_yaml(filename):
    try:
        with open(filename, 'r') as fh:
            return yaml.safe_load(fh)
    except IOError as e:
        raise ConfigurationError(six.text_type(e))


class ConfigurationError(Exception):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return self.msg


class CircularReference(ConfigurationError):
    def __init__(self, trail):
        self.trail = trail

    @property
    def msg(self):
        lines = [
            "{} in {}".format(service_name, filename)
            for (filename, service_name) in self.trail
        ]
        return "Circular reference:\n  {}".format("\n  extends ".join(lines))
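The volume helpers in `compose/config.py` key merged volumes by container path, so an extending service can re-map the host side of an existing mount point. A minimal standalone restatement of that round trip (same semantics as `split_volume`, `join_volume`, and `merge_volumes` above, reproduced here so the snippet runs on its own):

```python
def split_volume(string):
    # "host:container" -> (container, host); a bare container path -> (path, None)
    if ':' in string:
        host, container = string.split(':', 1)
        return (container, host)
    return (string, None)


def join_volume(pair):
    container, host = pair
    return container if host is None else ':'.join((host, container))


def merge_volumes(base, override):
    # Keyed by container path, so the override's host path wins
    # for the same mount point.
    d = dict(split_volume(v) for v in (base or []))
    d.update(split_volume(v) for v in (override or []))
    return [join_volume(item) for item in d.items()]


merged = merge_volumes(['/tmp/base:/data', '/cache'], ['/tmp/override:/data'])
```

The extending service's `/tmp/override:/data` replaces the base mapping for `/data`, while the anonymous `/cache` volume is carried through unchanged.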
@@ -2,6 +2,7 @@ from __future__ import unicode_literals
from __future__ import absolute_import

import six
from functools import reduce


class Container(object):
@ -2,6 +2,8 @@ from __future__ import unicode_literals
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
|
||||
from functools import reduce
|
||||
from .config import get_service_name_from_net, ConfigurationError
|
||||
from .service import Service
|
||||
from .container import Container
|
||||
from docker.errors import APIError
|
||||
@ -18,6 +20,15 @@ def sort_service_dicts(services):
|
||||
def get_service_names(links):
|
||||
return [link.split(':')[0] for link in links]
|
||||
|
||||
def get_service_dependents(service_dict, services):
|
||||
name = service_dict['name']
|
||||
return [
|
||||
service for service in services
|
||||
if (name in get_service_names(service.get('links', [])) or
|
||||
name in service.get('volumes_from', []) or
|
||||
name == get_service_name_from_net(service.get('net')))
|
||||
]
|
||||
|
||||
def visit(n):
|
||||
if n['name'] in temporary_marked:
|
||||
if n['name'] in get_service_names(n.get('links', [])):
|
||||
@ -28,8 +39,7 @@ def sort_service_dicts(services):
|
||||
raise DependencyError('Circular import between %s' % ' and '.join(temporary_marked))
|
||||
if n in unmarked:
|
||||
temporary_marked.add(n['name'])
|
||||
dependents = [m for m in services if (n['name'] in get_service_names(m.get('links', []))) or (n['name'] in m.get('volumes_from', []))]
|
||||
for m in dependents:
|
||||
for m in get_service_dependents(n, services):
|
||||
visit(m)
|
||||
temporary_marked.remove(n['name'])
|
||||
unmarked.remove(n)
|
||||
@@ -59,20 +69,12 @@ class Project(object):
        for service_dict in sort_service_dicts(service_dicts):
            links = project.get_links(service_dict)
            volumes_from = project.get_volumes_from(service_dict)
            net = project.get_net(service_dict)

            project.services.append(Service(client=client, project=name, links=links, volumes_from=volumes_from, **service_dict))
            project.services.append(Service(client=client, project=name, links=links, net=net,
                                            volumes_from=volumes_from, **service_dict))
        return project

    @classmethod
    def from_config(cls, name, config, client):
        dicts = []
        for service_name, service in list(config.items()):
            if not isinstance(service, dict):
                raise ConfigurationError('Service "%s" doesn\'t have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.' % service_name)
            service['name'] = service_name
            dicts.append(service)
        return cls.from_dicts(name, dicts, client)

    def get_service(self, name):
        """
        Retrieve a service by name. Raises NoSuchService
@@ -84,31 +86,31 @@ class Project(object):

        raise NoSuchService(name)

    def get_services(self, service_names=None, include_links=False):
    def get_services(self, service_names=None, include_deps=False):
        """
        Returns a list of this project's services filtered
        by the provided list of names, or all services if service_names is None
        or [].

        If include_links is specified, returns a list including the links for
        If include_deps is specified, returns a list including the dependencies for
        service_names, in order of dependency.

        Preserves the original order of self.services where possible,
        reordering as needed to resolve links.
        reordering as needed to resolve dependencies.

        Raises NoSuchService if any of the named services do not exist.
        """
        if service_names is None or len(service_names) == 0:
            return self.get_services(
                service_names=[s.name for s in self.services],
                include_links=include_links
                include_deps=include_deps
            )
        else:
            unsorted = [self.get_service(name) for name in service_names]
            services = [s for s in self.services if s in unsorted]

            if include_links:
                services = reduce(self._inject_links, services, [])
            if include_deps:
                services = reduce(self._inject_deps, services, [])

            uniques = []
            [uniques.append(s) for s in services if s not in uniques]
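The last two lines above deduplicate while preserving order; written as a plain helper rather than a list comprehension used for its side effect (illustrative sketch, not part of the diff):

```python
def unique_in_order(items):
    # Keep the first occurrence of each item, preserving list order.
    uniques = []
    for item in items:
        if item not in uniques:
            uniques.append(item)
    return uniques
```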
@@ -145,6 +147,28 @@ class Project(object):
            del service_dict['volumes_from']
        return volumes_from

    def get_net(self, service_dict):
        if 'net' in service_dict:
            net_name = get_service_name_from_net(service_dict.get('net'))

            if net_name:
                try:
                    net = self.get_service(net_name)
                except NoSuchService:
                    try:
                        net = Container.from_id(self.client, net_name)
                    except APIError:
                        raise ConfigurationError('Service "%s" is trying to use the network of "%s", which is not the name of a service or container.' % (service_dict['name'], net_name))
            else:
                net = service_dict['net']

            del service_dict['net']

        else:
            net = 'bridge'

        return net

    def start(self, service_names=None, **options):
        for service in self.get_services(service_names):
            service.start(**options)
@@ -170,13 +194,13 @@ class Project(object):

    def up(self,
           service_names=None,
           start_links=True,
           start_deps=True,
           recreate=True,
           insecure_registry=False,
           detach=False,
           do_build=True):
        running_containers = []
        for service in self.get_services(service_names, include_links=start_links):
        for service in self.get_services(service_names, include_deps=start_deps):
            if recreate:
                for (_, container) in service.recreate_containers(
                    insecure_registry=insecure_registry,
@@ -193,7 +217,7 @@ class Project(object):
        return running_containers

    def pull(self, service_names=None, insecure_registry=False):
        for service in self.get_services(service_names, include_links=True):
        for service in self.get_services(service_names, include_deps=True):
            service.pull(insecure_registry=insecure_registry)

    def remove_stopped(self, service_names=None, **options):
@@ -206,19 +230,22 @@ class Project(object):
                for service in self.get_services(service_names)
                if service.has_container(container, one_off=one_off)]

    def _inject_links(self, acc, service):
        linked_names = service.get_linked_names()
    def _inject_deps(self, acc, service):
        net_name = service.get_net_name()
        dep_names = (service.get_linked_names() +
                     service.get_volumes_from_names() +
                     ([net_name] if net_name else []))

        if len(linked_names) > 0:
            linked_services = self.get_services(
                service_names=linked_names,
                include_links=True
        if len(dep_names) > 0:
            dep_services = self.get_services(
                service_names=list(set(dep_names)),
                include_deps=True
            )
        else:
            linked_services = []
            dep_services = []

        linked_services.append(service)
        return acc + linked_services
        dep_services.append(service)
        return acc + dep_services
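`_inject_deps` is folded over the requested services with `reduce`, prepending each service's transitive dependencies before the service itself. Stripped of the `Service` objects, the recursion can be modelled on plain names (the `graph` dict is illustrative; duplicates are deliberately kept, since the real code dedupes afterwards):

```python
from functools import reduce

def inject_deps(graph, acc, name):
    # Prepend a service's transitive dependencies, then the service itself.
    dep_list = []
    for dep in graph.get(name, []):
        dep_list = inject_deps(graph, dep_list, dep)
    dep_list.append(name)
    return acc + dep_list

def resolve(graph, names):
    # Fold over the requested names, exactly as get_services() folds
    # _inject_deps over its service list.
    return reduce(lambda acc, name: inject_deps(graph, acc, name), names, [])
```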


class NoSuchService(Exception):
@@ -230,13 +257,5 @@ class NoSuchService(Exception):
        return self.msg


class ConfigurationError(Exception):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return self.msg


class DependencyError(ConfigurationError):
    pass

@@ -3,55 +3,20 @@ from __future__ import absolute_import
from collections import namedtuple
import logging
import re
import os
from operator import attrgetter
import sys
import six

from docker.errors import APIError
from docker.utils import create_host_config

from .config import DOCKER_CONFIG_KEYS
from .container import Container, get_container_name
from .progress_stream import stream_output, StreamOutputError

log = logging.getLogger(__name__)


DOCKER_CONFIG_KEYS = [
    'cap_add',
    'cap_drop',
    'cpu_shares',
    'command',
    'detach',
    'dns',
    'dns_search',
    'domainname',
    'entrypoint',
    'env_file',
    'environment',
    'hostname',
    'image',
    'mem_limit',
    'net',
    'ports',
    'privileged',
    'restart',
    'stdin_open',
    'tty',
    'user',
    'volumes',
    'volumes_from',
    'working_dir',
]
DOCKER_CONFIG_HINTS = {
    'cpu_share' : 'cpu_shares',
    'link'      : 'links',
    'port'      : 'ports',
    'privilege' : 'privileged',
    'priviliged': 'privileged',
    'privilige' : 'privileged',
    'volume'    : 'volumes',
    'workdir'   : 'working_dir',
}

DOCKER_START_KEYS = [
    'cap_add',
    'cap_drop',
@@ -87,7 +52,7 @@ ServiceName = namedtuple('ServiceName', 'project service number')


class Service(object):
    def __init__(self, name, client=None, project='default', links=None, external_links=None, volumes_from=None, **options):
    def __init__(self, name, client=None, project='default', links=None, external_links=None, volumes_from=None, net=None, **options):
        if not re.match('^%s+$' % VALID_NAME_CHARS, name):
            raise ConfigError('Invalid service name "%s" - only %s are allowed' % (name, VALID_NAME_CHARS))
        if not re.match('^%s+$' % VALID_NAME_CHARS, project):
@@ -95,26 +60,13 @@ class Service(object):
        if 'image' in options and 'build' in options:
            raise ConfigError('Service %s has both an image and build path specified. A service can either be built to image or use an existing image, not both.' % name)

        for filename in get_env_files(options):
            if not os.path.exists(filename):
                raise ConfigError("Couldn't find env file for service %s: %s" % (name, filename))

        supported_options = DOCKER_CONFIG_KEYS + ['build', 'expose',
                                                  'external_links']

        for k in options:
            if k not in supported_options:
                msg = "Unsupported config option for %s service: '%s'" % (name, k)
                if k in DOCKER_CONFIG_HINTS:
                    msg += " (did you mean '%s'?)" % DOCKER_CONFIG_HINTS[k]
                raise ConfigError(msg)

        self.name = name
        self.client = client
        self.project = project
        self.links = links or []
        self.external_links = external_links or []
        self.volumes_from = volumes_from or []
        self.net = net or None
        self.options = options

    def containers(self, stopped=False, one_off=False):
@@ -217,6 +169,7 @@ class Service(object):
            one_off=False,
            insecure_registry=False,
            do_build=True,
            intermediate_container=None,
            **override_options):
        """
        Create a container for this service. If the image doesn't exist, attempt to pull
@@ -224,7 +177,9 @@ class Service(object):
        """
        container_options = self._get_container_create_options(
            override_options,
            one_off=one_off)
            one_off=one_off,
            intermediate_container=intermediate_container,
        )

        if (do_build and
                self.can_be_built() and
@@ -289,57 +244,33 @@ class Service(object):
            entrypoint=['/bin/echo'],
            command=[],
            detach=True,
            host_config=create_host_config(volumes_from=[container.id]),
        )
        intermediate_container.start(volumes_from=container.id)
        intermediate_container.start()
        intermediate_container.wait()
        container.remove()

        options = dict(override_options)
        new_container = self.create_container(do_build=False, **options)
        self.start_container(new_container, intermediate_container=intermediate_container)
        new_container = self.create_container(
            do_build=False,
            intermediate_container=intermediate_container,
            **options
        )
        self.start_container(new_container)

        intermediate_container.remove()

        return (intermediate_container, new_container)

    def start_container_if_stopped(self, container, **options):
    def start_container_if_stopped(self, container):
        if container.is_running:
            return container
        else:
            log.info("Starting %s..." % container.name)
            return self.start_container(container, **options)
            return self.start_container(container)

    def start_container(self, container, intermediate_container=None, **override_options):
        options = dict(self.options, **override_options)
        port_bindings = build_port_bindings(options.get('ports') or [])

        volume_bindings = dict(
            build_volume_binding(parse_volume_spec(volume))
            for volume in options.get('volumes') or []
            if ':' in volume)

        privileged = options.get('privileged', False)
        net = options.get('net', 'bridge')
        dns = options.get('dns', None)
        dns_search = options.get('dns_search', None)
        cap_add = options.get('cap_add', None)
        cap_drop = options.get('cap_drop', None)

        restart = parse_restart_spec(options.get('restart', None))

        container.start(
            links=self._get_links(link_to_self=options.get('one_off', False)),
            port_bindings=port_bindings,
            binds=volume_bindings,
            volumes_from=self._get_volumes_from(intermediate_container),
            privileged=privileged,
            network_mode=net,
            dns=dns,
            dns_search=dns_search,
            restart_policy=restart,
            cap_add=cap_add,
            cap_drop=cap_drop,
        )
    def start_container(self, container):
        container.start()
        return container

    def start_or_create_containers(
@@ -363,6 +294,15 @@ class Service(object):
    def get_linked_names(self):
        return [s.name for (s, _) in self.links]

    def get_volumes_from_names(self):
        return [s.name for s in self.volumes_from if isinstance(s, Service)]

    def get_net_name(self):
        if isinstance(self.net, Service):
            return self.net.name
        else:
            return

    def _next_container_name(self, all_containers, one_off=False):
        bits = [self.project, self.name]
        if one_off:
@@ -398,7 +338,6 @@ class Service(object):
        for volume_source in self.volumes_from:
            if isinstance(volume_source, Service):
                containers = volume_source.containers(stopped=True)

                if not containers:
                    volumes_from.append(volume_source.create_container().id)
                else:
@@ -412,7 +351,26 @@ class Service(object):

        return volumes_from

    def _get_container_create_options(self, override_options, one_off=False):
    def _get_net(self):
        if not self.net:
            return "bridge"

        if isinstance(self.net, Service):
            containers = self.net.containers()
            if len(containers) > 0:
                net = 'container:' + containers[0].id
            else:
                log.warning("Warning: Service %s is trying to reuse the network stack "
                            "of another service that is not running." % (self.net.name))
                net = None
        elif isinstance(self.net, Container):
            net = 'container:' + self.net.id
        else:
            net = self.net

        return net

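`_get_net` maps the service's `net` setting to a Docker `network_mode` string: default `bridge`, a `Service` resolves to its first running container, a `Container` to its id, and anything else (e.g. `host`) passes through. A simplified standalone model of that branching (the `FakeService` stand-in is illustrative, not a Compose class):

```python
class FakeService(object):
    # Minimal stand-in for compose.service.Service in this sketch.
    def __init__(self, name, container_ids):
        self.name = name
        self._ids = container_ids

    def containers(self):
        return self._ids

def resolve_network_mode(net):
    # Mirrors the branching in _get_net above.
    if not net:
        return 'bridge'
    if isinstance(net, FakeService):
        ids = net.containers()
        if ids:
            return 'container:' + ids[0]
        return None  # the real code logs a warning here
    return net  # e.g. 'host', 'none', or 'container:<id>'
```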
    def _get_container_create_options(self, override_options, one_off=False, intermediate_container=None):
        container_options = dict(
            (k, self.options[k])
            for k in DOCKER_CONFIG_KEYS if k in self.options)
@@ -450,8 +408,6 @@ class Service(object):
            (parse_volume_spec(v).internal, {})
            for v in container_options['volumes'])

        container_options['environment'] = merge_environment(container_options)

        if self.can_be_built():
            container_options['image'] = self.full_name
        else:
@@ -461,8 +417,47 @@ class Service(object):
        for key in DOCKER_START_KEYS:
            container_options.pop(key, None)

        container_options['host_config'] = self._get_container_host_config(override_options, one_off=one_off, intermediate_container=intermediate_container)

        return container_options

    def _get_container_host_config(self, override_options, one_off=False, intermediate_container=None):
        options = dict(self.options, **override_options)
        port_bindings = build_port_bindings(options.get('ports') or [])

        volume_bindings = dict(
            build_volume_binding(parse_volume_spec(volume))
            for volume in options.get('volumes') or []
            if ':' in volume)

        privileged = options.get('privileged', False)
        cap_add = options.get('cap_add', None)
        cap_drop = options.get('cap_drop', None)

        dns = options.get('dns', None)
        if isinstance(dns, six.string_types):
            dns = [dns]

        dns_search = options.get('dns_search', None)
        if isinstance(dns_search, six.string_types):
            dns_search = [dns_search]

        restart = parse_restart_spec(options.get('restart', None))

        return create_host_config(
            links=self._get_links(link_to_self=one_off),
            port_bindings=port_bindings,
            binds=volume_bindings,
            volumes_from=self._get_volumes_from(intermediate_container),
            privileged=privileged,
            network_mode=self._get_net(),
            dns=dns,
            dns_search=dns_search,
            restart_policy=restart,
            cap_add=cap_add,
            cap_drop=cap_drop,
        )

    def _get_image_name(self, image):
        repo, tag = parse_repository_tag(image)
        if tag == "":
@@ -482,7 +477,7 @@ class Service(object):

        try:
            all_events = stream_output(build_output, sys.stdout)
        except StreamOutputError, e:
        except StreamOutputError as e:
            raise BuildError(self, unicode(e))

        image_id = None
@@ -590,8 +585,7 @@ def parse_repository_tag(s):

def build_volume_binding(volume_spec):
    internal = {'bind': volume_spec.internal, 'ro': volume_spec.mode == 'ro'}
    external = os.path.expanduser(volume_spec.external)
    return os.path.abspath(os.path.expandvars(external)), internal
    return volume_spec.external, internal


def build_port_bindings(ports):
@@ -620,54 +614,3 @@ def split_port(port):

    external_ip, external_port, internal_port = parts
    return internal_port, (external_ip, external_port or None)
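Only the three-part (`ip:host:container`) branch of `split_port` survives the hunk above. A plausible standalone reconstruction of the whole parser, assuming the one- and two-part forms return what docker-py expects (this is a sketch under those assumptions, not the verbatim original):

```python
def split_port(port):
    # Parse "8000", "8000:80", or "127.0.0.1:8000:80" into
    # (internal_port, external_binding).
    parts = str(port).split(':')
    if len(parts) == 1:
        # Only an internal port; Docker picks the external one.
        internal_port, = parts
        return internal_port, None
    if len(parts) == 2:
        external_port, internal_port = parts
        return internal_port, external_port
    # ip:host:container, where the host port may be empty.
    external_ip, external_port, internal_port = parts
    return internal_port, (external_ip, external_port or None)
```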


def get_env_files(options):
    env_files = options.get('env_file', [])
    if not isinstance(env_files, list):
        env_files = [env_files]
    return env_files


def merge_environment(options):
    env = {}

    for f in get_env_files(options):
        env.update(env_vars_from_file(f))

    if 'environment' in options:
        if isinstance(options['environment'], list):
            env.update(dict(split_env(e) for e in options['environment']))
        else:
            env.update(options['environment'])

    return dict(resolve_env(k, v) for k, v in env.iteritems())


def split_env(env):
    if '=' in env:
        return env.split('=', 1)
    else:
        return env, None


def resolve_env(key, val):
    if val is not None:
        return key, val
    elif key in os.environ:
        return key, os.environ[key]
    else:
        return key, ''


def env_vars_from_file(filename):
    """
    Read in a line delimited file of environment variables.
    """
    env = {}
    for line in open(filename, 'r'):
        line = line.strip()
        if line and not line.startswith('#'):
            k, v = split_env(line)
            env[k] = v
    return env
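The environment helpers above compose as: env-file lines are parsed first, explicit `environment` entries override them, and bare variable names fall back to the host environment. A Python 3-friendly condensation of that pipeline (`merge_env` is an illustrative wrapper operating on in-memory lines, not the module's file-reading `merge_environment`):

```python
import os

def split_env(env):
    # "FOO=bar" -> ("FOO", "bar"); a bare "FOO" -> ("FOO", None).
    if '=' in env:
        return tuple(env.split('=', 1))
    return env, None

def resolve_env(key, val):
    # A bare name picks up its value from the host environment.
    if val is not None:
        return key, val
    return key, os.environ.get(key, '')

def merge_env(file_lines, environment):
    # env_file entries first, then explicit environment overrides.
    env = {}
    for line in file_lines:
        line = line.strip()
        if line and not line.startswith('#'):
            k, v = split_env(line)
            env[k] = v
    env.update(dict(split_env(e) for e in environment))
    return dict(resolve_env(k, v) for k, v in env.items())
```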

@@ -1,7 +1,7 @@
#!bash
#
# bash completion for docker-compose
#
#
# This work is based on the completion for the docker command.
#
# This script provides completion of:
@@ -94,7 +94,7 @@ _docker-compose_build() {
_docker-compose_docker-compose() {
    case "$prev" in
        --file|-f)
            _filedir
            _filedir y?(a)ml
            return
            ;;
        --project-name|-p)
@@ -196,14 +196,27 @@ _docker-compose_pull() {


_docker-compose_restart() {
    __docker-compose_services_running
    case "$prev" in
        -t | --timeout)
            return
            ;;
    esac

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "-t --timeout" -- "$cur" ) )
            ;;
        *)
            __docker-compose_services_running
            ;;
    esac
}


_docker-compose_rm() {
    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--force -v" -- "$cur" ) )
            COMPREPLY=( $( compgen -W "--force -f -v" -- "$cur" ) )
            ;;
        *)
            __docker-compose_services_stopped
@@ -219,14 +232,14 @@ _docker-compose_run() {
            compopt -o nospace
            return
            ;;
        --entrypoint)
        --entrypoint|--user|-u)
            return
            ;;
            ;;
    esac

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--allow-insecure-ssl -d --entrypoint -e --no-deps --rm --service-ports -T" -- "$cur" ) )
            COMPREPLY=( $( compgen -W "--allow-insecure-ssl -d --entrypoint -e --no-deps --rm --service-ports -T --user -u" -- "$cur" ) )
            ;;
        *)
            __docker-compose_services_all
@@ -254,14 +267,33 @@ _docker-compose_start() {


_docker-compose_stop() {
    __docker-compose_services_running
    case "$prev" in
        -t | --timeout)
            return
            ;;
    esac

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "-t --timeout" -- "$cur" ) )
            ;;
        *)
            __docker-compose_services_running
            ;;
    esac
}


_docker-compose_up() {
    case "$prev" in
        -t | --timeout)
            return
            ;;
    esac

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--allow-insecure-ssl -d --no-build --no-color --no-deps --no-recreate" -- "$cur" ) )
            COMPREPLY=( $( compgen -W "--allow-insecure-ssl -d --no-build --no-color --no-deps --no-recreate -t --timeout" -- "$cur" ) )
            ;;
        *)
            __docker-compose_services_all

@@ -17,7 +17,7 @@ On a Mac, install with `brew install bash-completion`

Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), using e.g.

    curl -L https://raw.githubusercontent.com/docker/compose/1.1.0/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
    curl -L https://raw.githubusercontent.com/docker/compose/1.2.0/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose

Completion will be available upon next login.

@@ -1,14 +1,23 @@
---
layout: default
title: Getting started with Compose and Django
---
page_title: Quickstart Guide: Compose and Django
page_description: Getting started with Docker Compose and Django
page_keywords: documentation, docs, docker, compose, orchestration, containers,
django

Getting started with Compose and Django
===================================

Let's use Compose to set up and run a Django/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md).
## Getting started with Compose and Django

Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called `Dockerfile`. It'll contain this to start with:

This Quick-start Guide will demonstrate how to use Compose to set up and run a
simple Django/PostgreSQL app. Before starting, you'll need to have
[Compose installed](install.md).

### Define the project

Start by setting up the three files you'll need to build the app. First, since
your app is going to run inside a Docker container containing all of its
dependencies, you'll need to define exactly what needs to be included in the
container. This is done using a file called `Dockerfile`. To begin with, the
Dockerfile consists of:

    FROM python:2.7
    ENV PYTHONUNBUFFERED 1
@@ -18,14 +27,21 @@ Let's set up the three files that'll get us started. First, our app is going to
    RUN pip install -r requirements.txt
    ADD . /code/

That'll install our application inside an image with Python installed alongside all of our Python dependencies. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
This Dockerfile will define an image that is used to build a container that
includes your application and has Python installed alongside all of your Python
dependencies. For more information on how to write Dockerfiles, see the
[Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).

Second, we define our Python dependencies in a file called `requirements.txt`:
Second, you'll define your Python dependencies in a file called
`requirements.txt`:

    Django
    psycopg2

Simple enough. Finally, this is all tied together with a file called `docker-compose.yml`. It describes the services that our app comprises of (a web server and database), what Docker images they use, how they link together, what volumes will be mounted inside the containers and what ports they expose.
Finally, this is all tied together with a file called `docker-compose.yml`. It
describes the services that comprise your app (here, a web server and database),
which Docker images they use, how they link together, what volumes will be
mounted inside the containers, and what ports they expose.

    db:
      image: postgres
@@ -39,20 +55,28 @@ Simple enough. Finally, this is all tied together with a file called `docker-com
      links:
       - db

See the [`docker-compose.yml` reference](yml.html) for more information on how it works.
See the [`docker-compose.yml` reference](yml.html) for more information on how
this file works.

We can now start a Django project using `docker-compose run`:
### Build the project

You can now start a Django project with `docker-compose run`:

    $ docker-compose run web django-admin.py startproject composeexample .

First, Compose will build an image for the `web` service using the `Dockerfile`. It will then run `django-admin.py startproject composeexample .` inside a container using that image.
First, Compose will build an image for the `web` service using the `Dockerfile`.
It will then run `django-admin.py startproject composeexample .` inside a
container built using that image.

This will generate a Django app inside the current directory:

    $ ls
    Dockerfile  docker-compose.yml  composeexample  manage.py  requirements.txt

First thing we need to do is set up the database connection. Replace the `DATABASES = ...` definition in `composeexample/settings.py` to read:
### Connect the database

Now you need to set up the database connection. Replace the `DATABASES = ...`
definition in `composeexample/settings.py` to read:

    DATABASES = {
        'default': {
@@ -64,7 +88,9 @@ First thing we need to do is set up the database connection. Replace the `DATABA
        }
    }

These settings are determined by the [postgres](https://registry.hub.docker.com/_/postgres/) Docker image we are using.
These settings are determined by the
[postgres](https://registry.hub.docker.com/_/postgres/) Docker image specified
in the Dockerfile.

Then, run `docker-compose up`:

@@ -83,13 +109,15 @@ Then, run `docker-compose up`:
    myapp_web_1 | Starting development server at http://0.0.0.0:8000/
    myapp_web_1 | Quit the server with CONTROL-C.

And your Django app should be running at port 8000 on your docker daemon (if you're using boot2docker, `boot2docker ip` will tell you its address).
Your Django app should now be running at port 8000 on your Docker daemon (if
you're using Boot2docker, `boot2docker ip` will tell you its address).

You can also run management commands with Docker. To set up your database, for example, run `docker-compose up` and in another terminal run:
You can also run management commands with Docker. To set up your database, for
example, run `docker-compose up` and in another terminal run:

    $ docker-compose run web python manage.py syncdb

## Compose documentation
## More Compose documentation

- [Installing Compose](install.md)
- [User guide](index.md)

@@ -185,6 +185,9 @@ your services once you've finished with them:

    $ docker-compose stop

At this point, you have seen the basics of how Compose works.

At this point, you have seen the basics of how Compose works.

- Next, try the quick start guide for [Django](django.md),
  [Rails](rails.md), or [Wordpress](wordpress.md).
- See the reference guides for complete details on the [commands](cli.md), the
  [configuration file](yml.md) and [environment variables](env.md).

@@ -10,26 +10,17 @@ Compose with a `curl` command.

### Install Docker

First, you'll need to install Docker version 1.3 or greater.
First, install Docker version 1.3 or greater:

If you're on OS X, you can use the
[OS X installer](https://docs.docker.com/installation/mac/) to install both
Docker and the OSX helper app, boot2docker. Once boot2docker is running, set the
environment variables that'll configure Docker and Compose to talk to it:

    $(boot2docker shellinit)

To persist the environment variables across shell sessions, add the above line
to your `~/.bashrc` file.

For complete instructions, or if you are on another platform, consult Docker's
[installation instructions](https://docs.docker.com/installation/).
- [Instructions for Mac OS X](http://docs.docker.com/installation/mac/)
- [Instructions for Ubuntu](http://docs.docker.com/installation/ubuntulinux/)
- [Instructions for other systems](http://docs.docker.com/installation/)

### Install Compose

To install Compose, run the following commands:

    curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose

Optionally, you can also install [command completion](completion.md) for the

@@ -5,3 +5,6 @@
- ['compose/yml.md', 'Reference', 'Compose yml']
- ['compose/env.md', 'Reference', 'Compose ENV variables']
- ['compose/completion.md', 'Reference', 'Compose commandline completion']
- ['compose/django.md', 'Examples', 'Getting started with Compose and Django']
- ['compose/rails.md', 'Examples', 'Getting started with Compose and Rails']
- ['compose/wordpress.md', 'Examples', 'Getting started with Compose and Wordpress']

@ -1,14 +1,20 @@
|
||||
---
|
||||
layout: default
|
||||
title: Getting started with Compose and Rails
|
||||
---
|
||||
page_title: Quickstart Guide: Compose and Rails
|
||||
page_description: Getting started with Docker Compose and Rails
|
||||
page_keywords: documentation, docs, docker, compose, orchestration, containers,
|
||||
rails
|
||||
|
||||
Getting started with Compose and Rails
|
||||
==================================
|
||||
|
||||
We're going to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md).
|
||||
## Getting started with Compose and Rails
|
||||
|
||||
Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called `Dockerfile`. It'll contain this to start with:
|
||||
This Quickstart guide will show you how to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md).
|
||||
|
||||
### Define the project
|
||||
|
||||
Start by setting up the three files you'll need to build the app. First, since
your app is going to run inside a Docker container containing all of its
dependencies, you'll need to define exactly what needs to be included in the
container. This is done using a file called `Dockerfile`. To begin with, the
Dockerfile consists of:

    FROM ruby:2.2.0
    RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
@ -18,14 +24,14 @@ Let's set up the three files that'll get us started. First, our app is going to
    RUN bundle install
    ADD . /myapp

That'll put your application code inside an image that will build a container with Ruby, Bundler and all your dependencies inside it. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).

Next, create a bootstrap `Gemfile` which just loads Rails. It'll be overwritten in a moment by `rails new`.

    source 'https://rubygems.org'
    gem 'rails', '4.2.0'

Finally, `docker-compose.yml` is where the magic happens. This file describes the services that comprise your app (a database and a web app), how to get each one's Docker image (the database just runs on a pre-made PostgreSQL image, and the web app is built from the current directory), and the configuration needed to link them together and expose the web app's port.

    db:
      image: postgres
@ -41,11 +47,16 @@ Finally, `docker-compose.yml` is where the magic happens. It describes what serv
      links:
        - db

### Build the project

With those three files in place, you can now generate the Rails skeleton app
using `docker-compose run`:

    $ docker-compose run web rails new . --force --database=postgresql --skip-bundle

First, Compose will build the image for the `web` service using the
`Dockerfile`. Then it'll run `rails new` inside a new container, using that
image. Once it's done, you should have generated a fresh app:

    $ ls
    Dockerfile   app   docker-compose.yml   tmp
@ -54,17 +65,26 @@ First, Compose will build the image for the `web` service using the `Dockerfile`
    README.rdoc  config.ru  public
    Rakefile     db         test

Uncomment the line in your new `Gemfile` which loads `therubyracer`, so you've
got a JavaScript runtime:

    gem 'therubyracer', platforms: :ruby

Now that you've got a new `Gemfile`, you need to build the image again. (This,
and changes to the Dockerfile itself, should be the only times you'll need to
rebuild.)

    $ docker-compose build

### Connect the database

The app is now bootable, but you're not quite there yet. By default, Rails
expects a database to be running on `localhost` - so you need to point it at the
`db` container instead. You also need to change the database and username to
align with the defaults set by the `postgres` image.

Open up your newly-generated `database.yml` file. Replace its contents with the
following:

    development: &default
      adapter: postgresql
@ -79,23 +99,25 @@ Open up your newly-generated `database.yml`. Replace its contents with the follo
      <<: *default
      database: myapp_test

You can now boot the app with:

    $ docker-compose up

If all's well, you should see some PostgreSQL output, and then—after a few
seconds—the familiar refrain:

    myapp_web_1 | [2014-01-17 17:16:29] INFO  WEBrick 1.3.1
    myapp_web_1 | [2014-01-17 17:16:29] INFO  ruby 2.2.0 (2014-12-25) [x86_64-linux-gnu]
    myapp_web_1 | [2014-01-17 17:16:29] INFO  WEBrick::HTTPServer#start: pid=1 port=3000

Finally, you need to create the database. In another terminal, run:

    $ docker-compose run web rake db:create

That's it. Your app should now be running on port 3000 on your Docker daemon (if
you're using Boot2docker, `boot2docker ip` will tell you its address).

## More Compose documentation

- [Installing Compose](install.md)
- [User guide](index.md)

@ -1,25 +1,40 @@
page_title: Quickstart Guide: Compose and Wordpress
page_description: Getting started with Docker Compose and Wordpress
page_keywords: documentation, docs, docker, compose, orchestration, containers,
wordpress

## Getting started with Compose and Wordpress

You can use Compose to easily run Wordpress in an isolated environment built
with Docker containers.

### Define the project

First, [Install Compose](install.md) and then download Wordpress into the
current directory:

    $ curl https://wordpress.org/latest.tar.gz | tar -xvzf -

This will create a directory called `wordpress`. If you wish, you can rename it
to the name of your project.

Next, inside that directory, create a `Dockerfile`, a file that defines what
environment your app is going to run in. For more information on how to write
Dockerfiles, see the
[Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the
[Dockerfile reference](http://docs.docker.com/reference/builder/). In this case,
your Dockerfile should be:

```
FROM orchardup/php5
ADD . /code
```

This tells Docker how to build an image defining a container that contains PHP
and Wordpress.

Next you'll create a `docker-compose.yml` file that will start your web service
and a separate MySQL instance:

```
web:
@ -37,7 +52,9 @@ db:
  MYSQL_DATABASE: wordpress
```

Two supporting files are needed to get this working - first, `wp-config.php` is
the standard Wordpress config file with a single change to point the database
configuration at the `db` container:

```
<?php
@ -67,7 +84,7 @@ if ( !defined('ABSPATH') )
require_once(ABSPATH . 'wp-settings.php');
```

Second, `router.php` tells PHP's built-in web server how to run Wordpress:

```
<?php
@ -87,10 +104,15 @@ if(file_exists($root.$path))
}
}else include_once 'index.php';
```

### Build the project

With those four files in place, run `docker-compose up` inside your Wordpress
directory and it'll pull and build the needed images, and then start the web and
database containers. You'll then be able to visit Wordpress at port 8000 on your
Docker daemon (if you're using Boot2docker, `boot2docker ip` will tell you its
address).

## More Compose documentation

- [Installing Compose](install.md)
- [User guide](index.md)

91
docs/yml.md
@ -29,8 +29,9 @@ image: a4bc65fd

### build

Path to a directory containing a Dockerfile. When the value supplied is a
relative path, it is interpreted as relative to the location of the yml file
itself. This directory is also the build context that is sent to the Docker daemon.

Compose will build and tag it with a generated name, and use that image thereafter.

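The relative-path rule above can be made concrete with a few lines of Python. This is a sketch of the documented behaviour, not Compose's actual implementation; the `resolve_build_path` helper name is ours:

```python
import os

def resolve_build_path(compose_file, build_value):
    """Resolve a `build:` value as documented: a relative path is taken
    relative to the directory containing the yml file, not the current
    working directory."""
    if os.path.isabs(build_value):
        return build_value
    base = os.path.dirname(os.path.abspath(compose_file))
    return os.path.normpath(os.path.join(base, build_value))

# A compose file at /projects/app/docker-compose.yml with `build: ./webapp`
# resolves against /projects/app, wherever docker-compose is invoked from.
print(resolve_build_path("/projects/app/docker-compose.yml", "./webapp"))
# → /projects/app/webapp
```

The same rule explains the `tests/fixtures/build-path` fixture later in this commit, whose `build: ../build-ctx/` points at a sibling directory of the yml file.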
@ -158,17 +159,101 @@ environment:

Add environment variables from a file. Can be a single value or a list.

If you have specified a Compose file with `docker-compose -f FILE`, paths in
`env_file` are relative to the directory that file is in.

Environment variables specified in `environment` override these values.

```
env_file: .env

env_file:
  - .env
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env
```

```
RACK_ENV: development
```

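The override rule above ("`environment` wins over `env_file`") can be sketched in Python. This is an illustration, not Compose's actual parser; the `KEY=VAL` line format follows the `test.env` fixture in this commit (`FOO=1`), and the choice that later files in the list take precedence is our assumption:

```python
def parse_env_file(text):
    # One KEY=VAL per line; blank lines and '#' comments are skipped.
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith('#'):
            key, _, value = line.partition('=')
            env[key.strip()] = value.strip()
    return env

def effective_env(env_file_texts, environment):
    # Apply env files in order (later files win, by assumption here),
    # then let the service's own `environment` entries override them all.
    env = {}
    for text in env_file_texts:
        env.update(parse_env_file(text))
    env.update(environment)
    return env
```

For example, `effective_env(["RACK_ENV=development\nDEBUG=true\n"], {"DEBUG": "false"})` keeps `RACK_ENV` from the file but takes `DEBUG` from `environment`.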
### extends

Extend another service, in the current file or another, optionally overriding
configuration.

Here's a simple example. Suppose we have 2 files - **common.yml** and
**development.yml**. We can use `extends` to define a service in
**development.yml** which uses configuration defined in **common.yml**:

**common.yml**

```
webapp:
  build: ./webapp
  environment:
    - DEBUG=false
    - SEND_EMAILS=false
```

**development.yml**

```
web:
  extends:
    file: common.yml
    service: webapp
  ports:
    - "8000:8000"
  links:
    - db
  environment:
    - DEBUG=true
db:
  image: postgres
```

Here, the `web` service in **development.yml** inherits the configuration of
the `webapp` service in **common.yml** - the `build` and `environment` keys -
and adds `ports` and `links` configuration. It overrides one of the defined
environment variables (DEBUG) with a new value, and the other one
(SEND_EMAILS) is left untouched. It's exactly as if you defined `web` like
this:

```yaml
web:
  build: ./webapp
  ports:
    - "8000:8000"
  links:
    - db
  environment:
    - DEBUG=true
    - SEND_EMAILS=false
```

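The merge described above can be sketched in Python: keys from the extending service replace the base's, except that `environment` is merged variable-by-variable. This is a simplified illustration of the documented semantics, not Compose's internal code, and `extend_service` is a name we made up:

```python
def extend_service(base, overrides):
    """Merge an extending service's config over its base, merging
    `environment` per-variable rather than wholesale."""
    def as_dict(pairs):
        return dict(p.split('=', 1) for p in pairs)

    merged = dict(base)
    for key, value in overrides.items():
        if key == 'environment' and 'environment' in base:
            env = as_dict(base['environment'])
            env.update(as_dict(value))
            merged['environment'] = sorted(
                '{0}={1}'.format(k, v) for k, v in env.items())
        else:
            merged[key] = value
    return merged

common = {'build': './webapp',
          'environment': ['DEBUG=false', 'SEND_EMAILS=false']}
web = extend_service(common, {'ports': ['8000:8000'],
                              'links': ['db'],
                              'environment': ['DEBUG=true']})
```

Running this reproduces the expanded `web` definition shown above: `build` is inherited, `DEBUG` is overridden, and `SEND_EMAILS` is left untouched.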
The `extends` option is great for sharing configuration between different
apps, or for configuring the same app differently for different environments.
You could write a new file for a staging environment, **staging.yml**, which
binds to a different port and doesn't turn on debugging:

```
web:
  extends:
    file: common.yml
    service: webapp
  ports:
    - "80:8000"
  links:
    - db
db:
  image: postgres
```

> **Note:** When you extend a service, `links` and `volumes_from`
> configuration options are **not** inherited - you will have to define
> those manually each time you extend it.

### net

Networking mode. Use the same values as the docker client `--net` parameter.

@ -1,8 +1,8 @@
PyYAML==3.10
docker-py==1.0.0
dockerpty==0.3.2
docopt==0.6.1
requests==2.2.1
six==1.7.3
texttable==0.8.2
websocket-client==0.11.0

@ -1,8 +1,12 @@
#!/bin/bash

set -ex

TAG="docker-compose"

docker build -t "$TAG" .
docker run \
    --rm \
    --user=user \
    --volume="$(pwd):/code" \
    --entrypoint="script/build-linux-inner" \
    "$TAG"

10
script/build-linux-inner
Executable file
@ -0,0 +1,10 @@
#!/bin/bash

set -ex

mkdir -p `pwd`/dist
chmod 777 `pwd`/dist

pyinstaller -F bin/docker-compose
mv dist/docker-compose dist/docker-compose-Linux-x86_64
dist/docker-compose-Linux-x86_64 --version

18
script/ci
Executable file
@ -0,0 +1,18 @@
#!/bin/bash
# This should be run inside a container built from the Dockerfile
# at the root of the repo:
#
#   $ TAG="docker-compose:$(git rev-parse --short HEAD)"
#   $ docker build -t "$TAG" .
#   $ docker run --rm --volume="/var/run/docker.sock:/var/run/docker.sock" --volume="$(pwd)/.git:/code/.git" -e "TAG=$TAG" --entrypoint="script/ci" "$TAG"

set -e

>&2 echo "Validating DCO"
script/validate-dco

export DOCKER_VERSIONS=all
. script/test-versions

>&2 echo "Building Linux binary"
su -c script/build-linux-inner user

88
script/dind
Executable file
@ -0,0 +1,88 @@
#!/bin/bash
set -e

# DinD: a wrapper script which allows docker to be run inside a docker container.
# Original version by Jerome Petazzoni <jerome@docker.com>
# See the blog post: http://blog.docker.com/2013/09/docker-can-now-run-within-docker/
#
# This script should be executed inside a docker container in privileged mode
# ('docker run --privileged', introduced in docker 0.6).

# Usage: dind CMD [ARG...]

# apparmor sucks and Docker needs to know that it's in a container (c) @tianon
export container=docker

# First, make sure that cgroups are mounted correctly.
CGROUP=/cgroup

mkdir -p "$CGROUP"

if ! mountpoint -q "$CGROUP"; then
    mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || {
        echo >&2 'Could not make a tmpfs mount. Did you use --privileged?'
        exit 1
    }
fi

if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then
    mount -t securityfs none /sys/kernel/security || {
        echo >&2 'Could not mount /sys/kernel/security.'
        echo >&2 'AppArmor detection and -privileged mode might break.'
    }
fi

# Mount the cgroup hierarchies exactly as they are in the parent system.
for SUBSYS in $(cut -d: -f2 /proc/1/cgroup); do
    mkdir -p "$CGROUP/$SUBSYS"
    if ! mountpoint -q $CGROUP/$SUBSYS; then
        mount -n -t cgroup -o "$SUBSYS" cgroup "$CGROUP/$SUBSYS"
    fi

    # The two following sections address a bug which manifests itself
    # by a cryptic "lxc-start: no ns_cgroup option specified" when
    # trying to start containers within a container.
    # The bug seems to appear when the cgroup hierarchies are not
    # mounted on the exact same directories in the host, and in the
    # container.

    # Named, control-less cgroups are mounted with "-o name=foo"
    # (and appear as such under /proc/<pid>/cgroup) but are usually
    # mounted on a directory named "foo" (without the "name=" prefix).
    # Systemd and OpenRC (and possibly others) both create such a
    # cgroup. To avoid the aforementioned bug, we symlink "foo" to
    # "name=foo". This shouldn't have any adverse effect.
    name="${SUBSYS#name=}"
    if [ "$name" != "$SUBSYS" ]; then
        ln -s "$SUBSYS" "$CGROUP/$name"
    fi

    # Likewise, on at least one system, it has been reported that
    # systemd would mount the CPU and CPU accounting controllers
    # (respectively "cpu" and "cpuacct") with "-o cpuacct,cpu"
    # but on a directory called "cpu,cpuacct" (note the inversion
    # in the order of the groups). This tries to work around it.
    if [ "$SUBSYS" = 'cpuacct,cpu' ]; then
        ln -s "$SUBSYS" "$CGROUP/cpu,cpuacct"
    fi
done

# Note: as I write those lines, the LXC userland tools cannot setup
# a "sub-container" properly if the "devices" cgroup is not in its
# own hierarchy. Let's detect this and issue a warning.
if ! grep -q :devices: /proc/1/cgroup; then
    echo >&2 'WARNING: the "devices" cgroup should be in its own hierarchy.'
fi
if ! grep -qw devices /proc/1/cgroup; then
    echo >&2 'WARNING: it looks like the "devices" cgroup is not mounted.'
fi

# Mount /tmp
mount -t tmpfs none /tmp

if [ $# -gt 0 ]; then
    exec "$@"
fi

echo >&2 'ERROR: No command specified.'
echo >&2 'You probably want to run hack/make.sh, or maybe a shell?'

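The loop above discovers subsystems by taking the second colon-separated field of each `/proc/1/cgroup` line (`cut -d: -f2`) and applies one special case: a named cgroup such as `name=systemd` is usually mounted on a directory without the `name=` prefix. The same parsing in Python, as a sketch (the helper name and sample data are ours):

```python
def cgroup_mounts(proc_cgroup_text):
    """Return (subsystem, mount_dir) pairs from /proc/1/cgroup contents,
    mirroring the dind script: field 2 of each line is the subsystem, and
    'name=foo' entries are mounted on a directory called just 'foo'."""
    mounts = []
    for line in proc_cgroup_text.strip().splitlines():
        subsys = line.split(':')[1]          # same as `cut -d: -f2`
        mount = subsys
        if subsys.startswith('name='):       # "name=foo" -> directory "foo"
            mount = subsys[len('name='):]
        mounts.append((subsys, mount))
    return mounts

sample = "10:devices:/\n4:cpuacct,cpu:/\n1:name=systemd:/\n"
```

On the sample input, `devices` maps to itself, while `name=systemd` maps to a `systemd` directory, which is exactly the symlink the script creates.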
4
script/shell
Executable file
@ -0,0 +1,4 @@
#!/bin/sh
set -ex
docker build -t docker-compose .
exec docker run -v /var/run/docker.sock:/var/run/docker.sock -v `pwd`:/code -ti --rm --entrypoint bash docker-compose

20
script/test
@ -1,5 +1,17 @@
#!/bin/bash
# See CONTRIBUTING.md for usage.

set -ex

TAG="docker-compose:$(git rev-parse --short HEAD)"

docker build -t "$TAG" .
docker run \
    --rm \
    --volume="/var/run/docker.sock:/var/run/docker.sock" \
    --volume="$(pwd):/code" \
    -e DOCKER_VERSIONS \
    -e "TAG=$TAG" \
    --entrypoint="script/test-versions" \
    "$TAG" \
    "$@"

26
script/test-versions
Executable file
@ -0,0 +1,26 @@
#!/bin/bash
# This should be run inside a container built from the Dockerfile
# at the root of the repo - script/test will do it automatically.

set -e

>&2 echo "Running lint checks"
flake8 compose

if [ "$DOCKER_VERSIONS" == "" ]; then
    DOCKER_VERSIONS="1.5.0"
elif [ "$DOCKER_VERSIONS" == "all" ]; then
    DOCKER_VERSIONS="$ALL_DOCKER_VERSIONS"
fi

for version in $DOCKER_VERSIONS; do
    >&2 echo "Running tests against Docker $version"
    docker run \
        --rm \
        --privileged \
        --volume="/var/lib/docker" \
        -e "DOCKER_VERSION=$version" \
        --entrypoint="script/dind" \
        "$TAG" \
        script/wrapdocker nosetests "$@"
done

@ -1,5 +1,7 @@
#!/bin/bash

set -e

source "$(dirname "$BASH_SOURCE")/.validate"

adds=$(validate_diff --numstat | awk '{ s += $1 } END { print s }')

20
script/wrapdocker
Executable file
@ -0,0 +1,20 @@
#!/bin/bash

if [ "$DOCKER_VERSION" == "" ]; then
    DOCKER_VERSION="1.5.0"
fi

ln -fs "/usr/local/bin/docker-$DOCKER_VERSION" "/usr/local/bin/docker"

# If a pidfile is still around (for example after a container restart),
# delete it so that docker can start.
rm -rf /var/run/docker.pid
docker -d $DOCKER_DAEMON_ARGS &>/var/log/docker.log &

>&2 echo "Waiting for Docker to start..."
while ! docker ps &>/dev/null; do
    sleep 1
done

>&2 echo ">" "$@"
exec "$@"

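The `while ! docker ps; do sleep 1; done` loop above is a generic poll-until-ready pattern. A Python sketch of the same idea, with an explicit deadline added (the shell version waits forever; the `wait_for` name and its parameters are ours):

```python
import time

def wait_for(check, timeout=60, interval=1,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `check()` until it returns True or `timeout` seconds elapse.
    Returns True on success, False if the deadline passes first."""
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

Usage would look like `wait_for(daemon_is_up, timeout=60)`, where `daemon_is_up` performs the equivalent of `docker ps` and reports success or failure. Injecting `clock` and `sleep` keeps the helper testable without real waiting.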
4
setup.py
@ -27,10 +27,10 @@ def find_version(*file_paths):
install_requires = [
    'docopt >= 0.6.1, < 0.7',
    'PyYAML >= 3.10, < 4',
    'requests >= 2.2.1, < 2.6',
    'texttable >= 0.8.1, < 0.9',
    'websocket-client >= 0.11.0, < 1.0',
    'docker-py >= 1.0.0, < 1.2',
    'dockerpty >= 0.3.2, < 0.4',
    'six >= 1.3.0, < 2',
]

2
tests/fixtures/build-ctx/Dockerfile
vendored
Normal file
@ -0,0 +1,2 @@
FROM busybox:latest
CMD echo "success"

2
tests/fixtures/build-path/docker-compose.yml
vendored
Normal file
@ -0,0 +1,2 @@
foo:
  build: ../build-ctx/

@ -1,2 +1,2 @@
service:
  build: .

4
tests/fixtures/env-file/docker-compose.yml
vendored
Normal file
@ -0,0 +1,4 @@
web:
  image: busybox
  command: /bin/true
  env_file: ./test.env

1
tests/fixtures/env-file/test.env
vendored
Normal file
@ -0,0 +1 @@
FOO=1

12
tests/fixtures/extends/circle-1.yml
vendored
Normal file
@ -0,0 +1,12 @@
foo:
  image: busybox
bar:
  image: busybox
web:
  extends:
    file: circle-2.yml
    service: web
baz:
  image: busybox
quux:
  image: busybox

12
tests/fixtures/extends/circle-2.yml
vendored
Normal file
@ -0,0 +1,12 @@
foo:
  image: busybox
bar:
  image: busybox
web:
  extends:
    file: circle-1.yml
    service: web
baz:
  image: busybox
quux:
  image: busybox

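The two fixtures above deliberately extend each other: `web` in `circle-1.yml` extends `web` in `circle-2.yml`, which extends `web` in `circle-1.yml` again. Detecting that loop is a plain cycle check over `(file, service)` pairs. A sketch (this is not Compose's internal code; the graph representation and names are ours):

```python
def has_extends_cycle(extends_graph, start):
    """`extends_graph` maps (file, service) -> the (file, service) it
    extends, or None. Follow the chain from `start`; revisiting a node
    means the extends chain never terminates."""
    seen = set()
    node = start
    while node is not None:
        if node in seen:
            return True
        seen.add(node)
        node = extends_graph.get(node)
    return False

graph = {
    ('circle-1.yml', 'web'): ('circle-2.yml', 'web'),
    ('circle-2.yml', 'web'): ('circle-1.yml', 'web'),
    ('development.yml', 'web'): ('common.yml', 'webapp'),
    ('common.yml', 'webapp'): None,
}
```

Here `has_extends_cycle(graph, ('circle-1.yml', 'web'))` is true, while the well-formed `development.yml` chain from the docs terminates at `common.yml`.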
6
tests/fixtures/extends/common.yml
vendored
Normal file
@ -0,0 +1,6 @@
web:
  image: busybox
  command: /bin/true
  environment:
    - FOO=1
    - BAR=1

16
tests/fixtures/extends/docker-compose.yml
vendored
Normal file
@ -0,0 +1,16 @@
myweb:
  extends:
    file: common.yml
    service: web
  command: sleep 300
  links:
    - "mydb:db"
  environment:
    # leave FOO alone
    # override BAR
    BAR: "2"
    # add BAZ
    BAZ: "2"
mydb:
  image: busybox
  command: sleep 300

6
tests/fixtures/extends/nested-intermediate.yml
vendored
Normal file
@ -0,0 +1,6 @@
webintermediate:
  extends:
    file: common.yml
    service: web
  environment:
    - "FOO=2"

6
tests/fixtures/extends/nested.yml
vendored
Normal file
@ -0,0 +1,6 @@
myweb:
  extends:
    file: nested-intermediate.yml
    service: webintermediate
  environment:
    - "BAR=2"

@ -4,4 +4,4 @@ simple:
  command: /bin/sleep 300
  ports:
    - '3000'
    - '49152:3001'

@ -1,2 +1,2 @@
simple:
  build: .

4
tests/fixtures/user-composefile/docker-compose.yml
vendored
Normal file
@ -0,0 +1,4 @@
service:
  image: busybox:latest
  user: notauser
  command: id

5
tests/fixtures/volume-path/common/services.yml
vendored
Normal file
@ -0,0 +1,5 @@
db:
  image: busybox
  volumes:
    - ./foo:/foo
    - ./bar:/bar

6
tests/fixtures/volume-path/docker-compose.yml
vendored
Normal file
@ -0,0 +1,6 @@
db:
  extends:
    file: common/services.yml
    service: db
  volumes:
    - ./bar:/bar

@ -1,5 +1,6 @@
from __future__ import absolute_import
import sys
import os

from six import StringIO
from mock import patch
@ -23,6 +24,12 @@ class CLITestCase(DockerClientTestCase):

    @property
    def project(self):
        # Hack: allow project to be overridden. This needs refactoring so that
        # the project object is built exactly once, by the command object, and
        # accessed by the test case object.
        if hasattr(self, '_project'):
            return self._project

        return self.command.get_project(self.command.get_config_path())

    def test_help(self):
@ -231,6 +238,28 @@ class CLITestCase(DockerClientTestCase):
            u'/bin/echo helloworld'
        )

    @patch('dockerpty.start')
    def test_run_service_with_user_overridden(self, _):
        self.command.base_dir = 'tests/fixtures/user-composefile'
        name = 'service'
        user = 'sshd'
        args = ['run', '--user={}'.format(user), name]
        self.command.dispatch(args, None)
        service = self.project.get_service(name)
        container = service.containers(stopped=True, one_off=True)[0]
        self.assertEqual(user, container.get('Config.User'))

    @patch('dockerpty.start')
    def test_run_service_with_user_overridden_short_form(self, _):
        self.command.base_dir = 'tests/fixtures/user-composefile'
        name = 'service'
        user = 'sshd'
        args = ['run', '-u', user, name]
        self.command.dispatch(args, None)
        service = self.project.get_service(name)
        container = service.containers(stopped=True, one_off=True)[0]
        self.assertEqual(user, container.get('Config.User'))

    @patch('dockerpty.start')
    def test_run_service_with_environement_overridden(self, _):
        name = 'service'
@ -271,6 +300,7 @@ class CLITestCase(DockerClientTestCase):

    @patch('dockerpty.start')
    def test_run_service_with_map_ports(self, __):

        # create one off container
        self.command.base_dir = 'tests/fixtures/ports-composefile'
        self.command.dispatch(['run', '-d', '--service-ports', 'simple'], None)
@ -286,7 +316,7 @@ class CLITestCase(DockerClientTestCase):
        # check the ports
        self.assertNotEqual(port_random, None)
        self.assertIn("0.0.0.0", port_random)
        self.assertEqual(port_assigned, "0.0.0.0:49152")

    def test_rm(self):
        service = self.project.get_service('simple')
@ -295,6 +325,12 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.command.dispatch(['rm', '--force'], None)
        self.assertEqual(len(service.containers(stopped=True)), 0)
        service = self.project.get_service('simple')
        service.create_container()
        service.kill()
        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.command.dispatch(['rm', '-f'], None)
        self.assertEqual(len(service.containers(stopped=True)), 0)

    def test_kill(self):
        self.command.dispatch(['up', '-d'], None)
@ -369,6 +405,7 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(len(project.get_service('another').containers()), 0)

    def test_port(self):

        self.command.base_dir = 'tests/fixtures/ports-composefile'
        self.command.dispatch(['up', '-d'], None)
        container = self.project.get_service('simple').get_container()
@ -379,5 +416,41 @@ class CLITestCase(DockerClientTestCase):
            return mock_stdout.getvalue().rstrip()

        self.assertEqual(get_port(3000), container.get_local_port(3000))
        self.assertEqual(get_port(3001), "0.0.0.0:49152")
        self.assertEqual(get_port(3002), "")

    def test_env_file_relative_to_compose_file(self):
        config_path = os.path.abspath('tests/fixtures/env-file/docker-compose.yml')
        self.command.dispatch(['-f', config_path, 'up', '-d'], None)
        self._project = self.command.get_project(config_path)

        containers = self.project.containers(stopped=True)
        self.assertEqual(len(containers), 1)
        self.assertIn("FOO=1", containers[0].get('Config.Env'))

    def test_up_with_extends(self):
        self.command.base_dir = 'tests/fixtures/extends'
        self.command.dispatch(['up', '-d'], None)

        self.assertEqual(
            set([s.name for s in self.project.services]),
            set(['mydb', 'myweb']),
        )

        # Sort by name so we get [db, web]
        containers = sorted(
            self.project.containers(stopped=True),
            key=lambda c: c.name,
        )

        self.assertEqual(len(containers), 2)
        web = containers[1]

        self.assertEqual(set(web.links()), set(['db', 'mydb_1', 'extends_mydb_1']))

        expected_env = set([
            "FOO=1",
            "BAR=2",
            "BAZ=2",
        ])
        self.assertTrue(expected_env <= set(web.get('Config.Env')))

@ -1,23 +1,25 @@
|
||||
from __future__ import unicode_literals
|
||||
from compose.project import Project, ConfigurationError
|
||||
from compose import config
|
||||
from compose.project import Project
|
||||
from compose.container import Container
|
||||
from .testcases import DockerClientTestCase
|
||||
|
||||
|
||||
class ProjectTest(DockerClientTestCase):
|
||||
def test_volumes_from_service(self):
|
||||
project = Project.from_config(
|
||||
name='composetest',
|
||||
config={
|
||||
'data': {
|
||||
'image': 'busybox:latest',
|
||||
'volumes': ['/var/data'],
|
||||
},
|
||||
'db': {
|
||||
'image': 'busybox:latest',
|
||||
'volumes_from': ['data'],
|
||||
},
|
||||
service_dicts = config.from_dictionary({
|
||||
'data': {
|
||||
'image': 'busybox:latest',
|
||||
'volumes': ['/var/data'],
|
||||
},
|
||||
'db': {
|
||||
'image': 'busybox:latest',
|
||||
'volumes_from': ['data'],
|
||||
},
|
||||
}, working_dir='.')
|
||||
project = Project.from_dicts(
|
||||
name='composetest',
|
||||
service_dicts=service_dicts,
|
||||
client=self.client,
|
||||
)
|
||||
db = project.get_service('db')
|
||||
@@ -31,19 +33,76 @@ class ProjectTest(DockerClientTestCase):
            volumes=['/var/data'],
            name='composetest_data_container',
        )
        project = Project.from_config(
        project = Project.from_dicts(
            name='composetest',
            config={
            service_dicts=config.from_dictionary({
                'db': {
                    'image': 'busybox:latest',
                    'volumes_from': ['composetest_data_container'],
                },
            },
            }),
            client=self.client,
        )
        db = project.get_service('db')
        self.assertEqual(db.volumes_from, [data_container])

        project.kill()
        project.remove_stopped()

    def test_net_from_service(self):
        project = Project.from_dicts(
            name='composetest',
            service_dicts=config.from_dictionary({
                'net': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"]
                },
                'web': {
                    'image': 'busybox:latest',
                    'net': 'container:net',
                    'command': ["/bin/sleep", "300"]
                },
            }),
            client=self.client,
        )

        project.up()

        web = project.get_service('web')
        net = project.get_service('net')
        self.assertEqual(web._get_net(), 'container:'+net.containers()[0].id)

        project.kill()
        project.remove_stopped()

    def test_net_from_container(self):
        net_container = Container.create(
            self.client,
            image='busybox:latest',
            name='composetest_net_container',
            command='/bin/sleep 300'
        )
        net_container.start()

        project = Project.from_dicts(
            name='composetest',
            service_dicts=config.from_dictionary({
                'web': {
                    'image': 'busybox:latest',
                    'net': 'container:composetest_net_container'
                },
            }),
            client=self.client,
        )

        project.up()

        web = project.get_service('web')
        self.assertEqual(web._get_net(), 'container:'+net_container.id)

        project.kill()
        project.remove_stopped()

    def test_start_stop_kill_remove(self):
        web = self.create_service('web')
        db = self.create_service('db')
@@ -199,20 +258,79 @@ class ProjectTest(DockerClientTestCase):
        project.kill()
        project.remove_stopped()

    def test_project_up_with_no_deps(self):
        console = self.create_service('console')
        db = self.create_service('db', volumes=['/var/db'])
        web = self.create_service('web', links=[(db, 'db')])

        project = Project('composetest', [web, db, console], self.client)
    def test_project_up_starts_depends(self):
        project = Project.from_dicts(
            name='composetest',
            service_dicts=config.from_dictionary({
                'console': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                },
                'data' : {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"]
                },
                'db': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'volumes_from': ['data'],
                },
                'web': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'links': ['db'],
                },
            }),
            client=self.client,
        )
        project.start()
        self.assertEqual(len(project.containers()), 0)

        project.up(['web'], start_links=False)
        self.assertEqual(len(project.containers()), 1)
        self.assertEqual(len(web.containers()), 1)
        self.assertEqual(len(db.containers()), 0)
        self.assertEqual(len(console.containers()), 0)
        project.up(['web'])
        self.assertEqual(len(project.containers()), 3)
        self.assertEqual(len(project.get_service('web').containers()), 1)
        self.assertEqual(len(project.get_service('db').containers()), 1)
        self.assertEqual(len(project.get_service('data').containers()), 1)
        self.assertEqual(len(project.get_service('console').containers()), 0)

        project.kill()
        project.remove_stopped()

    def test_project_up_with_no_deps(self):
        project = Project.from_dicts(
            name='composetest',
            service_dicts=config.from_dictionary({
                'console': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                },
                'data' : {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"]
                },
                'db': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'volumes_from': ['data'],
                },
                'web': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'links': ['db'],
                },
            }),
            client=self.client,
        )
        project.start()
        self.assertEqual(len(project.containers()), 0)

        project.up(['db'], start_deps=False)
        self.assertEqual(len(project.containers(stopped=True)), 2)
        self.assertEqual(len(project.get_service('web').containers()), 0)
        self.assertEqual(len(project.get_service('db').containers()), 1)
        self.assertEqual(len(project.get_service('data').containers()), 0)
        self.assertEqual(len(project.get_service('data').containers(stopped=True)), 1)
        self.assertEqual(len(project.get_service('console').containers()), 0)

        project.kill()
        project.remove_stopped()

@@ -2,6 +2,7 @@ from __future__ import unicode_literals
from __future__ import absolute_import
import os
from os import path
import mock

from compose import Service
from compose.service import CannotBeScaledError
@@ -12,7 +13,7 @@ from .testcases import DockerClientTestCase

def create_and_start_container(service, **override_options):
    container = service.create_container(**override_options)
    return service.start_container(container, **override_options)
    return service.start_container(container)


class ServiceTest(DockerClientTestCase):
@@ -122,6 +123,24 @@ class ServiceTest(DockerClientTestCase):
        self.assertTrue(path.basename(actual_host_path) == path.basename(host_path),
                        msg=("Last component differs: %s, %s" % (actual_host_path, host_path)))

    @mock.patch.dict(os.environ)
    def test_create_container_with_home_and_env_var_in_volume_path(self):
        os.environ['VOLUME_NAME'] = 'my-volume'
        os.environ['HOME'] = '/tmp/home-dir'
        expected_host_path = os.path.join(os.environ['HOME'], os.environ['VOLUME_NAME'])

        host_path = '~/${VOLUME_NAME}'
        container_path = '/container-path'

        service = self.create_service('db', volumes=['%s:%s' % (host_path, container_path)])
        container = service.create_container()
        service.start_container(container)

        actual_host_path = container.get('Volumes')[container_path]
        components = actual_host_path.split('/')
        self.assertTrue(components[-2:] == ['home-dir', 'my-volume'],
                        msg="Last two components differ: %s, %s" % (actual_host_path, expected_host_path))

    def test_create_container_with_volumes_from(self):
        volume_service = self.create_service('data')
        volume_container_1 = volume_service.create_container()
@@ -423,6 +442,11 @@ class ServiceTest(DockerClientTestCase):
        container = create_and_start_container(service)
        self.assertEqual(container.get('HostConfig.NetworkMode'), 'host')

    def test_dns_no_value(self):
        service = self.create_service('web')
        container = create_and_start_container(service)
        self.assertIsNone(container.get('HostConfig.Dns'))

    def test_dns_single_value(self):
        service = self.create_service('web', dns='8.8.8.8')
        container = create_and_start_container(service)
@@ -454,6 +478,11 @@ class ServiceTest(DockerClientTestCase):
        container = create_and_start_container(service)
        self.assertEqual(container.get('HostConfig.CapDrop'), ['SYS_ADMIN', 'NET_ADMIN'])

    def test_dns_search_no_value(self):
        service = self.create_service('web')
        container = create_and_start_container(service)
        self.assertIsNone(container.get('HostConfig.DnsSearch'))

    def test_dns_search_single_value(self):
        service = self.create_service('web', dns_search='example.com')
        container = create_and_start_container(service)
@@ -472,25 +501,21 @@ class ServiceTest(DockerClientTestCase):
    def test_split_env(self):
        service = self.create_service('web', environment=['NORMAL=F1', 'CONTAINS_EQUALS=F=2', 'TRAILING_EQUALS='])
        env = create_and_start_container(service).environment
        for k,v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.iteritems():
        for k,v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.items():
            self.assertEqual(env[k], v)

    def test_env_from_file_combined_with_env(self):
        service = self.create_service('web', environment=['ONE=1', 'TWO=2', 'THREE=3'], env_file=['tests/fixtures/env/one.env', 'tests/fixtures/env/two.env'])
        env = create_and_start_container(service).environment
        for k,v in {'ONE': '1', 'TWO': '2', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'}.iteritems():
        for k,v in {'ONE': '1', 'TWO': '2', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'}.items():
            self.assertEqual(env[k], v)

    @mock.patch.dict(os.environ)
    def test_resolve_env(self):
        service = self.create_service('web', environment={'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None})
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'
        try:
            env = create_and_start_container(service).environment
            for k,v in {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}.iteritems():
                self.assertEqual(env[k], v)
        finally:
            del os.environ['FILE_DEF']
            del os.environ['FILE_DEF_EMPTY']
            del os.environ['ENV_DEF']
        service = self.create_service('web', environment={'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None})
        env = create_and_start_container(service).environment
        for k,v in {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}.items():
            self.assertEqual(env[k], v)

@@ -1,6 +1,7 @@
from __future__ import unicode_literals
from __future__ import absolute_import
from compose.service import Service
from compose.config import make_service_dict
from compose.cli.docker_client import docker_client
from compose.progress_stream import stream_output
from .. import unittest
@@ -21,14 +22,15 @@ class DockerClientTestCase(unittest.TestCase):
            self.client.remove_image(i)

    def create_service(self, name, **kwargs):
        kwargs['image'] = "busybox:latest"

        if 'command' not in kwargs:
            kwargs['command'] = ["/bin/sleep", "300"]

        return Service(
            project='composetest',
            name=name,
            client=self.client,
            image="busybox:latest",
            **kwargs
            **make_service_dict(name, kwargs, working_dir='.')
        )

    def check_build(self, *args, **kwargs):

@@ -19,4 +19,4 @@ class DockerClientTestCase(unittest.TestCase):
        with mock.patch.dict(os.environ):
            os.environ['DOCKER_CLIENT_TIMEOUT'] = timeout = "300"
            client = docker_client.docker_client()
            self.assertEqual(client._timeout, int(timeout))
            self.assertEqual(client.timeout, int(timeout))
@ -6,12 +6,14 @@ import tempfile
import shutil
from .. import unittest

import docker
import mock
from six import StringIO

from compose.cli import main
from compose.cli.main import TopLevelCommand
from compose.cli.errors import ComposeFileNotFound
from six import StringIO
from compose.service import Service


class CLITestCase(unittest.TestCase):
@@ -103,6 +105,36 @@ class CLITestCase(unittest.TestCase):
        self.assertEqual(logging.getLogger().level, logging.DEBUG)
        self.assertEqual(logging.getLogger('requests').propagate, False)

    @mock.patch('compose.cli.main.dockerpty', autospec=True)
    def test_run_with_environment_merged_with_options_list(self, mock_dockerpty):
        command = TopLevelCommand()
        mock_client = mock.create_autospec(docker.Client)
        mock_project = mock.Mock()
        mock_project.get_service.return_value = Service(
            'service',
            client=mock_client,
            environment=['FOO=ONE', 'BAR=TWO'],
            image='someimage')

        command.run(mock_project, {
            'SERVICE': 'service',
            'COMMAND': None,
            '-e': ['BAR=NEW', 'OTHER=THREE'],
            '--user': None,
            '--no-deps': None,
            '--allow-insecure-ssl': None,
            '-d': True,
            '-T': None,
            '--entrypoint': None,
            '--service-ports': None,
            '--rm': None,
        })

        _, _, call_kwargs = mock_client.create_container.mock_calls[0]
        self.assertEqual(
            call_kwargs['environment'],
            {'FOO': 'ONE', 'BAR': 'NEW', 'OTHER': 'THREE'})


def get_config_filename_for_files(filenames):
    project_dir = tempfile.mkdtemp()
tests/unit/config_test.py (new file, 430 lines)
@@ -0,0 +1,430 @@
import os
import mock
from .. import unittest

from compose import config


class ConfigTest(unittest.TestCase):
    def test_from_dictionary(self):
        service_dicts = config.from_dictionary({
            'foo': {'image': 'busybox'},
            'bar': {'environment': ['FOO=1']},
        })

        self.assertEqual(
            sorted(service_dicts, key=lambda d: d['name']),
            sorted([
                {
                    'name': 'bar',
                    'environment': {'FOO': '1'},
                },
                {
                    'name': 'foo',
                    'image': 'busybox',
                }
            ])
        )

    def test_from_dictionary_throws_error_when_not_dict(self):
        with self.assertRaises(config.ConfigurationError):
            config.from_dictionary({
                'web': 'busybox:latest',
            })

    def test_config_validation(self):
        self.assertRaises(
            config.ConfigurationError,
            lambda: config.make_service_dict('foo', {'port': ['8000']})
        )
        config.make_service_dict('foo', {'ports': ['8000']})


class VolumePathTest(unittest.TestCase):
    @mock.patch.dict(os.environ)
    def test_volume_binding_with_environ(self):
        os.environ['VOLUME_PATH'] = '/host/path'
        d = config.make_service_dict('foo', {'volumes': ['${VOLUME_PATH}:/container/path']}, working_dir='.')
        self.assertEqual(d['volumes'], ['/host/path:/container/path'])

    @mock.patch.dict(os.environ)
    def test_volume_binding_with_home(self):
        os.environ['HOME'] = '/home/user'
        d = config.make_service_dict('foo', {'volumes': ['~:/container/path']}, working_dir='.')
        self.assertEqual(d['volumes'], ['/home/user:/container/path'])


class MergeVolumesTest(unittest.TestCase):
    def test_empty(self):
        service_dict = config.merge_service_dicts({}, {})
        self.assertNotIn('volumes', service_dict)

    def test_no_override(self):
        service_dict = config.merge_service_dicts(
            {'volumes': ['/foo:/code', '/data']},
            {},
        )
        self.assertEqual(set(service_dict['volumes']), set(['/foo:/code', '/data']))

    def test_no_base(self):
        service_dict = config.merge_service_dicts(
            {},
            {'volumes': ['/bar:/code']},
        )
        self.assertEqual(set(service_dict['volumes']), set(['/bar:/code']))

    def test_override_explicit_path(self):
        service_dict = config.merge_service_dicts(
            {'volumes': ['/foo:/code', '/data']},
            {'volumes': ['/bar:/code']},
        )
        self.assertEqual(set(service_dict['volumes']), set(['/bar:/code', '/data']))

    def test_add_explicit_path(self):
        service_dict = config.merge_service_dicts(
            {'volumes': ['/foo:/code', '/data']},
            {'volumes': ['/bar:/code', '/quux:/data']},
        )
        self.assertEqual(set(service_dict['volumes']), set(['/bar:/code', '/quux:/data']))

    def test_remove_explicit_path(self):
        service_dict = config.merge_service_dicts(
            {'volumes': ['/foo:/code', '/quux:/data']},
            {'volumes': ['/bar:/code', '/data']},
        )
        self.assertEqual(set(service_dict['volumes']), set(['/bar:/code', '/data']))

    def test_merge_build_or_image_no_override(self):
        self.assertEqual(
            config.merge_service_dicts({'build': '.'}, {}),
            {'build': '.'},
        )

        self.assertEqual(
            config.merge_service_dicts({'image': 'redis'}, {}),
            {'image': 'redis'},
        )

    def test_merge_build_or_image_override_with_same(self):
        self.assertEqual(
            config.merge_service_dicts({'build': '.'}, {'build': './web'}),
            {'build': './web'},
        )

        self.assertEqual(
            config.merge_service_dicts({'image': 'redis'}, {'image': 'postgres'}),
            {'image': 'postgres'},
        )

    def test_merge_build_or_image_override_with_other(self):
        self.assertEqual(
            config.merge_service_dicts({'build': '.'}, {'image': 'redis'}),
            {'image': 'redis'}
        )

        self.assertEqual(
            config.merge_service_dicts({'image': 'redis'}, {'build': '.'}),
            {'build': '.'}
        )


class MergeListsTest(unittest.TestCase):
    def test_empty(self):
        service_dict = config.merge_service_dicts({}, {})
        self.assertNotIn('ports', service_dict)

    def test_no_override(self):
        service_dict = config.merge_service_dicts(
            {'ports': ['10:8000', '9000']},
            {},
        )
        self.assertEqual(set(service_dict['ports']), set(['10:8000', '9000']))

    def test_no_base(self):
        service_dict = config.merge_service_dicts(
            {},
            {'ports': ['10:8000', '9000']},
        )
        self.assertEqual(set(service_dict['ports']), set(['10:8000', '9000']))

    def test_add_item(self):
        service_dict = config.merge_service_dicts(
            {'ports': ['10:8000', '9000']},
            {'ports': ['20:8000']},
        )
        self.assertEqual(set(service_dict['ports']), set(['10:8000', '9000', '20:8000']))


class MergeStringsOrListsTest(unittest.TestCase):
    def test_no_override(self):
        service_dict = config.merge_service_dicts(
            {'dns': '8.8.8.8'},
            {},
        )
        self.assertEqual(set(service_dict['dns']), set(['8.8.8.8']))

    def test_no_base(self):
        service_dict = config.merge_service_dicts(
            {},
            {'dns': '8.8.8.8'},
        )
        self.assertEqual(set(service_dict['dns']), set(['8.8.8.8']))

    def test_add_string(self):
        service_dict = config.merge_service_dicts(
            {'dns': ['8.8.8.8']},
            {'dns': '9.9.9.9'},
        )
        self.assertEqual(set(service_dict['dns']), set(['8.8.8.8', '9.9.9.9']))

    def test_add_list(self):
        service_dict = config.merge_service_dicts(
            {'dns': '8.8.8.8'},
            {'dns': ['9.9.9.9']},
        )
        self.assertEqual(set(service_dict['dns']), set(['8.8.8.8', '9.9.9.9']))


class EnvTest(unittest.TestCase):
    def test_parse_environment_as_list(self):
        environment = [
            'NORMAL=F1',
            'CONTAINS_EQUALS=F=2',
            'TRAILING_EQUALS=',
        ]
        self.assertEqual(
            config.parse_environment(environment),
            {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''},
        )

    def test_parse_environment_as_dict(self):
        environment = {
            'NORMAL': 'F1',
            'CONTAINS_EQUALS': 'F=2',
            'TRAILING_EQUALS': None,
        }
        self.assertEqual(config.parse_environment(environment), environment)

    def test_parse_environment_invalid(self):
        with self.assertRaises(config.ConfigurationError):
            config.parse_environment('a=b')

    def test_parse_environment_empty(self):
        self.assertEqual(config.parse_environment(None), {})

    @mock.patch.dict(os.environ)
    def test_resolve_environment(self):
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'

        service_dict = config.make_service_dict(
            'foo',
            {
                'environment': {
                    'FILE_DEF': 'F1',
                    'FILE_DEF_EMPTY': '',
                    'ENV_DEF': None,
                    'NO_DEF': None
                },
            },
        )

        self.assertEqual(
            service_dict['environment'],
            {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''},
        )

    def test_env_from_file(self):
        service_dict = config.make_service_dict(
            'foo',
            {'env_file': 'one.env'},
            'tests/fixtures/env',
        )
        self.assertEqual(
            service_dict['environment'],
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'bar'},
        )

    def test_env_from_multiple_files(self):
        service_dict = config.make_service_dict(
            'foo',
            {'env_file': ['one.env', 'two.env']},
            'tests/fixtures/env',
        )
        self.assertEqual(
            service_dict['environment'],
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'},
        )

    def test_env_nonexistent_file(self):
        options = {'env_file': 'nonexistent.env'}
        self.assertRaises(
            config.ConfigurationError,
            lambda: config.make_service_dict('foo', options, 'tests/fixtures/env'),
        )

    @mock.patch.dict(os.environ)
    def test_resolve_environment_from_file(self):
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'
        service_dict = config.make_service_dict(
            'foo',
            {'env_file': 'resolve.env'},
            'tests/fixtures/env',
        )
        self.assertEqual(
            service_dict['environment'],
            {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''},
        )


class ExtendsTest(unittest.TestCase):
    def test_extends(self):
        service_dicts = config.load('tests/fixtures/extends/docker-compose.yml')

        service_dicts = sorted(
            service_dicts,
            key=lambda sd: sd['name'],
        )

        self.assertEqual(service_dicts, [
            {
                'name': 'mydb',
                'image': 'busybox',
                'command': 'sleep 300',
            },
            {
                'name': 'myweb',
                'image': 'busybox',
                'command': 'sleep 300',
                'links': ['mydb:db'],
                'environment': {
                    "FOO": "1",
                    "BAR": "2",
                    "BAZ": "2",
                },
            }
        ])

    def test_nested(self):
        service_dicts = config.load('tests/fixtures/extends/nested.yml')

        self.assertEqual(service_dicts, [
            {
                'name': 'myweb',
                'image': 'busybox',
                'command': '/bin/true',
                'environment': {
                    "FOO": "2",
                    "BAR": "2",
                },
            },
        ])

    def test_circular(self):
        try:
            config.load('tests/fixtures/extends/circle-1.yml')
            raise Exception("Expected config.CircularReference to be raised")
        except config.CircularReference as e:
            self.assertEqual(
                [(os.path.basename(filename), service_name) for (filename, service_name) in e.trail],
                [
                    ('circle-1.yml', 'web'),
                    ('circle-2.yml', 'web'),
                    ('circle-1.yml', 'web'),
                ],
            )

    def test_extends_validation(self):
        dictionary = {'extends': None}
        load_config = lambda: config.make_service_dict('myweb', dictionary, working_dir='tests/fixtures/extends')

        self.assertRaisesRegexp(config.ConfigurationError, 'dictionary', load_config)

        dictionary['extends'] = {}
        self.assertRaises(config.ConfigurationError, load_config)

        dictionary['extends']['file'] = 'common.yml'
        self.assertRaisesRegexp(config.ConfigurationError, 'service', load_config)

        dictionary['extends']['service'] = 'web'
        self.assertIsInstance(load_config(), dict)

        dictionary['extends']['what'] = 'is this'
        self.assertRaisesRegexp(config.ConfigurationError, 'what', load_config)

    def test_blacklisted_options(self):
        def load_config():
            return config.make_service_dict('myweb', {
                'extends': {
                    'file': 'whatever',
                    'service': 'web',
                }
            }, '.')

        with self.assertRaisesRegexp(config.ConfigurationError, 'links'):
            other_config = {'web': {'links': ['db']}}

            with mock.patch.object(config, 'load_yaml', return_value=other_config):
                print load_config()

        with self.assertRaisesRegexp(config.ConfigurationError, 'volumes_from'):
            other_config = {'web': {'volumes_from': ['db']}}

            with mock.patch.object(config, 'load_yaml', return_value=other_config):
                print load_config()

        with self.assertRaisesRegexp(config.ConfigurationError, 'net'):
            other_config = {'web': {'net': 'container:db'}}

            with mock.patch.object(config, 'load_yaml', return_value=other_config):
                print load_config()

        other_config = {'web': {'net': 'host'}}

        with mock.patch.object(config, 'load_yaml', return_value=other_config):
            print load_config()

    def test_volume_path(self):
        dicts = config.load('tests/fixtures/volume-path/docker-compose.yml')

        paths = [
            '%s:/foo' % os.path.abspath('tests/fixtures/volume-path/common/foo'),
            '%s:/bar' % os.path.abspath('tests/fixtures/volume-path/bar'),
        ]

        self.assertEqual(set(dicts[0]['volumes']), set(paths))


class BuildPathTest(unittest.TestCase):
    def setUp(self):
        self.abs_context_path = os.path.join(os.getcwd(), 'tests/fixtures/build-ctx')

    def test_nonexistent_path(self):
        options = {'build': 'nonexistent.path'}
        self.assertRaises(
            config.ConfigurationError,
            lambda: config.make_service_dict('foo', options, 'tests/fixtures/build-path'),
        )

    def test_relative_path(self):
        relative_build_path = '../build-ctx/'
        service_dict = config.make_service_dict(
            'relpath',
            {'build': relative_build_path},
            working_dir='tests/fixtures/build-path'
        )
        self.assertEquals(service_dict['build'], self.abs_context_path)

    def test_absolute_path(self):
        service_dict = config.make_service_dict(
            'abspath',
            {'build': self.abs_context_path},
            working_dir='tests/fixtures/build-path'
        )
        self.assertEquals(service_dict['build'], self.abs_context_path)

    def test_from_file(self):
        service_dict = config.load('tests/fixtures/build-path/docker-compose.yml')
        self.assertEquals(service_dict, [{'name': 'foo', 'build': self.abs_context_path}])
@ -1,7 +1,12 @@
|
||||
from __future__ import unicode_literals
|
||||
from .. import unittest
|
||||
from compose.service import Service
|
||||
from compose.project import Project, ConfigurationError
|
||||
from compose.project import Project
|
||||
from compose.container import Container
|
||||
from compose import config
|
||||
|
||||
import mock
|
||||
import docker
|
||||
|
||||
class ProjectTest(unittest.TestCase):
|
||||
def test_from_dict(self):
|
||||
@ -45,26 +50,21 @@ class ProjectTest(unittest.TestCase):
|
||||
self.assertEqual(project.services[2].name, 'web')
|
||||
|
||||
def test_from_config(self):
|
||||
project = Project.from_config('composetest', {
|
||||
dicts = config.from_dictionary({
|
||||
'web': {
|
||||
'image': 'busybox:latest',
|
||||
},
|
||||
'db': {
|
||||
'image': 'busybox:latest',
|
||||
},
|
||||
}, None)
|
||||
})
|
||||
project = Project.from_dicts('composetest', dicts, None)
|
||||
self.assertEqual(len(project.services), 2)
|
||||
self.assertEqual(project.get_service('web').name, 'web')
|
||||
self.assertEqual(project.get_service('web').options['image'], 'busybox:latest')
|
||||
self.assertEqual(project.get_service('db').name, 'db')
|
||||
self.assertEqual(project.get_service('db').options['image'], 'busybox:latest')
|
||||
|
||||
def test_from_config_throws_error_when_not_dict(self):
|
||||
with self.assertRaises(ConfigurationError):
|
||||
project = Project.from_config('composetest', {
|
||||
'web': 'busybox:latest',
|
||||
}, None)
|
||||
|
||||
def test_get_service(self):
|
||||
web = Service(
|
||||
project='composetest',
|
||||
@ -120,7 +120,7 @@ class ProjectTest(unittest.TestCase):
|
||||
)
|
||||
project = Project('test', [web, db, cache, console], None)
|
||||
self.assertEqual(
|
||||
project.get_services(['console'], include_links=True),
|
||||
project.get_services(['console'], include_deps=True),
|
||||
[db, web, console]
|
||||
)
|
||||
|
||||
@ -136,6 +136,105 @@ class ProjectTest(unittest.TestCase):
|
||||
)
|
||||
project = Project('test', [web, db], None)
|
||||
self.assertEqual(
|
||||
project.get_services(['web', 'db'], include_links=True),
|
||||
project.get_services(['web', 'db'], include_deps=True),
|
||||
[db, web]
|
||||
)
|
||||
|
||||
def test_use_volumes_from_container(self):
|
||||
container_id = 'aabbccddee'
|
||||
container_dict = dict(Name='aaa', Id=container_id)
|
||||
mock_client = mock.create_autospec(docker.Client)
|
||||
mock_client.inspect_container.return_value = container_dict
|
||||
project = Project.from_dicts('test', [
|
||||
{
|
||||
'name': 'test',
|
||||
'image': 'busybox:latest',
|
||||
'volumes_from': ['aaa']
|
||||
}
|
||||
], mock_client)
|
||||
        self.assertEqual(project.get_service('test')._get_volumes_from(), [container_id])

    def test_use_volumes_from_service_no_container(self):
        container_name = 'test_vol_1'
        mock_client = mock.create_autospec(docker.Client)
        mock_client.containers.return_value = [
            {
                "Name": container_name,
                "Names": [container_name],
                "Id": container_name,
                "Image": 'busybox:latest'
            }
        ]
        project = Project.from_dicts('test', [
            {
                'name': 'vol',
                'image': 'busybox:latest'
            },
            {
                'name': 'test',
                'image': 'busybox:latest',
                'volumes_from': ['vol']
            }
        ], mock_client)
        self.assertEqual(project.get_service('test')._get_volumes_from(), [container_name])

    @mock.patch.object(Service, 'containers')
    def test_use_volumes_from_service_container(self, mock_return):
        container_ids = ['aabbccddee', '12345']
        mock_return.return_value = [
            mock.Mock(id=container_id, spec=Container)
            for container_id in container_ids]

        project = Project.from_dicts('test', [
            {
                'name': 'vol',
                'image': 'busybox:latest'
            },
            {
                'name': 'test',
                'image': 'busybox:latest',
                'volumes_from': ['vol']
            }
        ], None)
        self.assertEqual(project.get_service('test')._get_volumes_from(), container_ids)

    def test_use_net_from_container(self):
        container_id = 'aabbccddee'
        container_dict = dict(Name='aaa', Id=container_id)
        mock_client = mock.create_autospec(docker.Client)
        mock_client.inspect_container.return_value = container_dict
        project = Project.from_dicts('test', [
            {
                'name': 'test',
                'image': 'busybox:latest',
                'net': 'container:aaa'
            }
        ], mock_client)
        service = project.get_service('test')
        self.assertEqual(service._get_net(), 'container:' + container_id)

    def test_use_net_from_service(self):
        container_name = 'test_aaa_1'
        mock_client = mock.create_autospec(docker.Client)
        mock_client.containers.return_value = [
            {
                "Name": container_name,
                "Names": [container_name],
                "Id": container_name,
                "Image": 'busybox:latest'
            }
        ]
        project = Project.from_dicts('test', [
            {
                'name': 'aaa',
                'image': 'busybox:latest'
            },
            {
                'name': 'test',
                'image': 'busybox:latest',
                'net': 'container:aaa'
            }
        ], mock_client)

        service = project.get_service('test')
        self.assertEqual(service._get_net(), 'container:' + container_name)
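The tests above exercise how a `net: container:<service>` entry resolves to a concrete container reference: when the target names another service, the service's container name (or id) is substituted; otherwise the value is treated as a raw container reference. A minimal sketch of that rule, assuming a hypothetical `resolve_net` helper and a simple `service name -> container names` mapping (not Compose's actual implementation):

```python
# Hypothetical sketch: resolve a `net:` value to a 'container:<ref>' string.
# `services` maps service name -> list of container names/ids for that service.

def resolve_net(net, services):
    if not net or not net.startswith('container:'):
        return net  # 'bridge', 'host', 'none', or unset: pass through
    target = net[len('container:'):]
    if target in services:
        containers = services[target]
        if not containers:
            raise ValueError("service %r has no container to share net with" % target)
        # Use the first container of the target service, as the tests expect.
        return 'container:' + containers[0]
    # Not a known service: assume it names an external container directly.
    return 'container:' + target

# Mirrors test_use_net_from_service: 'aaa' is a service with one container.
print(resolve_net('container:aaa', {'aaa': ['test_aaa_1']}))  # container:test_aaa_1
```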
|
@@ -1,6 +1,5 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import os

from .. import unittest
import mock
@@ -11,14 +10,14 @@ from requests import Response
from compose import Service
from compose.container import Container
from compose.service import (
-    ConfigError,
-    split_port,
-    build_port_bindings,
-    parse_volume_spec,
-    build_volume_binding,
+    APIError,
+    ConfigError,
+    build_port_bindings,
+    build_volume_binding,
+    get_container_name,
+    parse_repository_tag,
+    parse_volume_spec,
+    split_port,
)

@@ -46,10 +45,6 @@ class ServiceTest(unittest.TestCase):
        self.assertRaises(ConfigError, lambda: Service(name='foo', project='_'))
        Service(name='foo', project='bar')

    def test_config_validation(self):
        self.assertRaises(ConfigError, lambda: Service(name='foo', port=['8000']))
        Service(name='foo', ports=['8000'])

    def test_get_container_name(self):
        self.assertIsNone(get_container_name({}))
        self.assertEqual(get_container_name({'Name': 'myproject_db_1'}), 'myproject_db_1')
@@ -305,95 +300,3 @@ class ServiceVolumesTest(unittest.TestCase):
        self.assertEqual(
            binding,
            ('/outside', dict(bind='/inside', ro=False)))

    @mock.patch.dict(os.environ)
    def test_build_volume_binding_with_environ(self):
        os.environ['VOLUME_PATH'] = '/opt'
        binding = build_volume_binding(parse_volume_spec('${VOLUME_PATH}:/opt'))
        self.assertEqual(binding, ('/opt', dict(bind='/opt', ro=False)))

    @mock.patch.dict(os.environ)
    def test_building_volume_binding_with_home(self):
        os.environ['HOME'] = '/home/user'
        binding = build_volume_binding(parse_volume_spec('~:/home/user'))
        self.assertEqual(
            binding,
            ('/home/user', dict(bind='/home/user', ro=False)))


class ServiceEnvironmentTest(unittest.TestCase):

    def setUp(self):
        self.mock_client = mock.create_autospec(docker.Client)
        self.mock_client.containers.return_value = []

    def test_parse_environment(self):
        service = Service('foo',
            environment=['NORMAL=F1', 'CONTAINS_EQUALS=F=2', 'TRAILING_EQUALS='],
            client=self.mock_client,
            image='image_name',
        )
        options = service._get_container_create_options({})
        self.assertEqual(
            options['environment'],
            {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}
        )

    @mock.patch.dict(os.environ)
    def test_resolve_environment(self):
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'
        service = Service('foo',
            environment={'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None},
            client=self.mock_client,
            image='image_name',
        )
        options = service._get_container_create_options({})
        self.assertEqual(
            options['environment'],
            {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}
        )

    def test_env_from_file(self):
        service = Service('foo',
            env_file='tests/fixtures/env/one.env',
            client=self.mock_client,
            image='image_name',
        )
        options = service._get_container_create_options({})
        self.assertEqual(
            options['environment'],
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'bar'}
        )

    def test_env_from_multiple_files(self):
        service = Service('foo',
            env_file=['tests/fixtures/env/one.env', 'tests/fixtures/env/two.env'],
            client=self.mock_client,
            image='image_name',
        )
        options = service._get_container_create_options({})
        self.assertEqual(
            options['environment'],
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'}
        )

    def test_env_nonexistent_file(self):
        self.assertRaises(ConfigError, lambda: Service('foo', env_file='tests/fixtures/env/nonexistent.env'))

    @mock.patch.dict(os.environ)
    def test_resolve_environment_from_file(self):
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'
        service = Service('foo',
            env_file=['tests/fixtures/env/resolve.env'],
            client=self.mock_client,
            image='image_name',
        )
        options = service._get_container_create_options({})
        self.assertEqual(
            options['environment'],
            {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}
        )
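The environment tests above encode a simple resolution rule: a value given in the config wins; a `None` value falls back to the variable of the same name in `os.environ`; a variable absent from `os.environ` resolves to the empty string. A minimal sketch of that rule, using a hypothetical `resolve_environment` helper (not the actual Compose function):

```python
import os

# Sketch of the resolution rule the tests encode: explicit values win,
# None falls back to os.environ, and missing env vars become ''.
def resolve_environment(environment):
    resolved = {}
    for key, value in environment.items():
        if value is None:
            value = os.environ.get(key, '')
        resolved[key] = str(value)
    return resolved

# Mirrors test_resolve_environment: ENV_DEF comes from the process env,
# NO_DEF is unset and so resolves to ''.
os.environ['ENV_DEF'] = 'E3'
print(resolve_environment(
    {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None}))
# {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}
```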
@@ -65,6 +65,95 @@ class SortServiceTest(unittest.TestCase):
        self.assertEqual(sorted_services[1]['name'], 'parent')
        self.assertEqual(sorted_services[2]['name'], 'grandparent')

    def test_sort_service_dicts_4(self):
        services = [
            {
                'name': 'child'
            },
            {
                'name': 'parent',
                'volumes_from': ['child']
            },
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 3)
        self.assertEqual(sorted_services[0]['name'], 'child')
        self.assertEqual(sorted_services[1]['name'], 'parent')
        self.assertEqual(sorted_services[2]['name'], 'grandparent')

    def test_sort_service_dicts_5(self):
        services = [
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
            {
                'name': 'parent',
                'net': 'container:child'
            },
            {
                'name': 'child'
            }
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 3)
        self.assertEqual(sorted_services[0]['name'], 'child')
        self.assertEqual(sorted_services[1]['name'], 'parent')
        self.assertEqual(sorted_services[2]['name'], 'grandparent')

    def test_sort_service_dicts_6(self):
        services = [
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
            {
                'name': 'parent',
                'volumes_from': ['child']
            },
            {
                'name': 'child'
            }
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 3)
        self.assertEqual(sorted_services[0]['name'], 'child')
        self.assertEqual(sorted_services[1]['name'], 'parent')
        self.assertEqual(sorted_services[2]['name'], 'grandparent')

    def test_sort_service_dicts_7(self):
        services = [
            {
                'net': 'container:three',
                'name': 'four'
            },
            {
                'links': ['two'],
                'name': 'three'
            },
            {
                'name': 'two',
                'volumes_from': ['one']
            },
            {
                'name': 'one'
            }
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 4)
        self.assertEqual(sorted_services[0]['name'], 'one')
        self.assertEqual(sorted_services[1]['name'], 'two')
        self.assertEqual(sorted_services[2]['name'], 'three')
        self.assertEqual(sorted_services[3]['name'], 'four')

    def test_sort_service_dicts_circular_imports(self):
        services = [
            {
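The `SortServiceTest` cases above expect services to be ordered so that every dependency (`links`, `volumes_from`, `net: container:<service>`) appears before the service that uses it, and circular references to be rejected. A minimal depth-first dependency sort with that behavior might look like this (a sketch under those assumptions, not the actual `sort_service_dicts` implementation):

```python
# Sketch of a dependency-first sort over service dicts, driven by the same
# keys the tests use: links, volumes_from, and net: container:<service>.

def get_deps(service):
    deps = set(service.get('links', []))
    deps.update(service.get('volumes_from', []))
    net = service.get('net', '')
    if net.startswith('container:'):
        deps.add(net[len('container:'):])
    return deps

def sort_service_dicts(services):
    by_name = {s['name']: s for s in services}
    ordered, visiting, done = [], set(), set()

    def visit(name):
        if name in done or name not in by_name:
            return  # already placed, or an external container reference
        if name in visiting:
            raise ValueError('circular dependency involving %r' % name)
        visiting.add(name)
        for dep in get_deps(by_name[name]):
            visit(dep)  # place dependencies first
        visiting.discard(name)
        done.add(name)
        ordered.append(by_name[name])

    for s in services:
        visit(s['name'])
    return ordered

# Mirrors test_sort_service_dicts_7:
services = [
    {'net': 'container:three', 'name': 'four'},
    {'links': ['two'], 'name': 'three'},
    {'name': 'two', 'volumes_from': ['one']},
    {'name': 'one'},
]
print([s['name'] for s in sort_service_dicts(services)])
# ['one', 'two', 'three', 'four']
```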