mirror of https://github.com/docker/compose.git
synced 2025-04-07 19:55:07 +02:00
commit 0de9a1b388

CHANGELOG.md
@ -1,6 +1,56 @@
Change log
==========

1.8.0 (2016-06-14)
------------------

New Features

- Added `docker-compose bundle`, a command that builds a bundle file
  to be consumed by the new *Docker Stack* commands in Docker 1.12.
  This command automatically pushes and pulls images as needed.

- Added `docker-compose push`, a command that pushes service images
  to a registry.

- As announced in 1.7.0, `docker-compose rm` now removes containers
  created by `docker-compose run` by default.

- Compose now supports specifying a custom TLS version for
  interaction with the Docker Engine using the `COMPOSE_TLS_VERSION`
  environment variable.

Bug Fixes

- Fixed a bug where Compose would erroneously try to read `.env`
  at the project's root when it is a directory.

- Improved config merging when multiple compose files are involved
  for several service sub-keys.

- Fixed a bug where volume mappings containing Windows drives would
  sometimes be parsed incorrectly.

- Fixed a bug in Windows environment where volume mappings of the
  host's root directory would be parsed incorrectly.

- Fixed a bug where `docker-compose config` would output an invalid
  Compose file if external networks were specified.

- Fixed an issue where unset buildargs would be assigned a string
  containing `'None'` instead of the expected empty value.

- Fixed a bug where yes/no prompts on Windows would not show before
  receiving input.

- Fixed a bug where trying to `docker-compose exec` on Windows
  without the `-d` option would exit with a stacktrace. This will
  still fail for the time being, but should do so gracefully.

- Fixed a bug where errors during `docker-compose up` would show
  an unrelated stacktrace at the end of the process.


1.7.1 (2016-05-04)
------------------

README.md
@ -22,16 +22,17 @@ they can be run together in an isolated environment:
A `docker-compose.yml` looks like this:

    web:
      build: .
      ports:
       - "5000:5000"
      volumes:
       - .:/code
      links:
       - redis
    redis:
      image: redis

    version: '2'

    services:
      web:
        build: .
        ports:
         - "5000:5000"
        volumes:
         - .:/code
      redis:
        image: redis

For more information about the Compose file, see the
[Compose file reference](https://github.com/docker/compose/blob/release/docs/compose-file.md)

ROADMAP.md
@ -1,13 +1,21 @@
# Roadmap

## An even better tool for development environments

Compose is a great tool for development environments, but it could be even better. For example:

- It should be possible to define hostnames for containers which work from the host machine, e.g. “mywebcontainer.local”. This is needed by apps comprising multiple web services which generate links to one another (e.g. a frontend website and a separate admin webapp)

## More than just development environments

Over time we will extend Compose's remit to cover test, staging and production environments. This is not a simple task, and will take many incremental improvements such as:
Compose currently works really well in development, but we want to make the Compose file format better for test, staging, and production environments. To support these use cases, there will need to be improvements to the file format, improvements to the command-line tool, integrations with other tools, and perhaps new tools altogether.

Some specific things we are considering:

- Compose currently will attempt to get your application into the correct state when running `up`, but it has a number of shortcomings:
  - It should roll back to a known good state if it fails.
  - It should allow a user to check the actions it is about to perform before running them.
- It should be possible to partially modify the config file for different environments (dev/test/staging/prod), passing in e.g. custom ports or volume mount paths. ([#1377](https://github.com/docker/compose/issues/1377))
- It should be possible to partially modify the config file for different environments (dev/test/staging/prod), passing in e.g. custom ports, volume mount paths, or volume drivers. ([#1377](https://github.com/docker/compose/issues/1377))
- Compose should recommend a technique for zero-downtime deploys.
- It should be possible to continuously attempt to keep an application in the correct state, instead of just performing `up` a single time.

@ -22,10 +30,3 @@ The current state of integration is documented in [SWARM.md](SWARM.md).

Compose works well for applications that are in a single repository and depend on services that are hosted on Docker Hub. If your application depends on another application within your organisation, Compose doesn't work as well.

There are several ideas about how this could work, such as [including external files](https://github.com/docker/fig/issues/318).

## An even better tool for development environments

Compose is a great tool for development environments, but it could be even better. For example:

- [Compose could watch your code and automatically kick off builds when something changes.](https://github.com/docker/fig/issues/184)
- It should be possible to define hostnames for containers which work from the host machine, e.g. “mywebcontainer.local”. This is needed by apps comprising multiple web services which generate links to one another (e.g. a frontend website and a separate admin webapp)
@ -1,4 +1,4 @@
from __future__ import absolute_import
from __future__ import unicode_literals

__version__ = '1.7.1'
__version__ = '1.8.0-rc1'

compose/bundle.py
Normal file
@ -0,0 +1,224 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import json
import logging

import six
from docker.utils import split_command
from docker.utils.ports import split_port

from .cli.errors import UserError
from .config.serialize import denormalize_config
from .network import get_network_defs_for_service
from .service import format_environment
from .service import NoSuchImageError
from .service import parse_repository_tag


log = logging.getLogger(__name__)


SERVICE_KEYS = {
    'working_dir': 'WorkingDir',
    'user': 'User',
    'labels': 'Labels',
}

IGNORED_KEYS = {'build'}

SUPPORTED_KEYS = {
    'image',
    'ports',
    'expose',
    'networks',
    'command',
    'environment',
    'entrypoint',
} | set(SERVICE_KEYS)

VERSION = '0.1'


def serialize_bundle(config, image_digests):
    if config.networks:
        log.warn("Unsupported top level key 'networks' - ignoring")

    if config.volumes:
        log.warn("Unsupported top level key 'volumes' - ignoring")

    return json.dumps(
        to_bundle(config, image_digests),
        indent=2,
        sort_keys=True,
    )


def get_image_digests(project):
    return {
        service.name: get_image_digest(service)
        for service in project.services
    }


def get_image_digest(service):
    if 'image' not in service.options:
        raise UserError(
            "Service '{s.name}' doesn't define an image tag. An image name is "
            "required to generate a proper image digest for the bundle. Specify "
            "an image repo and tag with the 'image' option.".format(s=service))

    repo, tag, separator = parse_repository_tag(service.options['image'])
    # Compose file already uses a digest, no lookup required
    if separator == '@':
        return service.options['image']

    try:
        image = service.image()
    except NoSuchImageError:
        action = 'build' if 'build' in service.options else 'pull'
        raise UserError(
            "Image not found for service '{service}'. "
            "You might need to run `docker-compose {action} {service}`."
            .format(service=service.name, action=action))

    if image['RepoDigests']:
        # TODO: pick a digest based on the image tag if there are multiple
        # digests
        return image['RepoDigests'][0]

    if 'build' not in service.options:
        log.warn(
            "Compose needs to pull the image for '{s.name}' in order to create "
            "a bundle. This may result in a more recent image being used. "
            "It is recommended that you use an image tagged with a "
            "specific version to minimize the potential "
            "differences.".format(s=service))
        digest = service.pull()
    else:
        try:
            digest = service.push()
        except:
            log.error(
                "Failed to push image for service '{s.name}'. Please use an "
                "image tag that can be pushed to a Docker "
                "registry.".format(s=service))
            raise

    if not digest:
        raise ValueError("Failed to get digest for %s" % service.name)

    identifier = '{repo}@{digest}'.format(repo=repo, digest=digest)

    # Pull by digest so that image['RepoDigests'] is populated for next time
    # and we don't have to pull/push again
    service.client.pull(identifier)

    return identifier


def to_bundle(config, image_digests):
    config = denormalize_config(config)

    return {
        'version': VERSION,
        'services': {
            name: convert_service_to_bundle(
                name,
                service_dict,
                image_digests[name],
            )
            for name, service_dict in config['services'].items()
        },
    }


def convert_service_to_bundle(name, service_dict, image_digest):
    container_config = {'Image': image_digest}

    for key, value in service_dict.items():
        if key in IGNORED_KEYS:
            continue

        if key not in SUPPORTED_KEYS:
            log.warn("Unsupported key '{}' in services.{} - ignoring".format(key, name))
            continue

        if key == 'environment':
            container_config['Env'] = format_environment({
                envkey: envvalue for envkey, envvalue in value.items()
                if envvalue
            })
            continue

        if key in SERVICE_KEYS:
            container_config[SERVICE_KEYS[key]] = value
            continue

    set_command_and_args(
        container_config,
        service_dict.get('entrypoint', []),
        service_dict.get('command', []))
    container_config['Networks'] = make_service_networks(name, service_dict)

    ports = make_port_specs(service_dict)
    if ports:
        container_config['Ports'] = ports

    return container_config


# See https://github.com/docker/swarmkit/blob//agent/exec/container/container.go#L95
def set_command_and_args(config, entrypoint, command):
    if isinstance(entrypoint, six.string_types):
        entrypoint = split_command(entrypoint)
    if isinstance(command, six.string_types):
        command = split_command(command)

    if entrypoint:
        config['Command'] = entrypoint + command
        return

    if command:
        config['Args'] = command
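The entrypoint/command merge above follows the swarmkit convention: an explicit entrypoint absorbs the command into `Command`, while a bare command goes into `Args`. A minimal standalone sketch, using `str.split` in place of `docker.utils.split_command` (so it only handles whitespace-separated commands):

```python
def set_command_and_args(config, entrypoint, command):
    # Simplified sketch: split plain strings into argument lists
    if isinstance(entrypoint, str):
        entrypoint = entrypoint.split()
    if isinstance(command, str):
        command = command.split()

    # An entrypoint takes precedence and swallows the command
    if entrypoint:
        config['Command'] = entrypoint + command
        return

    # A command alone becomes Args, overridable at runtime
    if command:
        config['Args'] = command

config = {}
set_command_and_args(config, 'python -m http.server', '8000')
assert config == {'Command': ['python', '-m', 'http.server', '8000']}

config = {}
set_command_and_args(config, [], 'redis-server --appendonly yes')
assert config == {'Args': ['redis-server', '--appendonly', 'yes']}
```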


def make_service_networks(name, service_dict):
    networks = []

    for network_name, network_def in get_network_defs_for_service(service_dict).items():
        for key in network_def.keys():
            log.warn(
                "Unsupported key '{}' in services.{}.networks.{} - ignoring"
                .format(key, name, network_name))

        networks.append(network_name)

    return networks


def make_port_specs(service_dict):
    ports = []

    internal_ports = [
        internal_port
        for port_def in service_dict.get('ports', [])
        for internal_port in split_port(port_def)[0]
    ]

    internal_ports += service_dict.get('expose', [])

    for internal_port in internal_ports:
        spec = make_port_spec(internal_port)
        if spec not in ports:
            ports.append(spec)

    return ports


def make_port_spec(value):
    components = six.text_type(value).partition('/')
    return {
        'Protocol': components[2] or 'tcp',
        'Port': int(components[0]),
    }
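`make_port_spec` accepts either a bare port (int or string) or a `port/protocol` string and defaults the protocol to TCP. A standalone copy, using `str` in place of `six.text_type`, illustrates both shapes:

```python
def make_port_spec(value):
    # partition('/') yields (port, '/', protocol) or (port, '', '')
    components = str(value).partition('/')
    return {
        'Protocol': components[2] or 'tcp',  # default protocol
        'Port': int(components[0]),
    }

assert make_port_spec(5000) == {'Protocol': 'tcp', 'Port': 5000}
assert make_port_spec('53/udp') == {'Protocol': 'udp', 'Port': 53}
```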
@ -4,6 +4,7 @@ from __future__ import unicode_literals
import logging
import os
import re
import ssl

import six

@ -35,6 +36,16 @@ def project_from_options(project_dir, options):
    )


def get_config_from_options(base_dir, options):
    environment = Environment.from_env_file(base_dir)
    config_path = get_config_path_from_options(
        base_dir, options, environment
    )
    return config.load(
        config.find(base_dir, config_path, environment)
    )


def get_config_path_from_options(base_dir, options, environment):
    file_option = options.get('--file')
    if file_option:
@ -46,10 +57,28 @@ def get_config_path_from_options(base_dir, options, environment):
    return None


def get_client(environment, verbose=False, version=None, tls_config=None, host=None):
def get_tls_version(environment):
    compose_tls_version = environment.get('COMPOSE_TLS_VERSION', None)
    if not compose_tls_version:
        return None

    tls_attr_name = "PROTOCOL_{}".format(compose_tls_version)
    if not hasattr(ssl, tls_attr_name):
        log.warn(
            'The {} protocol is unavailable. You may need to update your '
            'version of Python or OpenSSL. Falling back to TLSv1 (default).'
            .format(compose_tls_version)
        )
        return None

    return getattr(ssl, tls_attr_name)
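`get_tls_version` simply maps the `COMPOSE_TLS_VERSION` value onto the matching `ssl.PROTOCOL_*` constant. A simplified sketch of that lookup (without the warning on unknown names):

```python
import ssl

def get_tls_version(environment):
    # Map e.g. COMPOSE_TLS_VERSION=TLSv1_2 to ssl.PROTOCOL_TLSv1_2
    name = environment.get('COMPOSE_TLS_VERSION')
    if not name:
        return None
    return getattr(ssl, 'PROTOCOL_{}'.format(name), None)

assert get_tls_version({}) is None
assert get_tls_version({'COMPOSE_TLS_VERSION': 'TLSv1_2'}) == ssl.PROTOCOL_TLSv1_2
assert get_tls_version({'COMPOSE_TLS_VERSION': 'bogus'}) is None
```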


def get_client(environment, verbose=False, version=None, tls_config=None, host=None,
               tls_version=None):

    client = docker_client(
        version=version, tls_config=tls_config, host=host,
        environment=environment
        environment=environment, tls_version=get_tls_version(environment)
    )
    if verbose:
        version_info = six.iteritems(client.version())
@ -74,6 +103,7 @@ def get_project(project_dir, config_path=None, project_name=None, verbose=False,
    api_version = environment.get(
        'COMPOSE_API_VERSION',
        API_VERSIONS[config_data.version])

    client = get_client(
        verbose=verbose, version=api_version, tls_config=tls_config,
        host=host, environment=environment
|
@ -39,7 +39,8 @@ def tls_config_from_options(options):
    return None


def docker_client(environment, version=None, tls_config=None, host=None):
def docker_client(environment, version=None, tls_config=None, host=None,
                  tls_version=None):
    """
    Returns a docker-py client configured using environment variables
    according to the same logic as the official Docker client.
@ -49,7 +50,7 @@ def docker_client(environment, version=None, tls_config=None, host=None):
            "Please use COMPOSE_HTTP_TIMEOUT instead.")

    try:
        kwargs = kwargs_from_env(environment=environment)
        kwargs = kwargs_from_env(environment=environment, ssl_version=tls_version)
    except TLSParameterError:
        raise UserError(
            "TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY "
@ -14,10 +14,10 @@ from operator import attrgetter
from . import errors
from . import signals
from .. import __version__
from ..config import config
from ..bundle import get_image_digests
from ..bundle import serialize_bundle
from ..config import ConfigurationError
from ..config import parse_environment
from ..config.environment import Environment
from ..config.serialize import serialize_config
from ..const import DEFAULT_TIMEOUT
from ..const import IS_WINDOWS_PLATFORM
@ -30,7 +30,7 @@ from ..service import BuildError
from ..service import ConvergenceStrategy
from ..service import ImageType
from ..service import NeedsBuildError
from .command import get_config_path_from_options
from .command import get_config_from_options
from .command import project_from_options
from .docopt_command import DocoptDispatcher
from .docopt_command import get_handler
@ -98,7 +98,7 @@ def perform_command(options, handler, command_options):
        handler(command_options)
        return

    if options['COMMAND'] == 'config':
    if options['COMMAND'] in ('config', 'bundle'):
        command = TopLevelCommand(None)
        handler(command, options, command_options)
        return
@ -164,6 +164,7 @@ class TopLevelCommand(object):

    Commands:
      build              Build or rebuild services
      bundle             Generate a Docker bundle from the Compose file
      config             Validate and view the compose file
      create             Create services
      down               Stop and remove containers, networks, images, and volumes
@ -176,6 +177,7 @@ class TopLevelCommand(object):
      port               Print the public port for a port binding
      ps                 List containers
      pull               Pulls service images
      push               Push service images
      restart            Restart services
      rm                 Remove stopped containers
      run                Run a one-off command
@ -212,6 +214,34 @@ class TopLevelCommand(object):
            pull=bool(options.get('--pull', False)),
            force_rm=bool(options.get('--force-rm', False)))

    def bundle(self, config_options, options):
        """
        Generate a Docker bundle from the Compose file.

        Local images will be pushed to a Docker registry, and remote images
        will be pulled to fetch an image digest.

        Usage: bundle [options]

        Options:
            -o, --output PATH          Path to write the bundle file to.
                                       Defaults to "<project name>.dsb".
        """
        self.project = project_from_options('.', config_options)
        compose_config = get_config_from_options(self.project_dir, config_options)

        output = options["--output"]
        if not output:
            output = "{}.dsb".format(self.project.name)

        with errors.handle_connection_errors(self.project.client):
            image_digests = get_image_digests(self.project)

        with open(output, 'w') as f:
            f.write(serialize_bundle(compose_config, image_digests))

        log.info("Wrote bundle to {}".format(output))

    def config(self, config_options, options):
        """
        Validate and view the compose file.
@ -224,13 +254,7 @@ class TopLevelCommand(object):
            --services       Print the service names, one per line.

        """
        environment = Environment.from_env_file(self.project_dir)
        config_path = get_config_path_from_options(
            self.project_dir, config_options, environment
        )
        compose_config = config.load(
            config.find(self.project_dir, config_path, environment)
        )
        compose_config = get_config_from_options(self.project_dir, config_options)

        if options['--quiet']:
            return
@ -265,18 +289,29 @@

    def down(self, options):
        """
        Stop containers and remove containers, networks, volumes, and images
        created by `up`. Only containers and networks are removed by default.
        Stops containers and removes containers, networks, volumes, and images
        created by `up`.

        By default, the only things removed are:

        - Containers for services defined in the Compose file
        - Networks defined in the `networks` section of the Compose file
        - The default network, if one is used

        Networks and volumes defined as `external` are never removed.

        Usage: down [options]

        Options:
            --rmi type          Remove images, type may be one of: 'all' to remove
                                all images, or 'local' to remove only images that
                                don't have an custom name set by the `image` field
            -v, --volumes       Remove data volumes
            --remove-orphans    Remove containers for services not defined in
                                the Compose file
            --rmi type          Remove images. Type must be one of:
                                'all': Remove all images used by any service.
                                'local': Remove only images that don't have a custom tag
                                set by the `image` field.
            -v, --volumes       Remove named volumes declared in the `volumes` section
                                of the Compose file and anonymous volumes
                                attached to containers.
            --remove-orphans    Remove containers for services not defined in the
                                Compose file
        """
        image_type = image_type_from_opt('--rmi', options['--rmi'])
        self.project.down(image_type, options['--volumes'], options['--remove-orphans'])
@ -323,6 +358,13 @@ class TopLevelCommand(object):
        """
        index = int(options.get('--index'))
        service = self.project.get_service(options['SERVICE'])
        detach = options['-d']

        if IS_WINDOWS_PLATFORM and not detach:
            raise UserError(
                "Interactive mode is not yet supported on Windows.\n"
                "Please pass the -d flag when using `docker-compose exec`."
            )
        try:
            container = service.get_container(number=index)
        except ValueError as e:
@ -339,7 +381,7 @@ class TopLevelCommand(object):

        exec_id = container.create_exec(command, **create_exec_options)

        if options['-d']:
        if detach:
            container.start_exec(exec_id, tty=tty)
            return

@ -500,12 +542,26 @@ class TopLevelCommand(object):
            ignore_pull_failures=options.get('--ignore-pull-failures')
        )

    def push(self, options):
        """
        Pushes images for services.

        Usage: push [options] [SERVICE...]

        Options:
            --ignore-push-failures  Push what it can and ignores images with push failures.
        """
        self.project.push(
            service_names=options['SERVICE'],
            ignore_push_failures=options.get('--ignore-push-failures')
        )

    def rm(self, options):
        """
        Remove stopped service containers.
        Removes stopped service containers.

        By default, volumes attached to containers will not be removed. You can see all
        volumes with `docker volume ls`.
        By default, anonymous volumes attached to containers will not be removed. You
        can override this with `-v`. To list all volumes, use `docker volume ls`.

        Any data which is not in a volume will be lost.

@ -513,18 +569,16 @@

        Options:
            -f, --force   Don't ask to confirm removal
            -v            Remove volumes associated with containers
            -a, --all     Also remove one-off containers created by
            -v            Remove any anonymous volumes attached to containers
            -a, --all     Obsolete. Also remove one-off containers created by
                          docker-compose run
        """
        if options.get('--all'):
            one_off = OneOffFilter.include
        else:
            log.warn(
                'Not including one-off containers created by `docker-compose run`.\n'
                'To include them, use `docker-compose rm --all`.\n'
                'This will be the default behavior in the next version of Compose.\n')
            one_off = OneOffFilter.exclude
                '--all flag is obsolete. This is now the default behavior '
                'of `docker-compose rm`'
            )
        one_off = OneOffFilter.include

        all_containers = self.project.containers(
            service_names=options['SERVICE'], stopped=True, one_off=one_off
@ -6,9 +6,9 @@ import os
import platform
import ssl
import subprocess
import sys

import docker
from six.moves import input

import compose

@ -42,6 +42,16 @@ def yesno(prompt, default=None):
    return None


def input(prompt):
    """
    Version of input (raw_input in Python 2) which forces a flush of sys.stdout
    to avoid problems where the prompt fails to appear due to line buffering
    """
    sys.stdout.write(prompt)
    sys.stdout.flush()
    return sys.stdin.readline().rstrip('\n')


def call_silently(*args, **kwargs):
    """
    Like subprocess.call(), but redirects stdout and stderr to /dev/null.
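The `input` replacement above fixes the Windows prompt bug by flushing stdout before blocking on stdin. A testable sketch with injectable streams (`input_flushed` is a hypothetical name; the real function shadows the builtin and uses `sys.stdout`/`sys.stdin` directly):

```python
import io

def input_flushed(prompt, stdout, stdin):
    # Write and flush the prompt first so it is visible even when
    # stdout is line-buffered (the Windows yes/no prompt bug)
    stdout.write(prompt)
    stdout.flush()
    return stdin.readline().rstrip('\n')

out, inp = io.StringIO(), io.StringIO('yes\n')
assert input_flushed('Continue? ', out, inp) == 'yes'
assert out.getvalue() == 'Continue? '
```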
@ -3,7 +3,7 @@ from __future__ import unicode_literals

import functools
import logging
import operator
import ntpath
import os
import string
import sys
@ -748,13 +748,10 @@ def merge_service_dicts(base, override, version):
        md.merge_field(field, merge_path_mappings)

    for field in [
        'depends_on',
        'expose',
        'external_links',
        'ports',
        'volumes_from',
        'ports', 'cap_add', 'cap_drop', 'expose', 'external_links',
        'security_opt', 'volumes_from', 'depends_on',
    ]:
        md.merge_field(field, operator.add, default=[])
        md.merge_field(field, merge_unique_items_lists, default=[])

    for field in ['dns', 'dns_search', 'env_file', 'tmpfs']:
        md.merge_field(field, merge_list_or_string)
@ -770,6 +767,10 @@ def merge_service_dicts(base, override, version):
    return dict(md)


def merge_unique_items_lists(base, override):
    return sorted(set().union(base, override))
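This is the improved config-merging behavior from the changelog: instead of naively concatenating lists (`operator.add`), sub-keys like `ports` and `expose` are now merged as deduplicated, sorted sets:

```python
def merge_unique_items_lists(base, override):
    # Union the two lists, drop duplicates, and sort for stable output
    return sorted(set().union(base, override))

# Merging `expose` from a base file and an override file: the shared
# entry appears once, and the result is lexicographically sorted
assert merge_unique_items_lists(['80', '443'], ['443', '8080']) == ['443', '80', '8080']
```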


def merge_build(output, base, override):
    def to_dict(service):
        build_config = service.get('build', {})
@ -939,12 +940,13 @@ def split_path_mapping(volume_path):
    path. Using splitdrive so windows absolute paths won't cause issues with
    splitting on ':'.
    """
    # splitdrive has limitations when it comes to relative paths, so when it's
    # relative, handle special case to set the drive to ''
    if volume_path.startswith('.') or volume_path.startswith('~'):
    # splitdrive is very naive, so handle special cases where we can be sure
    # the first character is not a drive.
    if (volume_path.startswith('.') or volume_path.startswith('~') or
            volume_path.startswith('/')):
        drive, volume_config = '', volume_path
    else:
        drive, volume_config = os.path.splitdrive(volume_path)
        drive, volume_config = ntpath.splitdrive(volume_path)

    if ':' in volume_config:
        (host, container) = volume_config.split(':', 1)
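Switching from `os.path.splitdrive` to `ntpath.splitdrive` is what fixes the Windows drive-letter parsing: `ntpath` peels off `C:` on any platform, so the subsequent split on `:` only sees the host/container separator. A quick illustration:

```python
import ntpath

# The drive colon is consumed by splitdrive...
drive, rest = ntpath.splitdrive('C:\\projects\\app:/code')
assert (drive, rest) == ('C:', '\\projects\\app:/code')

# ...so the remaining colon cleanly separates host path from container path
host, container = rest.split(':', 1)
assert drive + host == 'C:\\projects\\app'
assert container == '/code'
```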
@ -28,6 +28,8 @@ def env_vars_from_file(filename):
    """
    if not os.path.exists(filename):
        raise ConfigurationError("Couldn't find env file: %s" % filename)
    elif not os.path.isfile(filename):
        raise ConfigurationError("%s is not a file." % (filename))
    env = {}
    for line in codecs.open(filename, 'r', 'utf-8'):
        line = line.strip()
@ -18,7 +18,7 @@ yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type)
yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type)


def serialize_config(config):
def denormalize_config(config):
    denormalized_services = [
        denormalize_service_dict(service_dict, config.version)
        for service_dict in config.services
@ -27,16 +27,22 @@ def serialize_config(config):
        service_dict.pop('name'): service_dict
        for service_dict in denormalized_services
    }
    networks = config.networks.copy()
    for net_name, net_conf in networks.items():
        if 'external_name' in net_conf:
            del net_conf['external_name']

    output = {
    return {
        'version': V2_0,
        'services': services,
        'networks': config.networks,
        'networks': networks,
        'volumes': config.volumes,
    }


def serialize_config(config):
    return yaml.safe_dump(
        output,
        denormalize_config(config),
        default_flow_style=False,
        indent=2,
        width=80)
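The `external_name` cleanup above is the `docker-compose config` fix from the changelog: the key is internal bookkeeping, not valid Compose file syntax, so it must be stripped before serializing. A hypothetical standalone version of just that step (`strip_external_names` is an illustrative name, not the real function):

```python
def strip_external_names(networks):
    # Copy each network config and drop the internal `external_name` key,
    # which is not valid in a serialized Compose file
    networks = {name: dict(conf) for name, conf in networks.items()}
    for conf in networks.values():
        conf.pop('external_name', None)
    return networks

nets = {'front': {'external': True, 'external_name': 'real_front'}}
assert strip_external_names(nets) == {'front': {'external': True}}
```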
@ -91,3 +91,22 @@ def print_output_event(event, stream, is_terminal):
        stream.write("%s%s" % (event['stream'], terminator))
    else:
        stream.write("%s%s\n" % (status, terminator))


def get_digest_from_pull(events):
    for event in events:
        status = event.get('status')
        if not status or 'Digest' not in status:
            continue

        _, digest = status.split(':', 1)
        return digest.strip()
    return None


def get_digest_from_push(events):
    for event in events:
        digest = event.get('aux', {}).get('Digest')
        if digest:
            return digest
    return None
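`get_digest_from_pull` scans the Engine's streamed status events for a `Digest: sha256:...` line and returns everything after the first colon. A standalone copy with a sample event stream:

```python
def get_digest_from_pull(events):
    for event in events:
        status = event.get('status')
        if not status or 'Digest' not in status:
            continue
        # "Digest: sha256:..." -> keep only the digest part
        _, digest = status.split(':', 1)
        return digest.strip()
    return None

# Assumed shape of docker-py pull events; the digest value is illustrative
events = [
    {'status': 'Pulling from library/redis'},
    {'status': 'Digest: sha256:0123abcd'},
    {'status': 'Status: Image is up to date'},
]
assert get_digest_from_pull(events) == 'sha256:0123abcd'
assert get_digest_from_pull([{'status': 'Pulling'}]) is None
```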
@ -440,6 +440,10 @@ class Project(object):
        for service in self.get_services(service_names, include_deps=False):
            service.pull(ignore_pull_failures)

    def push(self, service_names=None, ignore_push_failures=False):
        for service in self.get_services(service_names, include_deps=False):
            service.push(ignore_push_failures)

    def _labeled_containers(self, stopped=False, one_off=OneOffFilter.exclude):
        return list(filter(None, [
            Container.from_ps(self.client, container)
@ -539,4 +543,5 @@ class NoSuchService(Exception):


class ProjectError(Exception):
    pass
    def __init__(self, msg):
        self.msg = msg
@ -15,6 +15,7 @@ from docker.utils.ports import build_port_bindings
|
||||
from docker.utils.ports import split_port
|
||||
|
||||
from . import __version__
|
||||
from . import progress_stream
|
||||
from .config import DOCKER_CONFIG_KEYS
|
||||
from .config import merge_environment
|
||||
from .config.types import VolumeSpec
|
||||
@ -179,7 +180,7 @@ class Service(object):
|
||||
'Remove the custom name to scale the service.'
|
||||
% (self.name, self.custom_container_name))
|
||||
|
||||
if self.specifies_host_port():
|
||||
if self.specifies_host_port() and desired_num > 1:
|
||||
log.warn('The "%s" service specifies a port on the host. If multiple containers '
|
||||
'for this service are created on a single host, the port will clash.'
|
||||
% self.name)
|
||||
@ -806,20 +807,35 @@ class Service(object):
        repo, tag, separator = parse_repository_tag(self.options['image'])
        tag = tag or 'latest'
        log.info('Pulling %s (%s%s%s)...' % (self.name, repo, separator, tag))
        output = self.client.pull(
            repo,
            tag=tag,
            stream=True,
        )
        output = self.client.pull(repo, tag=tag, stream=True)

        try:
            stream_output(output, sys.stdout)
            return progress_stream.get_digest_from_pull(
                stream_output(output, sys.stdout))
        except StreamOutputError as e:
            if not ignore_pull_failures:
                raise
            else:
                log.error(six.text_type(e))

    def push(self, ignore_push_failures=False):
        if 'image' not in self.options or 'build' not in self.options:
            return

        repo, tag, separator = parse_repository_tag(self.options['image'])
        tag = tag or 'latest'
        log.info('Pushing %s (%s%s%s)...' % (self.name, repo, separator, tag))
        output = self.client.push(repo, tag=tag, stream=True)

        try:
            return progress_stream.get_digest_from_push(
                stream_output(output, sys.stdout))
        except StreamOutputError as e:
            if not ignore_push_failures:
                raise
            else:
                log.error(six.text_type(e))


def short_id_alias_exists(container, network):
    aliases = container.get(
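The pull/push code above relies on docker-py's `parse_repository_tag` to split the `image` value into a repository, a tag, and the separator used in the log message. The following is a simplified reimplementation for illustration only (the real docker-py function also handles digest references more fully); it is not the actual library code.

```python
def parse_repository_tag(repo_path):
    """Split 'repo:tag' or 'repo@digest' into (repo, tag, separator).

    An empty tag means the caller falls back to 'latest', as in the
    pull()/push() code above.
    """
    if '@' in repo_path:
        repo, _, tag = repo_path.rpartition('@')
        return repo, tag, '@'
    # Only treat the colon as a tag separator when it is not part of a
    # registry host:port prefix (i.e. there is no '/' after it).
    repo, sep, tag = repo_path.rpartition(':')
    if sep and '/' not in tag:
        return repo, tag, ':'
    return repo_path, '', ':'
```

For example, `parse_repository_tag('registry:5000/webapp')` leaves the repository intact with an empty tag, so the service would pull `registry:5000/webapp:latest`.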
@ -95,4 +95,4 @@ def microseconds_from_time_nano(time_nano):


def build_string_dict(source_dict):
    return dict((k, str(v)) for k, v in source_dict.items())
    return dict((k, str(v if v is not None else '')) for k, v in source_dict.items())
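The one-line change above is the fix for unset buildargs mentioned in the changelog: `str(None)` yields the literal string `'None'`, so `None` values are now mapped to the empty string instead. A self-contained sketch of the fixed behavior:

```python
def build_string_dict(source_dict):
    # Coerce every value to a string, but turn None into '' rather than
    # the literal string 'None'.
    return dict((k, str(v if v is not None else '')) for k, v in source_dict.items())
```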
@ -325,7 +325,7 @@ _docker_compose_restart() {
_docker_compose_rm() {
	case "$cur" in
		-*)
			COMPREPLY=( $( compgen -W "--all -a --force -f --help -v" -- "$cur" ) )
			COMPREPLY=( $( compgen -W "--force -f --help -v" -- "$cur" ) )
			;;
		*)
			__docker_compose_services_stopped
@ -281,7 +281,6 @@ __docker-compose_subcommand() {
        (rm)
            _arguments \
                $opts_help \
                '(-a --all)'{-a,--all}"[Also remove one-off containers]" \
                '(-f --force)'{-f,--force}"[Don't ask to confirm removal]" \
                '-v[Remove volumes associated with containers]' \
                '*:stopped services:__docker-compose_stoppedservices' && ret=0
@ -1,18 +1,8 @@
FROM docs/base:latest
MAINTAINER Mary Anthony <mary@docker.com> (@moxiegirl)

RUN svn checkout https://github.com/docker/docker/trunk/docs /docs/content/engine
RUN svn checkout https://github.com/docker/swarm/trunk/docs /docs/content/swarm
RUN svn checkout https://github.com/docker/machine/trunk/docs /docs/content/machine
RUN svn checkout https://github.com/docker/distribution/trunk/docs /docs/content/registry
RUN svn checkout https://github.com/docker/notary/trunk/docs /docs/content/notary
RUN svn checkout https://github.com/docker/kitematic/trunk/docs /docs/content/kitematic
RUN svn checkout https://github.com/docker/toolbox/trunk/docs /docs/content/toolbox
RUN svn checkout https://github.com/docker/opensource/trunk/docs /docs/content/project

FROM docs/base:oss
MAINTAINER Docker Docs <docs@docker.com>

ENV PROJECT=compose
# To get the git info for this repo
COPY . /src

RUN rm -rf /docs/content/$PROJECT/
COPY . /docs/content/$PROJECT/
@ -1,17 +1,4 @@
.PHONY: all binary build cross default docs docs-build docs-shell shell test test-unit test-integration test-integration-cli test-docker-py validate

# env vars passed through directly to Docker's build scripts
# to allow things like `make DOCKER_CLIENTONLY=1 binary` easily
# `docs/sources/contributing/devenvironment.md ` and `project/PACKAGERS.md` have some limited documentation of some of these
DOCKER_ENVS := \
	-e BUILDFLAGS \
	-e DOCKER_CLIENTONLY \
	-e DOCKER_EXECDRIVER \
	-e DOCKER_GRAPHDRIVER \
	-e TESTDIRS \
	-e TESTFLAGS \
	-e TIMEOUT
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds
.PHONY: all default docs docs-build docs-shell shell test

# to allow `make DOCSDIR=1 docs-shell` (to create a bind mount in docs)
DOCS_MOUNT := $(if $(DOCSDIR),-v $(CURDIR):/docs/content/compose)
@ -25,9 +12,8 @@ HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER
HUGO_BIND_IP=0.0.0.0

GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := docker$(if $(GIT_BRANCH),:$(GIT_BRANCH))
DOCKER_DOCS_IMAGE := docs-base$(if $(GIT_BRANCH),:$(GIT_BRANCH))

GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_DOCS_IMAGE := docker-docs$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))

DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE

@ -42,14 +28,11 @@ docs: docs-build
docs-draft: docs-build
	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --buildDrafts="true" --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)


docs-shell: docs-build
	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash

test: docs-build
	$(DOCKER_RUN_DOCS) "$(DOCKER_DOCS_IMAGE)"

docs-build:
#	( git remote | grep -v upstream ) || git diff --name-status upstream/release..upstream/docs ./ > ./changed-files
#	echo "$(GIT_BRANCH)" > GIT_BRANCH
#	echo "$(AWS_S3_BUCKET)" > AWS_S3_BUCKET
#	echo "$(GITCOMMIT)" > GITCOMMIT
	docker build -t "$(DOCKER_DOCS_IMAGE)" .
@ -59,13 +59,13 @@ optionally [dockerfile](#dockerfile) and [args](#args).
      args:
        buildno: 1

If you specify `image` as well as `build`, then Compose tags the built image
with the tag specified in `image`:
If you specify `image` as well as `build`, then Compose names the built image
with the `webapp` and optional `tag` specified in `image`:

    build: ./dir
    image: webapp
    image: webapp:tag

This will result in an image tagged `webapp`, built from `./dir`.
This will result in an image named `webapp` and tagged `tag`, built from `./dir`.

> **Note**: In the [version 1 file format](#version-1), `build` is different in
> two ways:
@ -502,9 +502,11 @@ the special form `service:[service name]`.
Networks to join, referencing entries under the
[top-level `networks` key](#network-configuration-reference).

    networks:
      - some-network
      - other-network
    services:
      some-service:
        networks:
         - some-network
         - other-network

#### aliases

@ -516,14 +518,16 @@ Since `aliases` is network-scoped, the same service can have different aliases o

The general format is shown here.

    networks:
      some-network:
        aliases:
         - alias1
         - alias3
      other-network:
        aliases:
         - alias2
    services:
      some-service:
        networks:
          some-network:
            aliases:
             - alias1
             - alias3
          other-network:
            aliases:
             - alias2

In the example below, three services are provided (`web`, `worker`, and `db`), along with two networks (`new` and `legacy`). The `db` service is reachable at the hostname `db` or `database` on the `new` network, and at `db` or `mysql` on the `legacy` network.
@ -1079,7 +1083,7 @@ It's more complicated if you're using particular configuration features:
      data: {}

By default, Compose creates a volume whose name is prefixed with your
project name. If you want it to just be called `data`, declared it as
project name. If you want it to just be called `data`, declare it as
external:

    volumes:
@ -1089,21 +1093,24 @@ It's more complicated if you're using particular configuration features:
## Variable substitution

Your configuration options can contain environment variables. Compose uses the
variable values from the shell environment in which `docker-compose` is run. For
example, suppose the shell contains `POSTGRES_VERSION=9.3` and you supply this
configuration:
variable values from the shell environment in which `docker-compose` is run.
For example, suppose the shell contains `EXTERNAL_PORT=8000` and you supply
this configuration:

    db:
      image: "postgres:${POSTGRES_VERSION}"
    web:
      build: .
      ports:
        - "${EXTERNAL_PORT}:5000"

When you run `docker-compose up` with this configuration, Compose looks for the
`POSTGRES_VERSION` environment variable in the shell and substitutes its value
in. For this example, Compose resolves the `image` to `postgres:9.3` before
running the configuration.
When you run `docker-compose up` with this configuration, Compose looks for
the `EXTERNAL_PORT` environment variable in the shell and substitutes its
value in. In this example, Compose resolves the port mapping to `"8000:5000"`
before creating the `web` container.

If an environment variable is not set, Compose substitutes with an empty
string. In the example above, if `POSTGRES_VERSION` is not set, the value for
the `image` option is `postgres:`.
string. In the example above, if `EXTERNAL_PORT` is not set, the value for the
port mapping is `:5000` (which is of course an invalid port mapping, and will
result in an error when attempting to create the container).

Both `$VARIABLE` and `${VARIABLE}` syntax are supported. Extended shell-style
features, such as `${VARIABLE-default}` and `${VARIABLE/foo/bar}`, are not
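The substitution rules described above can be sketched in a few lines. This is an illustration of the documented behavior, not Compose's actual implementation (which is based on Python's `string.Template`): both `$VARIABLE` and `${VARIABLE}` are replaced, and unset variables become the empty string.

```python
import re

def substitute(value, environment):
    """Replace $VAR and ${VAR} in value with entries from environment.

    Unset variables are substituted with the empty string, matching the
    documented Compose behavior.
    """
    pattern = re.compile(r'\$(?:(\w+)|\{(\w+)\})')

    def replace(match):
        name = match.group(1) or match.group(2)
        return environment.get(name, '')

    return pattern.sub(replace, value)
```

With `POSTGRES_VERSION=9.3` set, `substitute('postgres:${POSTGRES_VERSION}', env)` resolves to `postgres:9.3`; with nothing set, `'${EXTERNAL_PORT}:5000'` collapses to the invalid mapping `:5000`, exactly as the text above warns.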
@ -15,7 +15,7 @@ weight=4
This quick-start guide demonstrates how to use Docker Compose to set up and run a simple Django/PostgreSQL app. Before starting, you'll need to have
[Compose installed](install.md).

## Define the project components
### Define the project components

For this project, you need to create a Dockerfile, a Python dependencies file,
and a `docker-compose.yml` file.
@ -29,8 +29,8 @@ and a `docker-compose.yml` file.
The Dockerfile defines an application's image content via one or more build
commands that configure that image. Once built, you can run the image in a
container. For more information on `Dockerfiles`, see the [Docker user
guide](https://docs.docker.com/engine/userguide/dockerimages/#building-an-image-from-a-dockerfile)
and the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).
guide](/engine/userguide/containers/dockerimages.md#building-an-image-from-a-dockerfile)
and the [Dockerfile reference](/engine/reference/builder.md).

3. Add the following content to the `Dockerfile`.

@ -89,7 +89,7 @@ and a `docker-compose.yml` file.

10. Save and close the `docker-compose.yml` file.

## Create a Django project
### Create a Django project

In this step, you create a Django starter project by building the image from the build context defined in the previous procedure.

@ -137,7 +137,7 @@ In this step, you create a Django starter project by building the image from the
    -rw-r--r--  1 user  staff    16 Feb 13 23:01 requirements.txt


## Connect the database
### Connect the database

In this section, you set up the database connection for Django.
107
docs/environment-variables.md
Normal file
@ -0,0 +1,107 @@
<!--[metadata]>
+++
title = "Environment variables in Compose"
description = "How to set, use and manage environment variables in Compose"
keywords = ["fig, composition, compose, docker, orchestration, environment, variables, env file"]
[menu.main]
parent = "workw_compose"
weight=10
+++
<![end-metadata]-->

# Environment variables in Compose

There are multiple parts of Compose that deal with environment variables in one sense or another. This page should help you find the information you need.


## Substituting environment variables in Compose files

It's possible to use environment variables in your shell to populate values inside a Compose file:

    web:
      image: "webapp:${TAG}"

For more information, see the [Variable substitution](compose-file.md#variable-substitution) section in the Compose file reference.


## Setting environment variables in containers

You can set environment variables in a service's containers with the ['environment' key](compose-file.md#environment), just like with `docker run -e VARIABLE=VALUE ...`:

    web:
      environment:
        - DEBUG=1


## Passing environment variables through to containers

You can pass environment variables from your shell straight through to a service's containers with the ['environment' key](compose-file.md#environment) by not giving them a value, just like with `docker run -e VARIABLE ...`:

    web:
      environment:
        - DEBUG

The value of the `DEBUG` variable in the container will be taken from the value for the same variable in the shell in which Compose is run.


## The “env_file” configuration option

You can pass multiple environment variables from an external file through to a service's containers with the ['env_file' option](compose-file.md#env-file), just like with `docker run --env-file=FILE ...`:

    web:
      env_file:
        - web-variables.env


## Setting environment variables with 'docker-compose run'

Just like with `docker run -e`, you can set environment variables on a one-off container with `docker-compose run -e`:

    $ docker-compose run -e DEBUG=1 web python console.py

You can also pass a variable through from the shell by not giving it a value:

    $ docker-compose run -e DEBUG web python console.py

The value of the `DEBUG` variable in the container will be taken from the value for the same variable in the shell in which Compose is run.


## The “.env” file

You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an [environment file](env-file.md) named `.env`:

    $ cat .env
    TAG=v1.5

    $ cat docker-compose.yml
    version: '2.0'
    services:
      web:
        image: "webapp:${TAG}"

When you run `docker-compose up`, the `web` service defined above uses the image `webapp:v1.5`. You can verify this with the [config command](reference/config.md), which prints your resolved application config to the terminal:

    $ docker-compose config
    version: '2.0'
    services:
      web:
        image: 'webapp:v1.5'

Values in the shell take precedence over those specified in the `.env` file. If you set `TAG` to a different value in your shell, the substitution in `image` uses that instead:

    $ export TAG=v2.0

    $ docker-compose config
    version: '2.0'
    services:
      web:
        image: 'webapp:v2.0'

## Configuring Compose using environment variables

Several environment variables are available for you to configure the Docker Compose command-line behaviour. They begin with `COMPOSE_` or `DOCKER_`, and are documented in [CLI Environment Variables](reference/envvars.md).


## Environment variables created by links

When using the ['links' option](compose-file.md#links) in a [v1 Compose file](compose-file.md#version-1), environment variables will be created for each link. They are documented in the [Link environment variables reference](link-env-deprecated.md). Please note, however, that these variables are deprecated - you should just use the link alias as a hostname instead.
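The precedence rule documented in the new page above (shell values win over `.env` defaults) boils down to a simple dictionary merge. This is an illustrative sketch under that stated rule, not Compose's actual environment-loading code.

```python
def resolve_environment(dotenv_values, shell_environment):
    """Merge .env defaults with the shell environment.

    Entries from the shell environment take precedence over defaults
    read from the .env file, matching the documented behavior.
    """
    resolved = dict(dotenv_values)
    resolved.update(shell_environment)
    return resolved
```

For example, with `TAG=v1.5` in `.env` and `TAG=v2.0` exported in the shell, the resolved value is `v2.0` and `image: "webapp:${TAG}"` becomes `webapp:v2.0`.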
@ -77,7 +77,7 @@ dependencies the Python application requires, including Python itself.
* Install the Python dependencies.
* Set the default command for the container to `python app.py`

For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/engine/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
For more information on how to write Dockerfiles, see the [Docker user guide](/engine/userguide/containers/dockerimages.md#building-an-image-from-a-dockerfile) and the [Dockerfile reference](/engine/reference/builder.md).

2. Build the image.

@ -137,8 +137,8 @@ The `redis` service uses the latest public [Redis](https://registry.hub.docker.c
2. Enter `http://0.0.0.0:5000/` in a browser to see the application running.

   If you're using Docker on Linux natively, then the web app should now be
   listening on port 5000 on your Docker daemon host. If http://0.0.0.0:5000
   doesn't resolve, you can also try http://localhost:5000.
   listening on port 5000 on your Docker daemon host. If `http://0.0.0.0:5000`
   doesn't resolve, you can also try `http://localhost:5000`.

   If you're using Docker Machine on a Mac, use `docker-machine ip MACHINE_VM` to get
   the IP address of your Docker host. Then, `open http://MACHINE_VM_IP:5000` in a
@ -39,7 +39,7 @@ which the release page specifies, in your terminal.

The following is an example command illustrating the format:

    curl -L https://github.com/docker/compose/releases/download/1.7.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    curl -L https://github.com/docker/compose/releases/download/1.8.0-rc1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose

If you have problems installing with `curl`, see
[Alternative Install Options](#alternative-install-options).
@ -54,7 +54,7 @@ which the release page specifies, in your terminal.
7. Test the installation.

       $ docker-compose --version
       docker-compose version: 1.7.1
       docker-compose version: 1.8.0-rc1


## Alternative install options
@ -77,7 +77,7 @@ to get started.
Compose can also be run inside a container, from a small bash script wrapper.
To install Compose as a container run:

    $ curl -L https://github.com/docker/compose/releases/download/1.7.1/run.sh > /usr/local/bin/docker-compose
    $ curl -L https://github.com/docker/compose/releases/download/1.8.0-rc1/run.sh > /usr/local/bin/docker-compose
    $ chmod +x /usr/local/bin/docker-compose

## Master builds
@ -16,7 +16,9 @@ weight=89
>
> Environment variables will only be populated if you're using the [legacy version 1 Compose file format](compose-file.md#versioning).

Compose uses [Docker links] to expose services' containers to one another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container.
Compose uses [Docker links](/engine/userguide/networking/default_network/dockerlinks.md)
to expose services' containers to one another. Each linked container injects a set of
environment variables, each of which begins with the uppercase name of the container.

To see what environment variables are available to a service, run `docker-compose run SERVICE env`.

@ -38,8 +40,6 @@ Protocol (tcp or udp), e.g. `DB_PORT_5432_TCP_PROTO=tcp`
<b><i>name</i>\_NAME</b><br>
Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1`

[Docker links]: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/

## Related Information

- [User guide](index.md)
@ -159,8 +159,8 @@ and destroy isolated testing environments for your test suite. By defining the f

Compose has traditionally been focused on development and testing workflows,
but with each release we're making progress on more production-oriented features. You can use Compose to deploy to a remote Docker Engine. The Docker Engine may be a single instance provisioned with
[Docker Machine](https://docs.docker.com/machine/) or an entire
[Docker Swarm](https://docs.docker.com/swarm/) cluster.
[Docker Machine](/machine/overview.md) or an entire
[Docker Swarm](/swarm/overview.md) cluster.

For details on using production-oriented features, see
[compose in production](production.md) in this documentation.
@ -65,7 +65,7 @@ recreating any services which `web` depends on.

You can use Compose to deploy an app to a remote Docker host by setting the
`DOCKER_HOST`, `DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` environment variables
appropriately. For tasks like this,
[Docker Machine](/machine/overview) makes managing local and
[Docker Machine](/machine/overview.md) makes managing local and
remote Docker hosts very easy, and is recommended even if you're not deploying
remotely.

@ -74,7 +74,7 @@ commands will work with no further configuration.

### Running Compose on a Swarm cluster

[Docker Swarm](/swarm/overview), a Docker-native clustering
[Docker Swarm](/swarm/overview.md), a Docker-native clustering
system, exposes the same API as a single Docker host, which means you can use
Compose against a Swarm instance and run your apps across multiple hosts.
@ -22,7 +22,7 @@ container. This is done using a file called `Dockerfile`. To begin with, the
Dockerfile consists of:

    FROM ruby:2.2.0
    RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
    RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
    RUN mkdir /myapp
    WORKDIR /myapp
    ADD Gemfile /myapp/Gemfile
@ -32,7 +32,7 @@ Dockerfile consists of:

That'll put your application code inside an image that will build a container
with Ruby, Bundler and all your dependencies inside it. For more information on
how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/engine/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).
how to write Dockerfiles, see the [Docker user guide](/engine/userguide/containers/dockerimages.md#building-an-image-from-a-dockerfile) and the [Dockerfile reference](/engine/reference/builder.md).

Next, create a bootstrap `Gemfile` which just loads Rails. It'll be overwritten in a moment by `rails new`.

@ -152,7 +152,7 @@ Finally, you need to create the database. In another terminal, run:

    $ docker-compose run web rake db:create

That's it. Your app should now be running on port 3000 on your Docker daemon. If you're using [Docker Machine](https://docs.docker.com/machine/), then `docker-machine ip MACHINE_VM` returns the Docker host IP address.
That's it. Your app should now be running on port 3000 on your Docker daemon. If you're using [Docker Machine](/machine/overview.md), then `docker-machine ip MACHINE_VM` returns the Docker host IP address.

@ -12,17 +12,27 @@ parent = "smn_compose_cli"

# down

```
Stop containers and remove containers, networks, volumes, and images
created by `up`. Only containers and networks are removed by default.

Usage: down [options]

Options:
    --rmi type          Remove images, type may be one of: 'all' to remove
                        all images, or 'local' to remove only images that
                        don't have a custom name set by the `image` field
    -v, --volumes       Remove data volumes

    --rmi type          Remove images. Type must be one of:
                          'all': Remove all images used by any service.
                          'local': Remove only images that don't have a
                          custom tag set by the `image` field.
    -v, --volumes       Remove named volumes declared in the `volumes` section
                        of the Compose file and anonymous volumes
                        attached to containers.
    --remove-orphans    Remove containers for services not defined in the
                        Compose file
```

Stops containers and removes containers, networks, volumes, and images
created by `up`.

By default, the only things removed are:

- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used

Networks and volumes defined as `external` are never removed.
@ -78,6 +78,11 @@ Configures the path to the `ca.pem`, `cert.pem`, and `key.pem` files used for TL
Configures the time (in seconds) a request to the Docker daemon is allowed to hang before Compose considers
it failed. Defaults to 60 seconds.

## COMPOSE\_TLS\_VERSION

Configure which TLS version is used for TLS communication with the `docker`
daemon. Defaults to `TLSv1`.
Supported values are: `TLSv1`, `TLSv1_1`, `TLSv1_2`.
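The `COMPOSE_TLS_VERSION` variable documented above accepts exactly three string values. A small sketch of how a client might validate and apply the documented default; this is an illustration of the documented contract, not Compose's internal TLS-configuration code.

```python
SUPPORTED_TLS_VERSIONS = ('TLSv1', 'TLSv1_1', 'TLSv1_2')

def pick_tls_version(environ):
    """Return the TLS version to use, defaulting to 'TLSv1'.

    Raises ValueError for values outside the documented set.
    """
    value = environ.get('COMPOSE_TLS_VERSION', 'TLSv1')
    if value not in SUPPORTED_TLS_VERSIONS:
        raise ValueError('Unsupported TLS version: %s' % value)
    return value
```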
## Related Information
@ -15,14 +15,15 @@ parent = "smn_compose_cli"
Usage: rm [options] [SERVICE...]

Options:
    -f, --force   Don't ask to confirm removal
    -v            Remove volumes associated with containers
    -a, --all     Also remove one-off containers
    -f, --force   Don't ask to confirm removal
    -v            Remove any anonymous volumes attached to containers
    -a, --all     Also remove one-off containers created by
                  docker-compose run
```

Removes stopped service containers.

By default, volumes attached to containers will not be removed. You can see all
volumes with `docker volume ls`.
By default, anonymous volumes attached to containers will not be removed. You
can override this with `-v`. To list all volumes, use `docker volume ls`.

Any data which is not in a volume will be lost.
@ -11,7 +11,7 @@ parent="workw_compose"

# Using Compose with Swarm

Docker Compose and [Docker Swarm](/swarm/overview) aim to have full integration, meaning
Docker Compose and [Docker Swarm](/swarm/overview.md) aim to have full integration, meaning
you can point a Compose app at a Swarm cluster and have it all just work as if
you were using a single Docker host.

@ -30,7 +30,7 @@ format](compose-file.md#versioning) you are using:
or a custom driver which supports multi-host networking.

Read [Get started with multi-host networking](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) to see how to
set up a Swarm cluster with [Docker Machine](/machine/overview) and the overlay driver. Once you've got it running, deploying your app to it should be as simple as:
set up a Swarm cluster with [Docker Machine](/machine/overview.md) and the overlay driver. Once you've got it running, deploying your app to it should be as simple as:

    $ eval "$(docker-machine env --swarm <name of swarm master machine>)"
    $ docker-compose up
@ -16,13 +16,13 @@ You can use Docker Compose to easily run WordPress in an isolated environment bu
with Docker containers. This quick-start guide demonstrates how to use Compose to set up and run WordPress. Before starting, you'll need to have
[Compose installed](install.md).

## Define the project
### Define the project

1. Create an empty project directory.

   You can name the directory something easy for you to remember. This directory is the context for your application image. The directory should only contain resources to build that image.

   This project directory will contain a `Dockerfile`, a `docker-compose.yaml` file, along with a downloaded `wordpress` directory and a custom `wp-config.php`, all of which you will create in the following steps.
   This project directory will contain a `docker-compose.yaml` file which will be complete in itself for a good starter WordPress project.

2. Change directories into your project directory.
@ -30,109 +30,72 @@ with Docker containers. This quick-start guide demonstrates how to use Compose t
|
||||
|
||||
$ cd my-wordpress/
|
||||
|
||||
3. Create a `Dockerfile`, a file that defines the environment in which your application will run.
|
||||
|
||||
For more information on how to write Dockerfiles, see the [Docker Engine user guide](https://docs.docker.com/engine/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).
|
||||
|
||||
In this case, your Dockerfile should include these two lines:
|
||||
|
||||
FROM orchardup/php5
|
||||
ADD . /code
|
||||
|
||||
This tells the Docker Engine daemon how to build an image defining a container that contains PHP and WordPress.
|
||||
|
||||
4. Create a `docker-compose.yml` file that will start your web service and a separate MySQL instance:
|
||||
3. Create a `docker-compose.yml` file that will start your `Wordpress` blog and a separate `MySQL` instance with a volume mount for data persistence:
|
||||
|
||||
version: '2'
|
||||
services:
|
||||
web:
|
||||
build: .
|
||||
command: php -S 0.0.0.0:8000 -t /code/wordpress/
|
||||
ports:
|
||||
- "8000:8000"
|
||||
db:
|
||||
image: mysql:5.7
|
||||
volumes:
|
||||
- "./.data/db:/var/lib/mysql"
|
||||
restart: always
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: wordpress
|
||||
MYSQL_DATABASE: wordpress
|
||||
MYSQL_USER: wordpress
|
||||
MYSQL_PASSWORD: wordpress
|
||||
|
||||
wordpress:
|
||||
depends_on:
|
||||
- db
|
||||
volumes:
|
||||
- .:/code
|
||||
db:
|
||||
image: orchardup/mysql
|
||||
image: wordpress:latest
|
||||
links:
|
||||
- db
|
||||
ports:
|
||||
- "8000:80"
|
||||
restart: always
|
||||
environment:
|
||||
MYSQL_DATABASE: wordpress
|
||||
WORDPRESS_DB_HOST: db:3306
|
||||
WORDPRESS_DB_PASSWORD: wordpress
|
||||
|
||||
5. Download WordPress into the current directory:
|
||||
|
||||
$ curl https://wordpress.org/latest.tar.gz | tar -xvzf -
|
||||
|
||||
This creates a directory called `wordpress` in your project directory.
|
||||
|
||||
6. Create a `wp-config.php` file within the `wordpress` directory.
|
||||
|
||||
A supporting file is needed to get this working. At the top level of the wordpress directory, add a new file called `wp-config.php` as shown. This is the standard WordPress config file with a single change to point the database configuration at the `db` container:
|
||||
|
||||
        <?php
        define('DB_NAME', 'wordpress');
        define('DB_USER', 'root');
        define('DB_PASSWORD', '');
        define('DB_HOST', "db:3306");
        define('DB_CHARSET', 'utf8');
        define('DB_COLLATE', '');

        define('AUTH_KEY',         'put your unique phrase here');
        define('SECURE_AUTH_KEY',  'put your unique phrase here');
        define('LOGGED_IN_KEY',    'put your unique phrase here');
        define('NONCE_KEY',        'put your unique phrase here');
        define('AUTH_SALT',        'put your unique phrase here');
        define('SECURE_AUTH_SALT', 'put your unique phrase here');
        define('LOGGED_IN_SALT',   'put your unique phrase here');
        define('NONCE_SALT',       'put your unique phrase here');

        $table_prefix = 'wp_';
        define('WPLANG', '');
        define('WP_DEBUG', false);

        if ( !defined('ABSPATH') )
            define('ABSPATH', dirname(__FILE__) . '/');

        require_once(ABSPATH . 'wp-settings.php');
        ?>
7. Verify the contents and structure of your project directory.

    <!--
        Dockerfile
        docker-compose.yaml
        wordpress/
            index.php
            license.txt
            readme.html
            wp-activate.php
            wp-admin/
            wp-blog-header.php
            wp-comments-post.php
            wp-config-sample.php
            wp-config.php
            wp-content/
            wp-cron.php
            wp-includes/
            wp-links-opml.php
            wp-load.php
            wp-login.php
            wp-mail.php
            wp-settings.php
            wp-signup.php
            wp-trackback.php
            xmlrpc.php
    -->

    ![WordPress files](images/wordpress-files.png)
**NOTE**: The folder `./.data/db` will be automatically created in the project directory alongside the `docker-compose.yml`, and will persist any updates made by WordPress to the database.
### Build the project

Now, run `docker-compose up -d` from your project directory.

This pulls the needed images, and starts the wordpress and database containers, as shown in the example below.
    $ docker-compose up -d
    Creating network "my_wordpress_default" with the default driver
    Pulling db (mysql:5.7)...
    5.7: Pulling from library/mysql
    efd26ecc9548: Pull complete
    a3ed95caeb02: Pull complete
    ...
    Digest: sha256:34a0aca88e85f2efa5edff1cea77cf5d3147ad93545dbec99cfe705b03c520de
    Status: Downloaded newer image for mysql:5.7
    Pulling wordpress (wordpress:latest)...
    latest: Pulling from library/wordpress
    efd26ecc9548: Already exists
    a3ed95caeb02: Pull complete
    589a9d9a7c64: Pull complete
    ...
    Digest: sha256:ed28506ae44d5def89075fd5c01456610cd6c64006addfe5210b8c675881aff6
    Status: Downloaded newer image for wordpress:latest
    Creating my_wordpress_db_1
    Creating my_wordpress_wordpress_1
### Bring up WordPress in a web browser

If you're using [Docker Machine](https://docs.docker.com/machine/), then `docker-machine ip MACHINE_VM` gives you the machine address and you can open `http://MACHINE_VM_IP:8000` in a browser.

At this point, WordPress should be running on port `8000` of your Docker host, and you can complete the "famous five-minute installation" as a WordPress administrator.

**NOTE**: The WordPress site will not be immediately available on port `8000` because the containers are still being initialized; the first load may take a couple of minutes.

![Choose language for WordPress install](images/wordpress-lang.png)

![WordPress Welcome](images/wordpress-welcome.png)
@@ -15,7 +15,7 @@

 set -e

-VERSION="1.7.1"
+VERSION="1.8.0-rc1"
 IMAGE="docker/compose:$VERSION"

@@ -14,9 +14,9 @@ desired_python_version="2.7.9"
 desired_python_brew_version="2.7.9"
 python_formula="https://raw.githubusercontent.com/Homebrew/homebrew/1681e193e4d91c9620c4901efd4458d9b6fcda8e/Library/Formula/python.rb"

-desired_openssl_version="1.0.1j"
-desired_openssl_brew_version="1.0.1j_1"
-openssl_formula="https://raw.githubusercontent.com/Homebrew/homebrew/62fc2a1a65e83ba9dbb30b2e0a2b7355831c714b/Library/Formula/openssl.rb"
+desired_openssl_version="1.0.2h"
+desired_openssl_brew_version="1.0.2h"
+openssl_formula="https://raw.githubusercontent.com/Homebrew/homebrew-core/30d3766453347f6e22b3ed6c74bb926d6def2eb5/Formula/openssl.rb"

 PATH="/usr/local/bin:$PATH"
@@ -28,6 +28,7 @@ from __future__ import unicode_literals

 import argparse
 import itertools
 import operator
+import sys
 from collections import namedtuple

 import requests

@@ -103,6 +104,14 @@ def get_default(versions):
     return version


+def get_versions(tags):
+    for tag in tags:
+        try:
+            yield Version.parse(tag['name'])
+        except ValueError:
+            print("Skipping invalid tag: {name}".format(**tag), file=sys.stderr)
+
+
 def get_github_releases(project):
     """Query the Github API for a list of version tags and return them in
     sorted order.

@@ -112,7 +121,7 @@ def get_github_releases(project):
     url = '{}/{}/tags'.format(GITHUB_API, project)
     response = requests.get(url)
     response.raise_for_status()
-    versions = [Version.parse(tag['name']) for tag in response.json()]
+    versions = get_versions(response.json())
     return sorted(versions, reverse=True, key=operator.attrgetter('order'))
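The hunk above replaces an eager list comprehension with a generator that skips unparseable tags instead of aborting the whole listing. The pattern can be sketched standalone like this (`parse` here is a hypothetical stand-in for the script's `Version.parse`, which is not shown in this diff):

```python
import sys


def parse(name):
    """Stand-in for Version.parse: accept 'X.Y.Z'-style tags only."""
    parts = name.lstrip('v').split('.')
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError(name)
    return tuple(int(p) for p in parts)


def get_versions(tags):
    # Yield parsed versions, skipping tags that fail to parse rather
    # than letting one stray tag abort the whole run.
    for tag in tags:
        try:
            yield parse(tag['name'])
        except ValueError:
            print("Skipping invalid tag: {name}".format(**tag), file=sys.stderr)


tags = [{'name': '1.8.0'}, {'name': 'docs-fix'}, {'name': '1.7.1'}]
versions = sorted(get_versions(tags), reverse=True)
```

Because the generator swallows only `ValueError`, genuinely unexpected failures still surface.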
@@ -224,6 +224,20 @@ class CLITestCase(DockerClientTestCase):
             'volumes': {},
         }

+    def test_config_external_network(self):
+        self.base_dir = 'tests/fixtures/networks'
+        result = self.dispatch(['-f', 'external-networks.yml', 'config'])
+        json_result = yaml.load(result.stdout)
+        assert 'networks' in json_result
+        assert json_result['networks'] == {
+            'networks_foo': {
+                'external': True  # {'name': 'networks_foo'}
+            },
+            'bar': {
+                'external': {'name': 'networks_bar'}
+            }
+        }
+
     def test_config_v1(self):
         self.base_dir = 'tests/fixtures/v1-config'
         result = self.dispatch(['config'])
@@ -1192,8 +1206,6 @@ class CLITestCase(DockerClientTestCase):
         self.assertEqual(len(service.containers(stopped=True, one_off=OneOffFilter.only)), 1)
         self.dispatch(['rm', '-f'], None)
         self.assertEqual(len(service.containers(stopped=True)), 0)
-        self.assertEqual(len(service.containers(stopped=True, one_off=OneOffFilter.only)), 1)
-        self.dispatch(['rm', '-f', '-a'], None)
         self.assertEqual(len(service.containers(stopped=True, one_off=OneOffFilter.only)), 0)

         service.create_container(one_off=False)
@@ -1306,13 +1318,14 @@ class CLITestCase(DockerClientTestCase):
             'logscomposefile_another_1',
             'exited'))

-        # sleep for a short period to allow the tailing thread to receive the
-        # event. This is not great, but there isn't an easy way to do this
-        # without being able to stream stdout from the process.
-        time.sleep(0.5)
-        os.kill(proc.pid, signal.SIGINT)
-        result = wait_on_process(proc, returncode=1)
+        self.dispatch(['kill', 'simple'])
+
+        result = wait_on_process(proc)

         assert 'hello' in result.stdout
         assert 'test' in result.stdout
+        assert 'logscomposefile_another_1 exited with code 0' in result.stdout
+        assert 'logscomposefile_simple_1 exited with code 137' in result.stdout

     def test_logs_default(self):
         self.base_dir = 'tests/fixtures/logs-composefile'
@@ -1474,6 +1487,17 @@ class CLITestCase(DockerClientTestCase):
         assert Counter(e['action'] for e in lines) == {'create': 2, 'start': 2}

     def test_events_human_readable(self):
+
+        def has_timestamp(string):
+            str_iso_date, str_iso_time, container_info = string.split(' ', 2)
+            try:
+                return isinstance(datetime.datetime.strptime(
+                    '%s %s' % (str_iso_date, str_iso_time),
+                    '%Y-%m-%d %H:%M:%S.%f'),
+                    datetime.datetime)
+            except ValueError:
+                return False
+
         events_proc = start_process(self.base_dir, ['events'])
         self.dispatch(['up', '-d', 'simple'])
         wait_on_condition(ContainerCountCondition(self.project, 1))

@@ -1490,7 +1514,8 @@ class CLITestCase(DockerClientTestCase):

         assert expected_template.format('create', container.id) in lines[0]
         assert expected_template.format('start', container.id) in lines[1]
-        assert lines[0].startswith(datetime.date.today().isoformat())
+
+        assert has_timestamp(lines[0])

     def test_env_file_relative_to_compose_file(self):
         config_path = os.path.abspath('tests/fixtures/env-file/docker-compose.yml')
@@ -1,6 +1,6 @@
 simple:
   image: busybox:latest
-  command: sh -c "echo hello && sleep 200"
+  command: sh -c "echo hello && tail -f /dev/null"
 another:
   image: busybox:latest
   command: sh -c "echo test"
@@ -834,6 +834,42 @@ class ProjectTest(DockerClientTestCase):
         self.assertTrue(log_config)
         self.assertEqual(log_config.get('Type'), 'none')

+    @v2_only()
+    def test_project_up_port_mappings_with_multiple_files(self):
+        base_file = config.ConfigFile(
+            'base.yml',
+            {
+                'version': V2_0,
+                'services': {
+                    'simple': {
+                        'image': 'busybox:latest',
+                        'command': 'top',
+                        'ports': ['1234:1234']
+                    },
+                },
+
+            })
+        override_file = config.ConfigFile(
+            'override.yml',
+            {
+                'version': V2_0,
+                'services': {
+                    'simple': {
+                        'ports': ['1234:1234']
+                    }
+                }
+
+            })
+        details = config.ConfigDetails('.', [base_file, override_file])
+
+        config_data = config.load(details)
+        project = Project.from_config(
+            name='composetest', config_data=config_data, client=self.client
+        )
+        project.up()
+        containers = project.containers()
+        self.assertEqual(len(containers), 1)
+
     @v2_only()
     def test_initialize_volumes(self):
         vol_name = '{0:x}'.format(random.getrandbits(32))
@@ -2,10 +2,12 @@ from __future__ import absolute_import
 from __future__ import unicode_literals

 import os
+import ssl

 import pytest

 from compose.cli.command import get_config_path_from_options
+from compose.cli.command import get_tls_version
 from compose.config.environment import Environment
 from compose.const import IS_WINDOWS_PLATFORM
 from tests import mock

@@ -46,3 +48,21 @@ class TestGetConfigPathFromOptions(object):
     def test_no_path(self):
         environment = Environment.from_env_file('.')
         assert not get_config_path_from_options('.', {}, environment)
+
+
+class TestGetTlsVersion(object):
+    def test_get_tls_version_default(self):
+        environment = {}
+        assert get_tls_version(environment) is None
+
+    @pytest.mark.skipif(not hasattr(ssl, 'PROTOCOL_TLSv1_2'), reason='TLS v1.2 unsupported')
+    def test_get_tls_version_upgrade(self):
+        environment = {'COMPOSE_TLS_VERSION': 'TLSv1_2'}
+        assert get_tls_version(environment) == ssl.PROTOCOL_TLSv1_2
+
+    def test_get_tls_version_unavailable(self):
+        environment = {'COMPOSE_TLS_VERSION': 'TLSv5_5'}
+        with mock.patch('compose.cli.command.log') as mock_log:
+            tls_version = get_tls_version(environment)
+        mock_log.warn.assert_called_once_with(mock.ANY)
+        assert tls_version is None
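These tests pin down the `COMPOSE_TLS_VERSION` behavior added in 1.8.0: a valid name maps onto the matching `ssl.PROTOCOL_*` constant, and anything else warns and falls back to `None`. A minimal sketch of a function satisfying them (not Compose's actual implementation; the warning text is an assumption):

```python
import logging
import ssl

log = logging.getLogger(__name__)


def get_tls_version(environment):
    """Map e.g. COMPOSE_TLS_VERSION=TLSv1_2 onto ssl.PROTOCOL_TLSv1_2;
    return None when the variable is unset or names no known protocol."""
    compose_tls_version = environment.get('COMPOSE_TLS_VERSION', None)
    if not compose_tls_version:
        return None

    tls_attr_name = 'PROTOCOL_{}'.format(compose_tls_version)
    if not hasattr(ssl, tls_attr_name):
        # Unknown value: warn (the tests only require *a* warning) and
        # fall back to the default negotiation behavior.
        log.warning('Unknown TLS version "%s", ignoring.', compose_tls_version)
        return None
    return getattr(ssl, tls_attr_name)
```

Deriving the attribute name from the environment value means any protocol the local `ssl` build supports works without a hand-maintained table.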
@@ -715,7 +715,35 @@ class ConfigTest(unittest.TestCase):
         ).services[0]
         assert 'args' in service['build']
         assert 'foo' in service['build']['args']
-        assert service['build']['args']['foo'] == 'None'
+        assert service['build']['args']['foo'] == ''
+
+    # If a build argument is None then it is converted to the empty
+    # string. Make sure that int zero is kept as is, i.e. not converted
+    # to the empty string.
+    def test_build_args_check_zero_preserved(self):
+        service = config.load(
+            build_config_details(
+                {
+                    'version': '2',
+                    'services': {
+                        'web': {
+                            'build': {
+                                'context': '.',
+                                'dockerfile': 'Dockerfile-alt',
+                                'args': {
+                                    'foo': 0
+                                }
+                            }
+                        }
+                    }
+                },
+                'tests/fixtures/extends',
+                'filename.yml'
+            )
+        ).services[0]
+        assert 'args' in service['build']
+        assert 'foo' in service['build']['args']
+        assert service['build']['args']['foo'] == '0'

     def test_load_with_multiple_files_mismatched_networks_format(self):
         base_file = config.ConfigFile(
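The two build-args tests above encode one normalization rule: `None` (an unset argument) becomes the empty string, while every other value, including the falsy int `0`, is stringified. A sketch of that rule (`coerce_build_args` is a hypothetical name, not Compose's API):

```python
def coerce_build_args(args):
    """Normalize a build-args mapping for the Engine API: None (an unset
    argument) becomes '', and everything else is stringified, so the
    falsy int 0 survives as '0' rather than being dropped or blanked."""
    return {
        key: '' if value is None else str(value)
        for key, value in args.items()
    }
```

Testing `value is None` rather than truthiness is the whole fix: `if not value` would wrongly blank `0` as well.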
@@ -1912,6 +1940,14 @@ class MergePortsTest(unittest.TestCase, MergeListsTest):
     base_config = ['10:8000', '9000']
     override_config = ['20:8000']

+    def test_duplicate_port_mappings(self):
+        service_dict = config.merge_service_dicts(
+            {self.config_name: self.base_config},
+            {self.config_name: self.base_config},
+            DEFAULT_VERSION
+        )
+        assert set(service_dict[self.config_name]) == set(self.base_config)
+

 class MergeNetworksTest(unittest.TestCase, MergeListsTest):
     config_name = 'networks'
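`test_duplicate_port_mappings` only requires that merging a list-valued key with itself yields each item once. One way to satisfy it, assuming ordering does not matter (`merge_unique_items_lists` is a hypothetical name, not Compose's actual merge helper):

```python
def merge_unique_items_lists(base, override):
    """Merge two list-valued config entries (e.g. ports or expose),
    keeping each item at most once. Sorting gives a deterministic
    result, since set union discards the original ordering."""
    return sorted(set(base) | set(override))
```

Note this compares port mappings as raw strings, so `'8000'` and `'8000:8000'` would both survive; deeper normalization is out of scope for the test above.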
@@ -2658,15 +2694,28 @@ class ExpandPathTest(unittest.TestCase):


 class VolumePathTest(unittest.TestCase):

     @pytest.mark.xfail((not IS_WINDOWS_PLATFORM), reason='does not have a drive')
     def test_split_path_mapping_with_windows_path(self):
         host_path = "c:\\Users\\msamblanet\\Documents\\anvil\\connect\\config"
         windows_volume_path = host_path + ":/opt/connect/config:ro"
         expected_mapping = ("/opt/connect/config:ro", host_path)

         mapping = config.split_path_mapping(windows_volume_path)
-        self.assertEqual(mapping, expected_mapping)
+        assert mapping == expected_mapping
+
+    def test_split_path_mapping_with_windows_path_in_container(self):
+        host_path = 'c:\\Users\\remilia\\data'
+        container_path = 'c:\\scarletdevil\\data'
+        expected_mapping = (container_path, host_path)
+
+        mapping = config.split_path_mapping('{0}:{1}'.format(host_path, container_path))
+        assert mapping == expected_mapping
+
+    def test_split_path_mapping_with_root_mount(self):
+        host_path = '/'
+        container_path = '/var/hostroot'
+        expected_mapping = (container_path, host_path)
+        mapping = config.split_path_mapping('{0}:{1}'.format(host_path, container_path))
+        assert mapping == expected_mapping


 @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
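The three tests above describe a split that must not mistake a leading Windows drive letter (`c:`) for the host/container separator. A simplified standalone sketch that passes all three cases (the real `config.split_path_mapping` handles more, e.g. named volumes):

```python
import re

# A Windows drive prefix: a single letter, a colon, then a path separator.
DRIVE = re.compile(r'^[A-Za-z]:[\\/]')


def splitdrive(path):
    """Split off a leading Windows drive letter, if any."""
    if DRIVE.match(path):
        return path[:2], path[2:]
    return '', path


def split_path_mapping(volume_path):
    """Split 'HOST:CONTAINER[:MODE]' into (container, host). Either side
    may itself start with a drive letter, which is not the separator."""
    drive, remainder = splitdrive(volume_path)
    if ':' not in remainder:
        # No separator at all: a bare container path (anonymous volume).
        return (volume_path, None)
    host_rest, container = remainder.split(':', 1)
    return (container, drive + host_rest)
```

A plain `ntpath.splitdrive` would not do here: it treats any second-character colon as a drive, so the `'/'`-root mapping in the last test would be misparsed.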
@@ -642,6 +642,26 @@ class ServiceTest(unittest.TestCase):
         service = Service('foo', project='testing')
         assert service.image_name == 'testing_foo'

+    @mock.patch('compose.service.log', autospec=True)
+    def test_only_log_warning_when_host_ports_clash(self, mock_log):
+        self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
+        name = 'foo'
+        service = Service(
+            name,
+            client=self.mock_client,
+            ports=["8080:80"])
+
+        service.scale(0)
+        self.assertFalse(mock_log.warn.called)
+
+        service.scale(1)
+        self.assertFalse(mock_log.warn.called)
+
+        service.scale(2)
+        mock_log.warn.assert_called_once_with(
+            'The "{}" service specifies a port on the host. If multiple containers '
+            'for this service are created on a single host, the port will clash.'.format(name))
+

 class TestServiceNetwork(object):
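The scale test above asserts that the host-port warning fires exactly once, and only when more than one container is requested. The guard it implies can be sketched like this (a toy `Service` for illustration only; the real `compose.service.Service` does far more):

```python
import logging

log = logging.getLogger(__name__)


class Service(object):
    """Toy model of a Compose service with host-port mappings."""

    def __init__(self, name, ports=None):
        self.name = name
        self.ports = ports or []

    def scale(self, desired_num):
        # Warn only when several containers would compete for the same
        # host port; scaling to 0 or 1 can never clash.
        if desired_num > 1 and self.ports:
            log.warning(
                'The "%s" service specifies a port on the host. If multiple '
                'containers for this service are created on a single host, '
                'the port will clash.', self.name)
```

Checking `desired_num > 1` up front keeps the common single-container case silent, which is exactly what the first two assertions demand.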