Update Swarm integration guide and make it an official part of the docs

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
Aanand Prasad 2016-02-02 17:55:21 +00:00
parent 5fc0df4be2
commit 520c695bf4
4 changed files with 197 additions and 51 deletions


@@ -1,39 +1 @@
-Docker Compose/Swarm integration
-================================
-
-Eventually, Compose and Swarm aim to have full integration, meaning you can point a Compose app at a Swarm cluster and have it all just work as if you were using a single Docker host.
-
-However, integration is currently incomplete: Compose can create containers on a Swarm cluster, but the majority of Compose apps won't work out of the box unless all containers are scheduled on one host, because links between containers do not work across hosts.
-
-Docker networking is [getting overhauled](https://github.com/docker/libnetwork) in such a way that it'll fit the multi-host model much better. For now, linked containers are automatically scheduled on the same host.
-
-Building
---------
-
-Swarm can build an image from a Dockerfile just like a single-host Docker instance can, but the resulting image will only live on a single node and won't be distributed to other nodes.
-
-If you want to use Compose to scale the service in question to multiple nodes, you'll have to build it yourself, push it to a registry (e.g. the Docker Hub) and reference it from `docker-compose.yml`:
-
-    $ docker build -t myusername/web .
-    $ docker push myusername/web
-    $ cat docker-compose.yml
-    web:
-      image: myusername/web
-    $ docker-compose up -d
-    $ docker-compose scale web=3
-
-Scheduling
-----------
-
-Swarm offers a rich set of scheduling and affinity hints, enabling you to control where containers are located. They are specified via container environment variables, so you can use Compose's `environment` option to set them.
-
-    environment:
-      # Schedule containers on a node that has the 'storage' label set to 'ssd'
-      - "constraint:storage==ssd"
-      # Schedule containers where the 'redis' image is already pulled
-      - "affinity:image==redis"
-
-For the full set of available filters and expressions, see the [Swarm documentation](https://docs.docker.com/swarm/scheduler/filter/).
+This file has moved to: https://docs.docker.com/compose/swarm/


@@ -76,7 +76,9 @@ See the [links reference](compose-file.md#links) for more information.
 ## Multi-host networking

-When deploying a Compose application to a Swarm cluster, you can make use of the built-in `overlay` driver to enable multi-host communication between containers with no changes to application code. Consult the [Getting started with multi-host networking](/engine/userguide/networking/get-started-overlay.md) to see how to set up the overlay driver, and then specify `driver: overlay` in your networking config (see the sections below for how to do this).
+When [deploying a Compose application to a Swarm cluster](swarm.md), you can make use of the built-in `overlay` driver to enable multi-host communication between containers with no changes to your Compose file or application code.
+
+Consult the [Getting started with multi-host networking](/engine/userguide/networking/get-started-overlay.md) to see how to set up a Swarm cluster. The cluster will use the `overlay` driver by default, but you can specify it explicitly if you prefer - see below for how to do this.

 ## Specifying custom networks
@@ -105,11 +107,11 @@ Here's an example Compose file defining two custom networks. The `proxy` service
 networks:
   front:
-    # Use the overlay driver for multi-host communication
-    driver: overlay
+    # Use a custom driver
+    driver: custom-driver-1
   back:
     # Use a custom driver which takes special options
-    driver: my-custom-driver
+    driver: custom-driver-2
     driver_opts:
       foo: "1"
       bar: "2"
@@ -135,8 +137,8 @@ Instead of (or as well as) specifying your own networks, you can also change the
 networks:
   default:
-    # Use the overlay driver for multi-host communication
-    driver: overlay
+    # Use a custom driver
+    driver: custom-driver-1

 ## Using a pre-existing network
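
The "specify it explicitly if you prefer" option mentioned in the new text above would look roughly like this in a version 2 Compose file. This is only a sketch; the service name and image are placeholders:

    version: "2"

    services:
      web:
        image: myusername/web

    networks:
      default:
        # Explicitly request the overlay driver for the app's default network
        driver: overlay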


@@ -60,7 +60,7 @@ recreating any services which `web` depends on.
 You can use Compose to deploy an app to a remote Docker host by setting the
 `DOCKER_HOST`, `DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` environment variables
 appropriately. For tasks like this,
-[Docker Machine](https://docs.docker.com/machine/) makes managing local and
+[Docker Machine](/machine/overview) makes managing local and
 remote Docker hosts very easy, and is recommended even if you're not deploying
 remotely.
@@ -69,14 +69,12 @@ commands will work with no further configuration.
 ### Running Compose on a Swarm cluster

-[Docker Swarm](https://docs.docker.com/swarm/), a Docker-native clustering
+[Docker Swarm](/swarm/overview), a Docker-native clustering
 system, exposes the same API as a single Docker host, which means you can use
 Compose against a Swarm instance and run your apps across multiple hosts.

-Compose/Swarm integration is still in the experimental stage, and Swarm is still
-in beta, but if you'd like to explore and experiment, check out the <a
-href="https://github.com/docker/compose/blob/master/SWARM.md">integration
-guide</a>.
+Compose/Swarm integration is still in the experimental stage, but if you'd like
+to explore and experiment, check out the [integration guide](swarm.md).

 ## Compose documentation
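
In practice, the `DOCKER_HOST`, `DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` variables described in this hunk are usually set via `docker-machine env`. A minimal sketch, where the machine name `my-remote-host` is a placeholder:

    # Export DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH for a
    # Machine-managed remote host
    $ eval "$(docker-machine env my-remote-host)"

    # Compose now talks to the remote Docker daemon
    $ docker-compose up -d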

docs/swarm.md (new file, 184 lines)

@@ -0,0 +1,184 @@
<!--[metadata]>
+++
title = "Using Compose with Swarm"
description = "How to use Compose and Swarm together to deploy apps to multi-host clusters"
keywords = ["documentation, docs, docker, compose, orchestration, containers, swarm"]
[menu.main]
parent="workw_compose"
+++
<![end-metadata]-->
# Using Compose with Swarm
Docker Compose and [Docker Swarm](/swarm/overview) aim to have full integration, meaning
you can point a Compose app at a Swarm cluster and have it all just work as if
you were using a single Docker host.
The actual extent of integration depends on which version of the [Compose file
format](compose-file.md#versioning) you are using:
1. If you're using version 1 along with `links`, your app will work, but Swarm
   will schedule all containers on one host, because links between containers
   do not work across hosts with the old networking system.

2. If you're using version 2, your app should work with no changes:

    - subject to the [limitations](#limitations) described below,

    - as long as the Swarm cluster is configured to use the [overlay
      driver](/engine/userguide/networking/dockernetworks.md#an-overlay-network),
      or a custom driver which supports multi-host networking.
Read the [Getting started with multi-host
networking](/engine/userguide/networking/get-started-overlay.md) to see how to
set up a Swarm cluster with [Docker Machine](/machine/overview) and the overlay driver.
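
As an illustration, a version 2 file along these lines (the service names and images here are placeholders, not taken from the guide) needs no Swarm-specific additions to run across hosts:

    version: "2"

    services:
      web:
        image: myusername/web
        depends_on:
          - redis

      redis:
        image: redis

With the version 2 format, `web` can reach `redis` by its service name over the app's default network, which spans the cluster when the overlay driver is in use.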
Once the cluster is up and running, deploying your app to it should be as simple as:

    $ eval "$(docker-machine env --swarm <name of swarm master machine>)"
    $ docker-compose up
## Limitations
### Building images
Swarm can build an image from a Dockerfile just like a single-host Docker
instance can, but the resulting image will only live on a single node and won't
be distributed to other nodes.
If you want to use Compose to scale the service in question to multiple nodes,
you'll have to build it yourself, push it to a registry (e.g. the Docker Hub)
and reference it from `docker-compose.yml`:
    $ docker build -t myusername/web .
    $ docker push myusername/web
    $ cat docker-compose.yml
    web:
      image: myusername/web
    $ docker-compose up -d
    $ docker-compose scale web=3
### Multiple dependencies
If a service has multiple dependencies of the type which force co-scheduling
(see [Automatic scheduling](#automatic-scheduling) below), it's possible that
Swarm will schedule the dependencies on different nodes, making the dependent
service impossible to schedule. For example, here `foo` needs to be co-scheduled
with `bar` and `baz`:
version: "2"
services:
foo:
image: foo
volumes_from: ["bar"]
network_mode: "service:baz"
bar:
image: bar
baz:
image: baz
The problem is that Swarm might first schedule `bar` and `baz` on different
nodes (since they're not dependent on one another), making it impossible to
pick an appropriate node for `foo`.
To work around this, use [manual scheduling](#manual-scheduling) to ensure that
all three services end up on the same node:
version: "2"
services:
foo:
image: foo
volumes_from: ["bar"]
network_mode: "service:baz"
environment:
- "constraint:node==node-1"
bar:
image: bar
environment:
- "constraint:node==node-1"
baz:
image: baz
environment:
- "constraint:node==node-1"
### Host ports and recreating containers
If a service maps a port from the host, e.g. `80:8000`, then you may get an
error like this when running `docker-compose up` on it after the first time:
    docker: Error response from daemon: unable to find a node that satisfies
    container==6ab2dfe36615ae786ef3fc35d641a260e3ea9663d6e69c5b70ce0ca6cb373c02.
The usual cause of this error is that the container has a volume (defined either
in its image or in the Compose file) without an explicit mapping, and so in
order to preserve its data, Compose has directed Swarm to schedule the new
container on the same node as the old container. This results in a port clash.
There are two viable workarounds for this problem:
- Specify a named volume, and use a volume driver which is capable of mounting
  the volume into the container regardless of what node it's scheduled on.
  Compose does not give Swarm any specific scheduling instructions if a
  service uses only named volumes.

      version: "2"

      services:
        web:
          build: .
          ports:
            - "80:8000"
          volumes:
            - web-logs:/var/log/web

      volumes:
        web-logs:
          driver: custom-volume-driver
- Remove the old container before creating the new one. You will lose any data
  in the volume.

      $ docker-compose stop web
      $ docker-compose rm -f web
      $ docker-compose up web
## Scheduling containers
### Automatic scheduling
Some configuration options will result in containers being automatically
scheduled on the same Swarm node to ensure that they work correctly (there is a
short example after the list). These are:
- `network_mode: "service:..."` and `network_mode: "container:..."` (and
`net: "container:..."` in the version 1 file format).
- `volumes_from`
- `links`
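
For example, in a sketch like the following (the service names and images are hypothetical), `web` uses `volumes_from: ["logger"]`, so Compose will direct Swarm to schedule `web` on the same node as `logger`:

    version: "2"

    services:
      web:
        image: myusername/web
        # volumes_from forces co-scheduling with the logger service
        volumes_from: ["logger"]

      logger:
        image: myusername/logger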
### Manual scheduling
Swarm offers a rich set of scheduling and affinity hints, enabling you to
control where containers are located. They are specified via container
environment variables, so you can use Compose's `environment` option to set
them.
    # Schedule containers on a specific node
    environment:
      - "constraint:node==node-1"

    # Schedule containers on a node that has the 'storage' label set to 'ssd'
    environment:
      - "constraint:storage==ssd"

    # Schedule containers where the 'redis' image is already pulled
    environment:
      - "affinity:image==redis"
For the full set of available filters and expressions, see the [Swarm
documentation](/swarm/scheduler/filter.md).