Update docs for 1.2.0

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
Aanand Prasad 2015-04-17 16:02:57 +01:00
parent ed549155b3
commit 43af1684c1
6 changed files with 495 additions and 38 deletions

docs/extends.md Normal file

@ -0,0 +1,364 @@
page_title: Extending services in Compose
page_description: How to use Docker Compose's "extends" keyword to share configuration between files and projects
page_keywords: fig, composition, compose, docker, orchestration, documentation, docs
## Extending services in Compose
Docker Compose's `extends` keyword enables sharing of common configurations
among different files, or even different projects entirely. Extending services
is useful if you have several applications that reuse commonly-defined services.
Using `extends` you can define a service in one place and refer to it from
anywhere.
Alternatively, you can deploy the same application to multiple environments with
a slightly different set of services in each case (or with changes to the
configuration of some services). Moreover, you can do so without copy-pasting
the configuration around.
### Understand the extends configuration
When defining any service in `docker-compose.yml`, you can declare that you are
extending another service like this:
```yaml
web:
  extends:
    file: common-services.yml
    service: webapp
```
This instructs Compose to re-use the configuration for the `webapp` service
defined in the `common-services.yml` file. Suppose that `common-services.yml`
looks like this:
```yaml
webapp:
  build: .
  ports:
    - "8000:8000"
  volumes:
    - "/data"
```
In this case, you'll get exactly the same result as if you wrote
`docker-compose.yml` with that `build`, `ports` and `volumes` configuration
defined directly under `web`.
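In other words, it is as if `docker-compose.yml` contained the following (a
sketch of the equivalent file):

```yaml
web:
  build: .
  ports:
    - "8000:8000"
  volumes:
    - "/data"
```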
You can go further and define (or re-define) configuration locally in
`docker-compose.yml`:
```yaml
web:
  extends:
    file: common-services.yml
    service: webapp
  environment:
    - DEBUG=1
  cpu_shares: 5
```
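Here the result is roughly equivalent to writing the combined configuration out
in one place (a sketch of the merged service; the exact merging rules are
covered in the [reference](#reference) below):

```yaml
web:
  build: .
  ports:
    - "8000:8000"
  volumes:
    - "/data"
  environment:
    - DEBUG=1
  cpu_shares: 5
```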
You can also write other services and link your `web` service to them:
```yaml
web:
  extends:
    file: common-services.yml
    service: webapp
  environment:
    - DEBUG=1
  cpu_shares: 5
  links:
    - db

db:
  image: postgres
```
For full details on how to use `extends`, refer to the [reference](#reference).
### Example use case
In this example, you'll repurpose the example app from the [quick start
guide](index.md). (If you're not familiar with Compose, it's recommended that
you go through the quick start first.) This example assumes you want to use
Compose both to develop an application locally and then deploy it to a
production environment.
The local and production environments are similar, but there are some
differences. In development, you mount the application code as a volume so that
it can pick up changes; in production, the code should be immutable from the
outside. This ensures it's not accidentally changed. The development environment
uses a local Redis container, but in production another team manages the Redis
service, which is listening at `redis-production.example.com`.
To configure this sample with `extends`, you must:

1. Define the web application as a Docker image in `Dockerfile` and a Compose
   service in `common.yml`.
2. Define the development environment in the standard Compose file,
   `docker-compose.yml`.
    - Use `extends` to pull in the web service.
    - Configure a volume to enable code reloading.
    - Create an additional Redis service for the application to use locally.
3. Define the production environment in a third Compose file, `production.yml`.
    - Use `extends` to pull in the web service.
    - Configure the web service to talk to the external, production Redis service.
#### Define the web app
Defining the web application requires the following:
1. Create an `app.py` file.

    This file contains a simple Python application that uses Flask to serve HTTP
    and increments a counter in Redis:

        from flask import Flask
        from redis import Redis
        import os

        app = Flask(__name__)
        redis = Redis(host=os.environ['REDIS_HOST'], port=6379)

        @app.route('/')
        def hello():
            redis.incr('hits')
            return 'Hello World! I have been seen %s times.\n' % redis.get('hits')

        if __name__ == "__main__":
            app.run(host="0.0.0.0", debug=True)

    This code uses a `REDIS_HOST` environment variable to determine where to
    find Redis.
2. Define the Python dependencies in a `requirements.txt` file:

        flask
        redis
3. Create a `Dockerfile` to build an image containing the app:

        FROM python:2.7
        ADD . /code
        WORKDIR /code
        RUN pip install -r requirements.txt
        CMD python app.py
4. Create a Compose configuration file called `common.yml`:

    This configuration defines how to run the app.

        web:
          build: .
          ports:
            - "5000:5000"

    Typically, you would have dropped this configuration into your
    `docker-compose.yml` file, but in order to pull it into multiple files with
    `extends`, it needs to be in a separate file.
#### Define the development environment
1. Create a `docker-compose.yml` file.

    The `extends` option pulls in the `web` service from the `common.yml` file
    you created in the previous section.

        web:
          extends:
            file: common.yml
            service: web
          volumes:
            - .:/code
          links:
            - redis
          environment:
            - REDIS_HOST=redis
        redis:
          image: redis

    The new addition defines a `web` service that:

    - Fetches the base configuration for `web` out of `common.yml`.
    - Adds `volumes` and `links` configuration to the base (`common.yml`)
      configuration.
    - Sets the `REDIS_HOST` environment variable to point to the linked redis
      container. This environment uses a stock `redis` image from the Docker Hub.
2. Run `docker-compose up`.

    Compose creates and starts the `web` and `redis` containers, linked together.
    It mounts your application code inside the `web` container.

3. Verify that the code is mounted by changing the message in
   `app.py`&mdash;say, from `Hello world!` to `Hello from Compose!`.

    Don't forget to refresh your browser to see the change!
#### Define the production environment
You are almost done. Now, define your production environment:
1. Create a `production.yml` file.

    As with `docker-compose.yml`, the `extends` option pulls in the `web` service
    from `common.yml`.

        web:
          extends:
            file: common.yml
            service: web
          environment:
            - REDIS_HOST=redis-production.example.com

2. Run `docker-compose -f production.yml up`.

    Compose creates *just* a web container and configures the Redis connection via
    the `REDIS_HOST` environment variable. This variable points to the production
    Redis instance.

    > **Note**: If you try to load up the webapp in your browser you'll get an
    > error&mdash;`redis-production.example.com` isn't actually a Redis server.
You've now done a basic `extends` configuration. As your application develops,
you can make any necessary changes to the `web` service in `common.yml`, and
Compose picks them up in both the development and production environments the
next time you run `docker-compose`. You don't have to copy and paste
configuration around, and you don't have to manually keep both environments in
sync.
### Reference
You can use `extends` on any service together with other configuration keys. It
expects a dictionary containing two keys: `file` and `service`.
The `file` key specifies which file to look in. It can be an absolute path or a
relative one&mdash;if relative, it's treated as relative to the current file.
The `service` key specifies the name of the service to extend, for example `web`
or `database`.
You can extend a service that itself extends another. You can extend
indefinitely. Compose does not support circular references and `docker-compose`
returns an error if it encounters them.
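For example, the following (hypothetical) chain is valid: `web` extends
`webapp`, which itself extends `base`.

```yaml
# base.yml
base:
  environment:
    - TZ=UTC

# webapp.yml
webapp:
  extends:
    file: base.yml
    service: base
  ports:
    - "8000:8000"

# docker-compose.yml
web:
  extends:
    file: webapp.yml
    service: webapp
  build: .
```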
#### Adding and overriding configuration
Compose copies configurations from the original service over to the local one,
**except** for `links` and `volumes_from`. These exceptions exist to avoid
implicit dependencies&mdash;you always define `links` and `volumes_from`
locally. This ensures dependencies between services are clearly visible when
reading the current file. Defining these locally also ensures changes to the
referenced file don't result in breakage.
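For example (a hypothetical sketch), a `links` entry in the original service is
not carried over&mdash;the extending service has to declare it again, along
with the service it points to:

```yaml
# common-services.yml (original service)
webapp:
  build: .
  links:
    - db

# docker-compose.yml (local service)
web:
  extends:
    file: common-services.yml
    service: webapp
  links:
    - db    # not inherited, so declared again here

db:
  image: postgres
```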
If a configuration option is defined in both the original service and the local
service, the local value either *overrides* or *extends* the definition of the
original service. This works differently for different configuration options.
For single-value options like `image`, `command` or `mem_limit`, the new value
replaces the old value. **This is the default behaviour - all exceptions are
listed below.**
```yaml
# original service
command: python app.py

# local service
command: python otherapp.py

# result
command: python otherapp.py
```
In the case of `build` and `image`, using one in the local service causes
Compose to discard the other, if it was defined in the original service.
```yaml
# original service
build: .

# local service
image: redis

# result
image: redis
```
```yaml
# original service
image: redis

# local service
build: .

# result
build: .
```
For the **multi-value options** `ports`, `expose`, `external_links`, `dns` and
`dns_search`, Compose concatenates both sets of values:
```yaml
# original service
expose:
  - "3000"

# local service
expose:
  - "4000"
  - "5000"

# result
expose:
  - "3000"
  - "4000"
  - "5000"
```
In the case of `environment`, Compose "merges" entries together with
locally-defined values taking precedence:
```yaml
# original service
environment:
  - FOO=original
  - BAR=original

# local service
environment:
  - BAR=local
  - BAZ=local

# result
environment:
  - FOO=original
  - BAR=local
  - BAZ=local
```
Finally, for `volumes`, Compose "merges" entries together with locally-defined
bindings taking precedence:
```yaml
# original service
volumes:
  - /original-dir/foo:/foo
  - /original-dir/bar:/bar

# local service
volumes:
  - /local-dir/bar:/bar
  - /local-dir/baz:/baz

# result
volumes:
  - /original-dir/foo:/foo
  - /local-dir/bar:/bar
  - /local-dir/baz:/baz
```

docs/index.md

@ -5,6 +5,8 @@ page_keywords: documentation, docs, docker, compose, orchestration, containers
# Docker Compose
## Overview
Compose is a tool for defining and running complex applications with Docker.
With Compose, you define a multi-container application in a single file, then
spin your application up in a single command which does everything that needs to
@ -191,3 +193,31 @@ At this point, you have seen the basics of how Compose works.
[Rails](rails.md), or [Wordpress](wordpress.md).
- See the reference guides for complete details on the [commands](cli.md), the
[configuration file](yml.md) and [environment variables](env.md).
## Release Notes
### Version 1.2.0 (April 7, 2015)
For complete information on this release, see the [1.2.0 Milestone project page](https://github.com/docker/compose/wiki/1.2.0-Milestone-Project-Page).
In addition to bug fixes and refinements, this release adds the following:
* The `extends` keyword, which adds the ability to extend services by sharing common configurations. For details, see
[PR #1088](https://github.com/docker/compose/pull/1088).
* Better integration with Swarm. Swarm will now schedule inter-dependent
containers on the same host. For details, see
[PR #972](https://github.com/docker/compose/pull/972).
## Getting help
Docker Compose is still in its infancy and under active development. If you need
help, would like to contribute, or simply want to talk about the project with
like-minded individuals, we have a number of open channels for communication.
* To report bugs or file feature requests: please use the [issue tracker on Github](https://github.com/docker/compose/issues).
* To talk about the project with people in real time: please join the `#docker-compose` channel on IRC.
* To contribute code or documentation changes: please submit a [pull request on Github](https://github.com/docker/compose/pulls).
For more information and resources, please visit the [Getting Help project page](https://docs.docker.com/project/get-help/).

docs/install.md

@ -1,5 +1,5 @@
page_title: Installing Compose
page_description: How to intall Docker Compose
page_description: How to install Docker Compose
page_keywords: compose, orchestration, install, installation, docker, documentation
@ -23,6 +23,8 @@ To install Compose, run the following commands:
    curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
> Note: If you get a "Permission denied" error, your `/usr/local/bin` directory probably isn't writable and you'll need to install Compose as the superuser. Run `sudo -i`, then the two commands above, then `exit`.
Optionally, you can also install [command completion](completion.md) for the
bash shell.
@ -31,7 +33,7 @@ Compose can also be installed as a Python package:
    $ sudo pip install -U docker-compose
No further steps are required; Compose should now be successfully installed.
You can test the installation by running `docker-compose --version`.
## Compose documentation

mkdocs.yml

@ -1,5 +1,7 @@
- ['compose/index.md', 'User Guide', 'Docker Compose' ]
- ['compose/production.md', 'User Guide', 'Using Compose in production' ]
- ['compose/extends.md', 'User Guide', 'Extending services in Compose']
- ['compose/install.md', 'Installation', 'Docker Compose']
- ['compose/cli.md', 'Reference', 'Compose command line']
- ['compose/yml.md', 'Reference', 'Compose yml']

docs/production.md Normal file

@ -0,0 +1,77 @@
page_title: Using Compose in production
page_description: Guide to using Docker Compose in production
page_keywords: documentation, docs, docker, compose, orchestration, containers, production
## Using Compose in production
While **Compose is not yet considered production-ready**, if you'd like to experiment and learn more about using it in production deployments, this guide
can help.
The project is actively working towards becoming production-ready; check out
the [roadmap](https://github.com/docker/compose/blob/master/ROADMAP.md) to see
how it's coming along and what still needs to be done.
When deploying to production, you'll almost certainly want to make changes to
your app configuration that are more appropriate to a live environment. These
changes may include:
- Removing any volume bindings for application code, so that code stays inside
the container and can't be changed from outside
- Binding to different ports on the host
- Setting environment variables differently (e.g., to decrease the verbosity of
logging, or to enable email sending)
- Specifying a restart policy (e.g., `restart: always`) to avoid downtime
- Adding extra services (e.g., a log aggregator)
For this reason, you'll probably want to define a separate Compose file, say
`production.yml`, which specifies production-appropriate configuration.
> **Note:** The [extends](extends.md) keyword is useful for maintaining multiple
> Compose files which re-use common services without having to manually copy and
> paste.
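As a rough sketch (the service name, base file, and values below are
hypothetical), such a file might pull in a common service definition and layer
production-only settings on top:

```yaml
web:
  extends:
    file: common.yml
    service: web
  ports:
    - "80:8000"          # bind to a different host port
  restart: always        # restart policy to avoid downtime
  environment:
    - LOG_LEVEL=warning  # hypothetical variable to decrease logging verbosity
```

Note that this sketch deliberately defines no volume binding for the
application code, so the code stays immutable inside the container.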
Once you've got an alternate configuration file, make Compose use it
by setting the `COMPOSE_FILE` environment variable:
    $ export COMPOSE_FILE=production.yml
    $ docker-compose up -d
> **Note:** You can also use the file for a one-off command without setting
> an environment variable. You do this by passing the `-f` flag, e.g.,
> `docker-compose -f production.yml up -d`.
### Deploying changes
When you make changes to your app code, you'll need to rebuild your image and
recreate your app's containers. To redeploy a service called
`web`, you would use:
    $ docker-compose build web
    $ docker-compose up --no-deps -d web
This will first rebuild the image for `web` and then stop, destroy, and recreate
*just* the `web` service. The `--no-deps` flag prevents Compose from also
recreating any services which `web` depends on.
### Running Compose on a single server
You can use Compose to deploy an app to a remote Docker host by setting the
`DOCKER_HOST`, `DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` environment variables
appropriately. For tasks like this,
[Docker Machine](https://docs.docker.com/machine) makes managing local and
remote Docker hosts very easy, and is recommended even if you're not deploying
remotely.
Once you've set up your environment variables, all the normal `docker-compose`
commands will work with no further configuration.
### Running Compose on a Swarm cluster
[Docker Swarm](https://docs.docker.com/swarm), a Docker-native clustering
system, exposes the same API as a single Docker host, which means you can use
Compose against a Swarm instance and run your apps across multiple hosts.
Compose/Swarm integration is still in the experimental stage, and Swarm is still
in beta, but if you'd like to explore and experiment, check out the
[integration guide](https://github.com/docker/compose/blob/master/SWARM.md).

docs/yml.md

@ -173,8 +173,12 @@ env_file:
- /opt/secrets.env
```
Compose expects each line in an env file to be in `VAR=VAL` format. Lines
beginning with `#` (i.e. comments) are ignored, as are blank lines.
```
# Set Rails/Rack environment
RACK_ENV=development
```
### extends
@ -217,42 +221,10 @@ Here, the `web` service in **development.yml** inherits the configuration of
the `webapp` service in **common.yml** - the `build` and `environment` keys -
and adds `ports` and `links` configuration. It overrides one of the defined
environment variables (DEBUG) with a new value, and the other one
(SEND_EMAILS) is left untouched. It's exactly as if you defined `web` like
this:
```yaml
web:
  build: ./webapp
  ports:
    - "8000:8000"
  links:
    - db
  environment:
    - DEBUG=true
    - SEND_EMAILS=false
```
The `extends` option is great for sharing configuration between different
apps, or for configuring the same app differently for different environments.
You could write a new file for a staging environment, **staging.yml**, which
binds to a different port and doesn't turn on debugging:
```
web:
  extends:
    file: common.yml
    service: webapp
  ports:
    - "80:8000"
  links:
    - db

db:
  image: postgres
```
> **Note:** When you extend a service, `links` and `volumes_from`
> configuration options are **not** inherited - you will have to define
> those manually each time you extend it.
For more on `extends`, see the [tutorial](extends.md#example) and
[reference](extends.md#reference).
### net
@ -264,6 +236,16 @@ net: "none"
net: "container:[name or id]"
net: "host"
```
### pid
```
pid: "host"
```
Sets the PID mode to the host PID mode. This turns on sharing of the PID
address space between the container and the host operating system. Containers
launched with this flag can access and manipulate other containers in the
bare-metal machine's namespace and vice versa.
### dns