AutoClone persistent volumes are perhaps the easiest way to distribute content, configuration and datasets to containers in a multi-host container environment. Why? Provisioning is fast, resource utilization is nil, and it’s dead simple to manage versioning.
To showcase the flexibility of AutoClone, we put together a demo using the latest and greatest that Docker has to offer: Engine 1.9, Swarm 1.0, and Compose 1.5. The demo walks through deploying a scalable, multi-host, load-balanced web infrastructure, using Docker for orchestration and AutoClone for efficient data distribution, all through native Docker management.
The demo implements a “publish & subscribe” workflow for content distribution on a standard filesystem (i.e., ext4). There is a single repository where modifications are made and published; publishing creates a point-in-time snapshot. The Blockbridge Docker Volume driver, with AutoClone support, lets the web server containers dynamically locate and create thin clones (i.e., writable snapshots) of the published content via user-assigned metadata.
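To make the subscribe side concrete, here is a minimal sketch of what a web server host asks Docker to do, assuming the Blockbridge volume driver is registered under the name `blockbridge`. The volume name and the `--opt` keys (`type`, `clone_basis`, `snapshot_tag`) are illustrative placeholders rather than the driver’s documented interface; the demo repository has the exact settings.

```bash
# Illustrative only: the option keys are placeholders; see the demo for the real ones.
# Ask the Blockbridge driver for a thin, writable clone of the most recently
# published snapshot of the content repository.
docker volume create \
  --driver blockbridge \
  --name content-clone \
  --opt type=autoclone \
  --opt clone_basis=web-content \
  --opt snapshot_tag=production

# Serve the clone from an NGINX container.
docker run -d --name web-1 \
  -v content-clone:/usr/share/nginx/html \
  nginx
```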
Here’s a breakdown of the technologies used in the demo:
- Docker Swarm handles distributed resource management and orchestration. We built our swarm with Docker Machine: since we run OpenStack in-house, we used Docker Machine’s OpenStack driver to provision and configure the swarm nodes (a sketch of the invocation appears after this list). Be advised: there are quite a few command-line options to supply to Docker Machine, but the end result was well worth it.
- For the backend web tier we used NGINX: it’s fast, lightweight, and requires minimal configuration. In the demo we deploy five backend web servers, and Swarm distributes them across our three machines.
- To balance load across the NGINX servers, we used HAProxy, and to automate HAProxy’s configuration as NGINX servers come and go, we used Interlock. Interlock connects to the swarm master, listens for container events, and reconfigures HAProxy accordingly. It was a bit tricky to get Interlock talking to the swarm master; the solution for the demo was a placement constraint that let us bind-mount credentials into the Interlock container (the Compose sketch after this list shows the idea). Check out the compose file in the demo for details.
- For the publishing workflow, we used the Blockbridge command line tools (for storage management), the Blockbridge Docker Volume Driver (for AutoClone functionality), and Blockbridge EPS (for programmable software-defined storage infrastructure).
- Last, and definitely not least, we used Docker Compose to orchestrate workflows that involve sets of containers. A major benefit of Compose is that container configuration is described in YAML, which is far more structured than long command lines. Compose also provides handy features for managing subsets of the containers described in a single compose file (see the Compose sketch after this list).
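To give a flavor of the Docker Machine step mentioned above, this is the general shape of the invocation for the swarm master, with every OpenStack value replaced by a placeholder for your own cloud; worker nodes use the same command minus `--swarm-master`. The discovery token is whatever your swarm discovery backend provides (e.g., the output of `docker run swarm create`).

```bash
# Placeholders throughout: substitute your own OpenStack credentials, flavor,
# image, networks, and swarm discovery token.
docker-machine create --driver openstack \
  --openstack-auth-url https://keystone.example.com:5000/v2.0 \
  --openstack-username demo \
  --openstack-password secret \
  --openstack-tenant-name demo \
  --openstack-flavor-name m1.medium \
  --openstack-image-name ubuntu-15.10 \
  --openstack-net-name private \
  --openstack-floatingip-pool public \
  --openstack-ssh-user ubuntu \
  --swarm --swarm-master \
  --swarm-discovery token://<cluster-token> \
  swarm-master
```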
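Likewise, here is a trimmed and renamed sketch of how the services hang together in a version-1 compose file (the format Compose 1.5 uses). The image tags, the Interlock command-line flags, and the bind-mounted credential path are illustrative assumptions; the compose file in the demo repository is the authoritative version.

```yaml
# Illustrative docker-compose.yml (v1 format); not the demo's actual file.
interlock:
  image: ehazlett/interlock
  ports:
    - "80:80"
  environment:
    # Swarm placement constraint: schedule Interlock on the swarm master so
    # the credentials on that host can be bind-mounted below.
    - "constraint:node==swarm-master"
  volumes:
    - /etc/docker:/etc/docker:ro
  command: "--swarm-url tcp://swarm-master:3376 --plugin haproxy start"

nginx:
  image: nginx
  hostname: demo.example.com      # the haproxy plugin routes requests by hostname
  ports:
    - "80"                        # published on a random host port per replica
  # Each replica mounts a Blockbridge AutoClone volume holding the published
  # content; the volume name and driver options are placeholders.
  volume_driver: blockbridge
  volumes:
    - content:/usr/share/nginx/html
```

With that in place, growing the web tier is a single command, `docker-compose scale nginx=5`; Interlock sees the new containers in the swarm event stream and reloads HAProxy with the updated backends.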
Want to try it yourself? Here are some useful resources: