
WHITEPAPER | DOCKER AND THE THREE WAYS OF DEVOPS

Docker and the First Way

Velocity

Developer Flow

Developers who use Docker typically create a Docker environment on their laptops to develop and test applications locally in containers. In this environment, they can test a service stack made up of multiple Docker containers. Container instances spin up in around 500 milliseconds on average. Multiple Docker containers converged as a single service stack, such as a LAMP stack, can take seconds to instantiate. Convergence is the interconnection information that must be configured between the instances, for example database connection details or load balancer IPs. Contrast this with non-container virtual instances running as multiple hosts: an alternative non-containerized environment can take anywhere from 2 to 10 minutes to spin up and converge, depending on its complexity. The end result with Docker is that developers spend less time context switching between tests and retests, significantly increasing velocity.
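The local workflow described above might look like the following sketch. The image names, network name, and environment variables are illustrative, not from the whitepaper; the `run` helper only prints each command (a dry run), so swap `echo` for real execution against a Docker daemon.

```shell
#!/bin/sh
# Dry-run sketch of a developer spinning up a minimal two-container
# service stack locally. Image and container names are illustrative.
# 'run' only prints each command; replace 'echo' with real execution.
set -eu
run() { echo "+ $*"; }

run docker network create app-net
run docker run -d --name db --network app-net \
    -e MYSQL_ROOT_PASSWORD=devpass mysql:5.7
# "Convergence" in the text's sense: the web container is handed the
# database connection info when it starts.
run docker run -d --name web --network app-net \
    -e DB_HOST=db -e DB_PASSWORD=devpass -p 8080:80 my-php-app:dev
```

Because each container starts in well under a second, the whole stack is ready in seconds and can be torn down and recreated between test runs.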

Integration Flow

Docker can streamline continuous integration (CI) through the use of Dockerized build slaves. For example, a CI system can be designed so that multiple virtual instances each run as an individual Docker host acting as a build slave. Some environments run a Docker host inside a Docker host (Docker-in-Docker) for their build environments. This cleanly isolates the build-out and teardown environments for the services being tested. In this case, the original virtual instances are never corrupted, because the embedded Docker host can be recreated for each CI slave instance. The inner Docker host is just a Docker image, and it instantiates at the same speed as any other Docker instance.
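A Docker-in-Docker build slave along the lines described can be sketched as below. The official `docker:dind` image exists for exactly this purpose; the container and job names are illustrative, and the `run` helper only prints each command (dry run).

```shell
#!/bin/sh
# Dry-run sketch of a throwaway Docker-in-Docker build environment for
# one CI job. Container and tag names are illustrative; 'run' only
# prints each command.
set -eu
run() { echo "+ $*"; }

# Start a fresh inner Docker host for this build; dind requires
# --privileged on the outer host.
run docker run -d --name ci-build-42 --privileged docker:dind
# Point a Docker CLI at the inner host and run the build there.
run docker run --rm --link ci-build-42:docker \
    -e DOCKER_HOST=tcp://docker:2375 docker:latest \
    docker build -t myapp:candidate .
# Teardown: the outer host is untouched; just delete the inner one.
run docker rm -f ci-build-42
```

Each CI job gets a pristine inner Docker host, so a broken or corrupted build never leaks state into the next job.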

Just as on the developers' laptops, the integrated services can run as Docker containers inside these build slaves. Not only are there exponential efficiencies in the spin-up times of the tested service, there are also the benefits of rebasing multiple testing scenarios. Multiple tests starting from the same baseline can be run over and over in a matter of seconds. Sub-second Docker container spin-up times allow scenarios where thousands of integration tests complete in minutes rather than the days they would otherwise take.

Docker also increases velocity for CI pipelines through union file systems and copy-on-write (COW). Docker images are created using a layered file system approach, where typically only the current (top) layer is writable (COW). Advanced use of baselining and rebasing between these layers can further reduce the lead time for getting software through the pipeline. For example, a specific MySQL table could be initialized at a certain state in the container image and rebased back to that original state for each test, providing accelerated efficiency across multiple tests.
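The baseline-and-rebase pattern can be sketched as follows. The container and image names are illustrative, and the `run` helper only prints each command (dry run); the mechanism itself — read-only image layers plus a thin per-container COW layer — is standard Docker behavior.

```shell
#!/bin/sh
# Dry-run sketch: bake seeded test data into an image layer once, then
# start every test from that identical baseline. Names are illustrative.
set -eu
run() { echo "+ $*"; }

# One-time: snapshot a container whose MySQL tables hold the fixture.
run docker commit seeded-mysql test-mysql:baseline

# Each test gets a fresh container from the same read-only layers; all
# writes land in the container's thin COW layer and vanish with --rm.
for t in test1 test2 test3; do
  run docker run --rm --name "$t" test-mysql:baseline run-one-test "$t"
done
```

Resetting to the baseline is simply starting a new container, which is why rebasing between tests costs fractions of a second rather than a database reload.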

Deployment Flow

A number of techniques for achieving increased velocity in continuous delivery (CD) of software benefit from the use of Docker. A popular CD process called "Blue Green deploys" is often used to seamlessly migrate applications into production. One of the challenges of production deployments is ensuring seamless and timely changeovers (moving from one version to another). A Blue Green deploy is a technique where one node of a cluster is updated at a time (the green node) while the other nodes are left untouched (the blue nodes). This technique requires a rolling process in which one node is updated and tested at a time. The two key takeaways are: 1) the total time to update all the nodes needs to be short, and 2) if the cluster needs to be rolled back, that also has to happen quickly. Docker containers make the roll-forward and roll-back process more efficient. Also, because the application is isolated in a container, the changeover is much cleaner, with far fewer moving parts involved. Other deployment techniques, like dark launches and canary releases, benefit from Docker container isolation and speed for the same reasons.
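The rolling changeover and roll-back described above can be sketched as a simple loop. The node names, image tags, and health check are illustrative assumptions; the `run` helper only prints each command (dry run), and `healthy` is a stub to replace with a real probe.

```shell
#!/bin/sh
# Dry-run sketch of a rolling Blue Green changeover: update one node at
# a time, health-check it, and roll the cluster back on failure.
# Node names, image tags and the health check are illustrative.
set -eu
run() { echo "+ $*"; }
healthy() { true; }   # stub; replace with a real per-node health probe

NODES="node1 node2 node3"
for n in $NODES; do
  run ssh "$n" docker pull myapp:v2
  run ssh "$n" "docker rm -f myapp && docker run -d --name myapp myapp:v2"
  if ! healthy "$n"; then
    # Roll every node back to the previous image.
    for r in $NODES; do
      run ssh "$r" "docker rm -f myapp && docker run -d --name myapp myapp:v1"
    done
    exit 1
  fi
done
```

Because both roll-forward and roll-back are just "remove one container, start another from a known image", each step takes seconds per node, which is what makes the whole rolling process timely.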

Variation

A key benefit of using Docker images in a software delivery pipeline is that both the infrastructure and the application can be included in the container image. One of the core tenets of Java was the promise of "write once, run anywhere". However, since the Java artifact (typically a JAR, WAR or EAR) included only the application, there was always a wide range of variation, depending on the Java runtime and the specific operating environment of the deployment target. With Docker, a developer can bundle the actual infrastructure (i.e., the base OS, middleware, runtime and the application) in the same image. This converged isolation lowers the potential variation at every stage of the delivery pipeline (development, integration and production deployment). If a developer tests a set of Docker images as a service on a laptop, those same services can be exactly the same during integration testing and production deployment. The image (i.e., the converged artifact) is a binary. There should be little or no variation of a service stack at any stage of the pipeline when Docker containers are used as the primary deployment mechanism. Contrast this with environments that have to be built at each stage of the pipeline, where configuration management and release engineering scripts are often used to build out the service. Although most automation mechanisms are far more efficient than a checklist-built service, they still run the risk of higher variation than binary Docker service stacks, and these alternatives can yield wider variances of stability, thereby increasing variation. A Dockerized pipeline approach delivers converged artifacts as binaries that are therefore immutable starting from the commit.
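The converged artifact idea can be illustrated with a minimal Dockerfile; the base image, runtime package, and application paths below are examples, not prescriptions from the whitepaper.

```dockerfile
# Illustrative converged artifact (names and versions are examples):
# base OS, runtime and application travel in one immutable image.

# Base OS layer
FROM ubuntu:14.04

# Runtime layer (middleware would be added the same way)
RUN apt-get update && apt-get install -y openjdk-7-jre-headless

# Application layer
COPY target/myapp.jar /opt/myapp/myapp.jar
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Unlike a bare JAR, the resulting image pins the OS and runtime along with the application, so the binary that passed integration testing is bit-for-bit the one that reaches production.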

