<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Software Engineering at Google</title>
<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"> </script>
<link rel="stylesheet" type="text/css" href="theme/html/html.css">
</head>
<body data-type="book">
<section xmlns="http://www.w3.org/1999/xhtml" data-type="chapter" id="continuous_integration">
<h1>Continuous Integration</h1>
<p class="byline">Written by Rachel Tannenbaum</p>
<p class="byline">Edited by Lisa Carey</p>
<p><em>Continuous Integration</em>, or CI, is generally <a contenteditable="false" data-primary="continuous integration (CI)" data-type="indexterm" id="ix_CI">&nbsp;</a>defined as “a software development practice where members of a team integrate their work frequently [...] Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.”<sup><a data-type="noteref" id="ch01fn236-marker" href="ch23.html#ch01fn236">1</a></sup> Simply put, the fundamental goal of CI is to automatically catch problematic changes as early as possible.</p>
<p>In practice, what does “integrating work frequently” mean for the modern, distributed application? Today’s systems have many moving pieces beyond just the latest versioned code in the repository. In fact, with the recent trend toward microservices, the changes that break an application are less likely to live inside the project’s immediate codebase and more likely to be in loosely coupled microservices on the other side of a network call. Whereas a traditional continuous build tests changes in your binary, an extension of this might test changes to upstream microservices. The dependency is just shifted from your function call stack to an HTTP request or Remote Procedure Call (RPC).</p>
<p>Even further from code dependencies, an application might periodically ingest data or update machine learning models. It might execute on evolving operating systems, runtimes, cloud hosting services, and devices. It might be a feature that sits on top of a growing platform or be the platform that must accommodate a growing feature base. All of these things should be considered dependencies, and we should aim to “continuously integrate” their changes, too. Further complicating things, these changing components are often owned by developers outside our team, organization, or company and deployed on their own schedules.</p>
<p>So, perhaps a better definition for CI in today’s world, particularly when developing at scale, is the following:</p>
<blockquote>
<p><em>Continuous Integration (2)</em>: the continuous assembling and testing of our entire complex and rapidly evolving ecosystem.</p>
</blockquote>
<p>It is natural to conceptualize CI in terms of testing because the two are tightly coupled, and we’ll do so throughout this chapter. In previous chapters, we’ve discussed a comprehensive range of testing, from unit to integration, to larger-scoped systems.</p>
<p>From a testing perspective, CI is a paradigm<a contenteditable="false" data-primary="testing" data-secondary="continuous integration and" data-type="indexterm" id="id-58SMH4sO">&nbsp;</a> to inform the following:</p>
<ul>
<li>
<p><em>Which</em> tests to run <em>when</em> in the development/release workflow, as code (and other) changes are continuously integrated into it</p>
</li>
<li>
<p><em>How</em> to compose the system under test (SUT) at each point, balancing concerns like fidelity and setup cost</p>
</li>
</ul>
<p>For example, which tests do we run on presubmit, which do we save for post-submit, and which do we save even later until our staging deploy? Accordingly, how do we represent our SUT at each of these points? As you might imagine, requirements for a presubmit SUT can differ significantly from those of a staging environment under test. For example, it can be dangerous for an application built from code pending review on presubmit to talk to real production backends (think security and quota vulnerabilities), whereas this is often acceptable for a staging environment.</p>
<p>And <em>why</em> should we try to optimize this often-delicate balance of testing “the right things” at “the right times” with CI? Plenty of prior work has already established the benefits of CI to the engineering organization and the overall business alike.<sup><a data-type="noteref" id="ch01fn237-marker" href="ch23.html#ch01fn237">2</a></sup> These outcomes are driven by a powerful guarantee: verifiable—and timely—proof that the application is good to progress to the next stage. We don’t need to just hope that all contributors are very careful, responsible, and thorough; we can instead guarantee the working state of our application at various points from build throughout release, thereby improving confidence and quality in our products and productivity of our teams.</p>
<p>In the rest of this chapter, we’ll introduce some key CI concepts, best practices, and challenges, before looking at how we manage CI at Google with an introduction to our continuous build tool, TAP, and an in-depth study of one application’s CI <span class="keep-together">transformation.</span></p>
<section data-type="sect1" id="ci_concepts">
<h1>CI Concepts</h1>
<p>First, let’s begin by looking at<a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="core concepts" data-type="indexterm" id="ix_CIcpt">&nbsp;</a> some core concepts of CI.</p>
<section data-type="sect2" id="fast_feedback_loops">
<h2>Fast Feedback Loops</h2>
<p>As discussed in <a data-type="xref" href="ch11.html#testing_overview">Testing Overview</a>, the cost of a bug <a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="core concepts" data-tertiary="fast feedback loops" data-type="indexterm" id="ix_CIcptfdbk">&nbsp;</a>grows almost <a contenteditable="false" data-primary="feedback" data-secondary="fast feedback loops in CI" data-type="indexterm" id="ix_fdbkCI">&nbsp;</a>exponentially the later it is caught. <a data-type="xref" href="ch23.html#life_of_a_code_change">Figure 23-1</a> shows all the places a problematic code change might be caught in its lifetime.</p>
<figure id="life_of_a_code_change"><img alt="Life of a code change" src="images/seag_2301.png">
<figcaption><span class="label">Figure 23-1. </span>Life of a code change</figcaption>
</figure>
<p>In general, as issues progress to the “right” in our diagram, they become costlier for the following reasons:</p>
<ul>
<li>
<p>They must be triaged by an engineer who is likely unfamiliar with the problematic code change.</p>
</li>
<li>
<p>They require more work for the code change author to recollect and investigate the change.</p>
</li>
<li>
<p>They negatively affect others, whether engineers in their work or ultimately the end user.</p>
</li>
</ul>
<p>To minimize the cost of bugs, CI encourages us to use <em>fast feedback loops.</em><sup><a data-type="noteref" id="ch01fn238-marker" href="ch23.html#ch01fn238">3</a></sup> Each time we integrate a code (or other) change into a testing scenario and observe the results, we get a new <em>feedback loop</em>. Feedback can take many forms; following are some common ones (in order of fastest to slowest):</p>
<ul>
<li>
<p>The edit-compile-debug loop of local development</p>
</li>
<li>
<p>Automated test results to a code change author on presubmit</p>
</li>
<li>
<p>An integration error between changes to two projects, detected after both are submitted and tested together (i.e., on post-submit)</p>
</li>
<li>
<p>An incompatibility between our project and an upstream microservice dependency, detected by a QA tester in our staging environment, when the upstream service deploys its latest changes</p>
</li>
<li>
<p>Bug reports by internal users who are opted in to a feature before external users</p>
</li>
<li>
<p>Bug or outage reports by external users or the press</p>
</li>
</ul>
<p><em>Canarying</em>—or deploying to a small percentage<a contenteditable="false" data-primary="canarying" data-type="indexterm" id="id-w0SlhwuptaH0">&nbsp;</a> of production first—can help minimize issues that do make it to production, with a subset-of-production initial feedback loop preceding all-of-production. However, canarying can cause problems, too, particularly around compatibility between deployments when multiple versions are deployed at once. This is sometimes known as <em>version skew</em>, a state of a distributed system in which it contains multiple incompatible versions of code, data, and/or configuration. Like many issues we look at in this book, version skew is another example of a challenging problem that can arise when trying to develop and manage software over time.</p>
<p><em>Experiments</em> and <em>feature flags</em> are extremely powerful feedback loops.<a contenteditable="false" data-primary="experiments and feature flags" data-type="indexterm" id="id-WqSrtRU0tRHd">&nbsp;</a> They reduce deployment risk by isolating changes within modular components that can be dynamically toggled in production.<a contenteditable="false" data-primary="feature flags" data-type="indexterm" id="id-JoSNcqUet9HD">&nbsp;</a> Relying heavily on feature-flag-guarding is a common paradigm for Continuous Delivery, which we explore further in <a data-type="xref" href="ch24.html#continuous_delivery-id00035">Continuous Delivery</a>.</p>
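<p>To make the idea concrete, here is a minimal sketch (in Python, with a hypothetical in-memory flag store rather than any real flag service) of how a change guarded by a feature flag can be toggled dynamically without a new deployment:</p>
<pre data-type="programlisting" data-code-language="python"># A toy flag store; real systems read flag values dynamically at runtime.
class FlagStore:
    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name, default=False):
        return self._flags.get(name, default)

    def set_flag(self, name, value):
        self._flags[name] = value


def render_checkout_page(flags):
    # Old and new code paths coexist; the flag decides which one runs.
    if flags.is_enabled("new_checkout_flow"):
        return "new checkout flow"
    return "legacy checkout flow"


flags = FlagStore({"new_checkout_flow": False})
assert render_checkout_page(flags) == "legacy checkout flow"

flags.set_flag("new_checkout_flow", True)   # dynamic toggle, no redeploy
assert render_checkout_page(flags) == "new checkout flow"</pre>
<p>Because rolling the change back is a flag flip rather than a binary rollout, the feedback loop from a bad change to its mitigation is much shorter.</p>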
<section data-type="sect3" id="accessible_and_actionable_feedback">
<h3>Accessible and actionable feedback</h3>
<p>It’s also important that feedback from CI be widely accessible. In addition to our open culture around code visibility, we feel similarly about our test reporting. We have a unified test reporting system in which anyone can easily look up a build or test run, including all logs (excluding user Personally Identifiable Information [PII]), whether for an individual engineer’s local run or on an automated development or staging build.</p>
<p>Along with logs, our test reporting system provides a detailed history of when build or test targets began to fail, including audits of where the build was cut at each run, where it was run, and by whom. We also have a system for flake classification, which uses statistics to classify flakes at a Google-wide level, so engineers don’t need to figure this out for themselves to determine whether their change broke another project’s test (if the test is flaky: probably not).</p>
<p>Visibility into test history empowers engineers to share and collaborate on feedback, an essential requirement for disparate teams to diagnose and learn from integration failures between their systems. Similarly, bugs (e.g., tickets or issues) at Google are open with full comment history for all to see and learn from (with the exception, again, of customer PII).</p>
<p>Finally, any feedback from CI tests should not just be accessible but actionable—easy to use to find and fix problems. We’ll look at an example of improving user-unfriendly feedback in our case study later in this chapter. By improving test output readability, you automate the understanding of feedback.<a contenteditable="false" data-primary="feedback" data-secondary="fast feedback loops in CI" data-startref="ix_fdbkCI" data-type="indexterm" id="id-xbSkHWfas7tYH4">&nbsp;</a><a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="core concepts" data-startref="ix_CIcptfdbk" data-tertiary="fast feedback loops" data-type="indexterm" id="id-2ESWhkf0sAtqHw">&nbsp;</a></p>
</section>
</section>
<section data-type="sect2" id="automation">
<h2>Automation</h2>
<p>It’s well known that <a href="https://oreil.ly/UafCh">automating development-related tasks saves engineering resources</a> in the long run.<a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="core concepts" data-tertiary="automation" data-type="indexterm" id="ix_CIcptauto">&nbsp;</a><a contenteditable="false" data-primary="automation" data-secondary="in continous integration" data-type="indexterm" id="ix_autoCI">&nbsp;</a> Intuitively, because we automate processes by defining them as code, peer review when changes are checked in will reduce the probability of error. Of course, automated processes, like any other software, will have bugs; but when implemented effectively, they are still faster, easier, and more reliable than if they were attempted manually by engineers.</p>
<p>CI, specifically, automates the <em>build</em> and <em>release</em> processes, with a Continuous Build and Continuous Delivery. Continuous testing is applied throughout, which we’ll look at in the next section.</p>
<section data-type="sect3" id="continuous_build">
<h3>Continuous Build</h3>
<p>The <em>Continuous Build</em> (CB) integrates the latest code changes at head<sup><a data-type="noteref" id="ch01fn239-marker" href="ch23.html#ch01fn239">4</a></sup> and runs an automated build and test. <a contenteditable="false" data-primary="continuous build (CB)" data-type="indexterm" id="id-BvSytBhDcAc7Hz">&nbsp;</a>Because the CB runs tests as well as building code, “breaking the build” or “failing the build” includes breaking tests as well as breaking <span class="keep-together">compilation.</span></p>
<p>After a change is submitted, the CB should run all relevant tests. If a change passes all tests, the CB marks it passing or “green,” as it is often displayed in user interfaces (UIs). This process effectively introduces two different versions of head in the repository: <em>true head</em>, or the latest change that was committed, and <em>green head,</em> or the latest change the CB has verified. Engineers are able to sync to either version in their local development. It’s common to sync against green head to work with a stable environment, verified by the CB, while coding a change but have a process that requires changes to be synced to true head before submission.</p>
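<p>The following sketch (with hypothetical repository and build interfaces, not TAP itself) illustrates the distinction: true head always advances as changes are submitted, while green head advances only when the continuous build verifies a change:</p>
<pre data-type="programlisting" data-code-language="python">import time

class ContinuousBuild:
    """Minimal sketch of a CB loop tracking true head versus green head."""

    def __init__(self, repo, builder):
        self.repo = repo                  # hypothetical repository interface
        self.builder = builder            # hypothetical build/test interface
        self.green_head = None            # latest change verified by the CB

    def run_forever(self, poll_seconds=60):
        while True:
            true_head = self.repo.latest_change()     # always moves forward
            if true_head != self.green_head:
                # "Breaking the build" includes failing tests, not just compilation.
                if self.builder.build_and_test(true_head):
                    self.green_head = true_head       # advance green head
                else:
                    self.notify_build_cop(true_head)  # broken: green head stays put
            time.sleep(poll_seconds)

    def notify_build_cop(self, change):
        ...  # file a bug, notify the on-call, and so on</pre>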
</section>
<section data-type="sect3" id="continuous_delivery">
<h3>Continuous Delivery</h3>
<p>The first step in Continuous Delivery (CD; discussed more fully in <a data-type="xref" href="ch24.html#continuous_delivery-id00035">Continuous Delivery</a>) is <em>release automation</em>, which continuously assembles the latest code and configuration from head into release candidates. <a contenteditable="false" data-primary="continuous delivery (CD)" data-type="indexterm" id="id-w0SrtAh5fqc2H3">&nbsp;</a>At Google, most teams cut these at green, as opposed to true, head.</p>
<blockquote>
<p><em>Release candidate</em> (RC): A cohesive, deployable unit created by an automated process,<sup><a data-type="noteref" id="ch01fn240-marker" href="ch23.html#ch01fn240">5</a></sup> assembled of code, configuration, and other dependencies that have passed the continuous build.</p>
</blockquote>
<p>Note that we include configuration in release candidates—this is extremely important, even though it can slightly vary between environments as the candidate is promoted. We’re not necessarily advocating you compile configuration into your binaries—actually, we would recommend dynamic configuration, such as experiments or feature flags, for many scenarios.<sup><a data-type="noteref" id="ch01fn241-marker" href="ch23.html#ch01fn241">6</a></sup></p>
<p>Rather, we are saying that any static configuration you <em>do</em> have should be promoted as part of the release candidate so that it can undergo testing along with its corresponding code. Remember, a large percentage of production bugs are caused by “silly” configuration problems, so it’s just as important to test your configuration as it is your code (and to test it along <em>with</em> the same code that will use it). Version skew is often caught in this release-candidate-promotion process. This assumes, of course, that your static configuration is in version control—at Google, static configuration is in version control along with the code, and hence goes through the same code review process.</p>
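<p>As a rough illustration, a release candidate can be modeled as an immutable record that pins the binaries <em>and</em> the static configuration at the same green cut; the field names and helper calls below are hypothetical:</p>
<pre data-type="programlisting" data-code-language="python">import dataclasses

@dataclasses.dataclass(frozen=True)
class ReleaseCandidate:
    label: str            # e.g., "rc-20240601.1" (illustrative)
    cut_revision: str     # the green-head revision this RC was cut at
    binaries: dict        # artifact name mapped to an immutable digest
    static_config: dict   # config files pinned at the same revision

def cut_release_candidate(repo, build_system, green_head):
    # Code and static config are cut together, so they are tested and
    # promoted through environments as one unit.
    binaries = build_system.build(green_head)
    config = repo.read_config_files(green_head)
    return ReleaseCandidate(
        label="rc-" + green_head,
        cut_revision=green_head,
        binaries=binaries,
        static_config=config,
    )</pre>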
<p>We then define CD as follows:</p>
<blockquote>
<p><em>Continuous Delivery</em> (CD): a continuous assembling of release candidates, followed by the promotion and testing of those candidates throughout a series of environments—sometimes reaching production and sometimes not.</p>
</blockquote>
<p>The promotion and deployment process often depends on the team. We’ll show how our case study navigated this process.</p>
<p>For teams at Google that want continuous feedback from new changes in production (e.g., Continuous Deployment), it’s usually infeasible to continuously push entire binaries, which are often quite large, on green. For that reason, doing a <em>selective</em> Continuous Deployment, through experiments or feature flags, is a common strategy.<sup><a data-type="noteref" id="ch01fn242-marker" href="ch23.html#ch01fn242">7</a></sup></p>
<p>As an RC progresses through environments, its artifacts (e.g., binaries, containers) ideally should not be recompiled or rebuilt. Using containers such as Docker helps enforce consistency of an RC between environments, from local development onward. Similarly, using orchestration tools like Kubernetes (or in our case, usually <a href="https://oreil.ly/89yPv">Borg</a>), helps enforce consistency between deployments. By enforcing consistency of our release and deployment between environments, we achieve higher-fidelity earlier testing and fewer surprises in production.<a contenteditable="false" data-primary="automation" data-secondary="in continous integration" data-startref="ix_autoCI" data-type="indexterm" id="id-eaS1hGsbfocAHp">&nbsp;</a><a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="core concepts" data-startref="ix_CIcptauto" data-tertiary="automation" data-type="indexterm" id="id-0KSkt1snfpczH2">&nbsp;</a></p>
</section>
</section>
<section data-type="sect2" id="continuous_testing">
<h2>Continuous Testing</h2>
<p>Let’s look at<a contenteditable="false" data-primary="testing" data-secondary="continuous testing in CI" data-type="indexterm" id="ix_tstCI">&nbsp;</a> how CB and CD fit in as we apply <a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="core concepts" data-tertiary="continuous testing" data-type="indexterm" id="ix_CIcptCT">&nbsp;</a>Continuous Testing (CT) to a code change throughout its lifetime, as shown in <a data-type="xref" href="ch23.html#life_of_a_code_change_with_cb_and_cd">Figure 23-2</a>.</p>
<figure id="life_of_a_code_change_with_cb_and_cd"><img alt="Life of a code change with CB and CD" src="images/seag_2302.png">
<figcaption><span class="label">Figure 23-2. </span>Life of a code change with CB and CD</figcaption>
</figure>
<p>The rightward arrow shows the progression of a single code change from local development to production. Again, one of our key objectives in CI is determining <em>what</em> to test <em>when</em> in this progression. Later in this chapter, we’ll introduce the different testing phases and provide some considerations for what to test in presubmit versus post-submit, and in the RC and beyond. We’ll show that, as we shift to the right, the code change is subjected to progressively larger-scoped automated tests.</p>
<section data-type="sect3" id="why_presubmit_isnapostrophet_enough">
<h3>Why presubmit isn’t enough</h3>
<p>With the objective to catch problematic changes as soon as possible and the ability to run automated tests on presubmit, you might <a contenteditable="false" data-primary="presubmits" data-secondary="continuous testing and" data-type="indexterm" id="id-BvSBHBhJf3f7Hz">&nbsp;</a>be wondering: why not just run all tests on presubmit?</p>
<p>The main reason is that it’s too expensive. Engineer productivity is extremely valuable, and waiting a long time to run every test during code submission can be severely disruptive. Further, by removing the constraint for presubmits to be exhaustive, a lot of efficiency gains can be made if tests pass far more frequently than they fail. For example, the tests that are run can be restricted to certain scopes, or selected based on a model that predicts their likelihood of detecting a failure.</p>
<p>Similarly, it’s expensive for engineers to be blocked on presubmit by failures arising from instability or flakiness that has nothing to do with their code change.</p>
<p>Another reason is that during the time we run presubmit tests to confirm that a change is safe, the underlying repository might have changed in a manner that is incompatible with the changes being tested. That is, it is possible for two changes that touch completely different files to cause a test to fail. We call this a mid-air collision, and though generally rare, it happens most days at our scale. CI systems for smaller repositories or projects can avoid this problem by serializing submits so that there is no difference between what is about to enter and what just did.</p>
</section>
<section data-type="sect3" id="presubmit_versus_postsubmit">
<h3>Presubmit versus post-submit</h3>
<p>So, which tests <em>should</em> be run on presubmit? <a contenteditable="false" data-primary="presubmits" data-secondary="versus postsubmit" data-type="indexterm" id="id-47S5hXhGCafQH9">&nbsp;</a>Our general rule of thumb is: only fast, reliable ones. You can accept some loss of coverage on presubmit, but that means you need to catch any issues that slip by on post-submit, and accept some number of rollbacks. On post-submit, you can accept longer times and some instability, as long as you have proper mechanisms to deal with it.</p>
<div data-type="note" id="id-wBsrtyC5faH0"><h6>Note</h6>
<p>We’ll show how TAP and our case study handle failure management in <a data-type="xref" href="ch23.html#ci_at_google">CI at Google</a>.</p>
</div>
<p>We don’t want to waste valuable engineer productivity by waiting too long for slow tests or for too many tests—we typically limit presubmit tests to just those for the project where the change is happening. We also run tests concurrently, so there is a resource decision to consider as well. Finally, we don’t want to run unreliable tests on presubmit, because the cost of having many engineers affected by them, debugging the same problem that is not related to their code change, is too high.</p>
<p>Most teams at Google run their small tests (like unit tests) on presubmit<sup><a data-type="noteref" id="ch01fn243-marker" href="ch23.html#ch01fn243">8</a></sup>—these are the obvious ones to run as they tend to be the fastest and most reliable. Whether and how to run larger-scoped tests on presubmit is the more interesting question, and this varies by team. For teams that do want to run them, hermetic testing is a proven approach to reducing their inherent instability. Another option is to allow large-scoped tests to be unreliable on presubmit but disable them aggressively when they start failing.</p>
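<p>The rule of thumb above can be sketched as a simple scheduling policy; the <code>Test</code> record and the selection criteria here are illustrative, not a real configuration format:</p>
<pre data-type="programlisting" data-code-language="python">import dataclasses

@dataclasses.dataclass
class Test:
    name: str
    project: str
    size: str       # "small", "medium", or "large"
    flaky: bool

def split_tests(all_tests, changed_project):
    """Fast, reliable, project-local tests run on presubmit; the rest run post-submit."""
    presubmit, postsubmit = [], []
    for test in all_tests:
        small = test.size == "small"
        local = test.project == changed_project
        if small and local and not test.flaky:
            presubmit.append(test)      # fast and reliable: block submission on these
        else:
            postsubmit.append(test)     # larger or less stable: run after submit,
                                        # with rollbacks as the safety net
    return presubmit, postsubmit</pre>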
</section>
<section data-type="sect3" id="release_candidate_testing">
<h3>Release candidate testing</h3>
<p>After a code change has passed the CB (this might take <a contenteditable="false" data-primary="release candidate testing" data-type="indexterm" id="id-47SvHXh1IafQH9">&nbsp;</a>multiple cycles if there were failures), it will soon encounter CD and be included in a pending release candidate.</p>
<p>As CD builds RCs, it will run larger tests against the entire candidate. We test a release candidate by promoting it through a series of test environments and testing it at each deployment. This can include a combination of sandboxed, <span class="keep-together">temporary</span> environments and shared test environments, like dev or staging. It’s common to include some manual QA testing of the RC in shared environments, too.</p>
<p>There are several reasons why it’s important to run a comprehensive, automated test suite against an RC, even if it is the same suite that CB just ran against the code on post-submit (assuming the CD cuts at green):</p>
<dl>
<dt>As a sanity check</dt>
<dd>We double check that nothing strange happened when the code was cut and recompiled in the RC.</dd>
<dt>For auditability</dt>
<dd>If an engineer wants to check an RC’s test results, they are readily available and associated with the RC, so they don’t need to dig through CB logs to find them.</dd>
<dt>To allow for cherry picks</dt>
<dd>If you apply a cherry-pick fix to an RC, your source code has now diverged from the latest cut tested by the CB.</dd>
<dt>For emergency pushes</dt>
<dd>In that case, CD can cut from true head and run the minimal set of tests necessary to feel confident about an emergency push, without waiting for the full CB to pass.</dd>
</dl>
</section>
<section data-type="sect3" id="production_testing">
<h3>Production testing</h3>
<p>Our continuous, automated testing process goes all the way to the final deployed environment: production.<a contenteditable="false" data-primary="production" data-secondary="testing in" data-type="indexterm" id="id-WqSKHoh4u9fMHw">&nbsp;</a> We should run the same suite of tests against production (sometimes called <em>probers</em>) that we did against the release candidate earlier on to verify: 1) the working state of production, according to our tests, and 2) the relevance of our tests, according to production.</p>
<p>Continuous testing at each step of the application’s progression, each with its own trade-offs, serves as a reminder of the value in a “defense in depth” approach to catching bugs—it isn’t just one bit of technology or policy that we rely upon for quality and stability, it’s many testing approaches combined.<a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="core concepts" data-startref="ix_CIcptCT" data-tertiary="continuous testing" data-type="indexterm" id="id-JoSEHQtpubfdHP">&nbsp;</a><a contenteditable="false" data-primary="testing" data-secondary="continuous testing in CI" data-startref="ix_tstCI" data-type="indexterm" id="id-mAS5hGtnudf3HN">&nbsp;</a><a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="core concepts" data-startref="ix_CIcpt" data-type="indexterm" id="id-xbSBtqtyuafYH4">&nbsp;</a></p>
<aside data-type="sidebar" id="ci_is_alerting">
<h5>CI Is Alerting</h5>
<p class="byline">Titus Winters</p>
<p>As with responsibly running production<a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="alerting" data-type="indexterm" id="ix_CIalrt">&nbsp;</a> systems, sustainably maintaining software systems also requires continual automated monitoring. Just as we use a monitoring and alerting system to understand how production systems respond to change, CI reveals how our software is responding to changes in its environment. Whereas production monitoring relies on passive alerts and active probers of running systems, CI uses unit and integration tests to detect changes to the software before it is deployed. Drawing comparisons between these two domains lets us apply knowledge from one to the other.</p>
<p>Both CI and alerting serve the same overall purpose in the developer workflow—to identify problems as quickly as reasonably possible. CI emphasizes the early side of the developer workflow, and catches problems by surfacing test failures. Alerting focuses on the late end of the same workflow and catches problems by monitoring metrics and reporting when they exceed some threshold. Both are forms of “identify problems automatically, as soon as possible.”</p>
<p>A well-managed alerting system helps to ensure that your Service-Level Objectives (SLOs) are being met. A good CI system helps to ensure that your build is in good shape—the code compiles, tests pass, and you could deploy a new release if you needed to. Best-practice policies in both spaces focus a lot on ideas of fidelity and actionable alerting: tests should fail only when the important underlying invariant is violated, rather than because the test is brittle or flaky. A flaky test that fails every few CI runs is just as much of a problem as a spurious alert going off every few minutes and generating a page for the on-call. If it isn’t actionable, it shouldn’t be alerting. If it isn’t actually violating the invariants of the SUT, it shouldn’t be a test failure.</p>
<p>CI and alerting share an underlying conceptual framework. For instance, there’s a similar relationship between localized signals (unit tests, monitoring of isolated statistics/cause-based alerting) and cross-dependency signals (integration and release tests, black-box probing). The highest fidelity indicators of whether an aggregate system is working are the end-to-end signals, but we pay for that fidelity in flakiness, increasing resource costs, and difficulty in debugging root causes.</p>
<p>Similarly, we see an underlying connection in the failure modes for both domains. Brittle cause-based alerts fire based on crossing an arbitrary threshold (say, retries in the past hour), without there necessarily being a fundamental connection between that threshold and system health as seen by an end user. Brittle tests fail when an arbitrary test requirement or invariant is violated, without there necessarily being a fundamental connection between that invariant and the correctness of the software being tested. In most cases these are easy to write, and potentially helpful in debugging a larger issue. In both cases they are rough proxies for overall health/correctness, failing to capture the holistic behavior. If you don’t have an easy end-to-end probe, but you do make it easy to collect some aggregate statistics, teams will write threshold alerts based on arbitrary statistics. If you don’t have a high-level way to say, “Fail the test if the decoded image isn’t roughly the same as this decoded image,” teams will instead build tests that assert that the byte streams are identical.</p>
<p>Cause-based alerts and brittle tests can still have value; they just aren’t the ideal way to identify potential problems in an alerting scenario. In the event of an actual failure, having more debug detail available can be useful. When SREs are debugging an outage, it can be useful to have information of the form, “An hour ago, users started experiencing more failed requests. Around the same time, the number of retries started ticking up. Let’s start investigating there.” Similarly, brittle tests can still provide extra debugging information: “The image rendering pipeline started spitting out garbage. One of the unit tests suggests that we’re getting different bytes back from the JPEG compressor. Let’s start investigating there.”</p>
<p>Although monitoring and alerting are considered a part of the SRE/production management domain, where the insight of “Error Budgets” is well understood,<sup><a data-type="noteref" id="ch01fn244-marker" href="ch23.html#ch01fn244">9</a></sup> CI comes from a perspective that still tends to be focused on absolutes. Framing CI as the “left shift” of alerting starts to suggest ways to reason about those policies and propose better best practices:</p>
<ul>
<li>
<p>Having a 100% green rate on CI, just like having 100% uptime for a production service, is awfully expensive. If that is <em>actually</em> your goal, one of the biggest problems is going to be a race condition between testing and submission.</p>
</li>
<li>
<p>Treating every alert as an equal cause for alarm is not generally the correct approach. If an alert fires in production but the service isn’t actually impacted, silencing the alert is the correct choice. The same is true for test failures: until our CI systems learn how to say, “This test is known to be failing for irrelevant reasons,” we should probably be more liberal in accepting changes that disable a failed test. Not all test failures are indicative of upcoming production issues.</p>
</li>
<li>
<p>Policies that say, “Nobody can commit if our latest CI results aren’t green” are probably misguided. If CI reports an issue, such failures should definitely be <em>investigated</em> before letting people commit or compound the issue. But if the root cause is well understood and clearly would not affect production, blocking commits is unreasonable.</p>
</li>
</ul>
<p>This “CI is alerting” insight is new, and we’re still figuring out how to fully draw parallels. Given the higher stakes involved, it’s unsurprising that SRE has put a lot of thought into best practices surrounding monitoring and alerting, whereas CI has been viewed as more of a luxury feature.<sup><a data-type="noteref" id="ch01fn245-marker" href="ch23.html#ch01fn245">10</a></sup> For the next few years, the task in software engineering will be to see where existing SRE practice can be reconceptualized in a CI context to help reformulate the testing and CI landscape—and perhaps where best practices in testing can help clarify goals and policies on monitoring and alerting.</p>
</aside>
</section>
</section>
<section data-type="sect2" id="ci_challenges">
<h2>CI Challenges</h2>
<p>We’ve discussed some of the established best <a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="alerting" data-tertiary="CI challenges" data-type="indexterm" id="id-dzSxHlhNCAHv">&nbsp;</a>practices in CI and have introduced some of the challenges involved, such as the potential disruption to engineer productivity of unstable, slow, conflicting, or simply too many tests at presubmit. Some common additional challenges when implementing CI include the following:</p>
<ul>
<li>
<p><em>Presubmit optimization</em>, including <em>which</em> tests to run at presubmit time given the potential issues weve already described, and <em>how</em> to run them.<a contenteditable="false" data-primary="presubmits" data-secondary="optimization of" data-type="indexterm" id="id-w0SxcNHWH3tyC4HV">&nbsp;</a></p>
</li>
<li>
<p><em>Culprit finding</em> and <em>failure isolation</em>: Which <a contenteditable="false" data-primary="culprit finding and failure isolation" data-type="indexterm" id="id-w0SrtNH9h3tyC4HV">&nbsp;</a>code or<a contenteditable="false" data-primary="failures" data-secondary="culprit finding and failure isolation" data-type="indexterm" id="id-47SocdHrhRt1CPHd">&nbsp;</a> other change caused the problem, and which system did it happen in? “Integrating upstream microservices” is one approach to failure isolation in a distributed architecture, when you want to figure out whether a problem originated in your own servers or a backend. In this approach, you stage combinations of your stable servers along with upstream microservices’ new servers. (Thus, you are integrating the microservices’ latest changes into your testing.) This approach can be particularly challenging due to version skew: not only are these environments often incompatible, but you’re also likely to encounter false positives—problems that occur in a particular staged combination that wouldn’t actually be spotted in production.</p>
</li>
<li>
<p><em>Resource constraints</em>: Tests need resources to run, and large tests can be very expensive.<a contenteditable="false" data-primary="resource constraints, CI and" data-type="indexterm" id="id-w0SlhNHpt3tyC4HV">&nbsp;</a> In addition, the cost for the infrastructure for inserting automated testing throughout the process can be considerable.</p>
</li>
</ul>
<p>There’s also the challenge of <em>failure management—</em>what to do when tests fail. Although smaller problems can usually be fixed quickly, many of our teams find that it’s extremely difficult to have a consistently green test suite when large end-to-end tests are involved. They inherently become broken or flaky and are difficult to debug; there needs to be a mechanism to temporarily disable and keep track of them so that the release can go on. A common technique at Google is to use bug “hotlists” filed by an on-call or release engineer and triaged to the appropriate team. Even better is when these bugs can be automatically generated and filed—some of our larger products, like Google Web Server (GWS) and Google Assistant, do this. These hotlists should be curated to make sure any release-blocking bugs are fixed immediately. Nonrelease blockers should be fixed, too; they are less urgent, but should also be prioritized so the test suite remains useful and is not simply a growing pile of disabled, old tests. Often, the problems caught by end-to-end test failures are actually with tests rather than code.</p>
<p>Flaky tests pose another problem to this process.<a contenteditable="false" data-primary="flaky tests" data-type="indexterm" id="id-BvSBHQfzCoHl">&nbsp;</a> They erode confidence similar to a broken test, but finding a change to roll back is often more difficult because the failure won’t happen all the time. Some teams rely on a tool to remove such flaky tests from presubmit temporarily while the flakiness is investigated and fixed. This keeps confidence high while allowing for more time to fix the problem.</p>
<p><em>Test instability</em> is another significant challenge that we’ve already looked at in the context of presubmits.<a contenteditable="false" data-primary="test instability" data-type="indexterm" id="id-47S5h6CGCJHv">&nbsp;</a> One tactic for dealing with this is to allow multiple attempts of the test to run. This is a common test configuration setting that teams use. Also, within test code, retries can be introduced at various points of specificity.</p>
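<p>At the test-code level, “allow multiple attempts” often looks like a small retry wrapper; the decorator below is an illustrative sketch, not the configuration syntax of any particular test runner:</p>
<pre data-type="programlisting" data-code-language="python">import functools

def allow_retries(attempts=3):
    """Rerun an unstable test up to `attempts` times; pass on the first success."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(attempts):
                try:
                    return test_fn(*args, **kwargs)   # success: stop retrying
                except AssertionError as error:
                    last_error = error                # flake or real failure
            raise last_error                          # failed every attempt
        return wrapper
    return decorator

@allow_retries(attempts=3)
def test_fetch_user_archive():
    ...  # a larger-scoped test that occasionally flakes</pre>
<p>Retries trade latency and resources for stability and can mask genuinely intermittent bugs, so they are best treated as a mitigation rather than a fix.</p>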
<p>Another approach that helps with test instability (and other CI challenges) is hermetic testing, which we’ll look at in the next section.</p>
</section>
<section data-type="sect2" id="hermetic_testing">
<h2>Hermetic Testing</h2>
<p>Because talking to a live backend is unreliable, we<a contenteditable="false" data-primary="hermetic testing" data-type="indexterm" id="id-nBSYHVhxIBHX">&nbsp;</a> often use <a href="https://oreil.ly/-PbRM">hermetic backends</a> for larger-scoped tests.<a contenteditable="false" data-primary="testing" data-secondary="hermetic" data-type="indexterm" id="id-BvSytBhpIoHl">&nbsp;</a><a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="alerting" data-tertiary="hermetic testing" data-type="indexterm" id="id-w0SxcAhkIaH0">&nbsp;</a> This is particularly useful when we want to run these tests on presubmit, when stability is of utmost importance. In <a data-type="xref" href="ch11.html#testing_overview">Testing Overview</a>, we introduced the concept of hermetic tests:</p>
<blockquote>
<p><em>Hermetic tests</em>: tests run against a test environment (i.e., application servers and resources) that is entirely self-contained (i.e., no external dependencies like production <span class="keep-together">backends</span>).</p>
</blockquote>
<p>Hermetic tests have two important properties: greater determinism (i.e., stability) and isolation. Hermetic servers are still prone to some sources of nondeterminism, like system time, random number generation, and race conditions. But, what goes into the test doesn’t change based on outside dependencies, so when you run a test twice with the same application and test code, you should get the same results. If a hermetic test fails, you know that it’s due to a change in your application code or tests (with a minor caveat: they can also fail due to a restructuring of your hermetic test environment, but this should not change very often). For this reason, when CI systems rerun tests hours or days later to provide additional signals, hermeticity makes test failures easier to narrow down.</p>
<p>The other important property, isolation, means that problems in production should not affect these tests. We generally run these tests all on the same machine as well, so we don’t have to worry about network connectivity issues. The reverse also holds: problems caused by running hermetic tests should not affect production.</p>
<p>Hermetic test success should not depend on the user running the test. This allows people to reproduce tests run by the CI system and allows people (e.g., library developers) to run tests owned by other teams.</p>
<p>One type of hermetic backend is a fake.<a contenteditable="false" data-primary="faking" data-secondary="fake hermetic backend" data-type="indexterm" id="id-WqSKHOIzIRHd">&nbsp;</a> As discussed in <a data-type="xref" href="ch13.html#test_doubles">Test Doubles</a>, these can be cheaper than running a real backend, but they take work to maintain and have limited fidelity.</p>
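<p>Here is a minimal sketch of what this looks like in test code, using an illustrative in-memory fake in place of a production backend (all class and method names here are hypothetical):</p>
<pre data-type="programlisting" data-code-language="python">class FakeArchiveStore:
    """In-memory stand-in for a real storage backend (see Test Doubles)."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]


class ArchiveService:
    def __init__(self, store):
        self.store = store            # injected dependency: fake in tests, real in prod

    def archive(self, user, data):
        key = "archive/" + user
        self.store.put(key, data)
        return key


def test_archive_roundtrip():
    # Entirely self-contained: no network calls, no production backends,
    # so two runs with the same code produce the same result.
    service = ArchiveService(FakeArchiveStore())
    key = service.archive("user1", b"photos.zip")
    assert service.store.get(key) == b"photos.zip"</pre>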
<p>The cleanest option to achieve a presubmit-worthy integration test is with a fully hermetic setup—that is, starting up the entire stack sandboxed<sup><a data-type="noteref" id="ch01fn246-marker" href="ch23.html#ch01fn246">11</a></sup>—and Google provides out-of-the-box sandbox configurations for popular components, like databases, to make it easier.<a contenteditable="false" data-primary="sandboxing" data-secondary="hermetic testing and" data-type="indexterm" id="id-mAS5hPuOIRHy">&nbsp;</a> This is more feasible for smaller applications with fewer components, but there are exceptions at Google, even one (by DisplayAds) that starts about four hundred servers from scratch on every presubmit as well as continuously on post-submit. Since the time that system was created, though, record/replay has emerged as a more popular paradigm for larger systems and tends to be cheaper than starting up a large sandboxed stack.</p>
<p>Record/replay (see <a data-type="xref" href="ch14.html#larger_testing">Larger Testing</a>) systems <a contenteditable="false" data-primary="record/replay systems" data-type="indexterm" id="id-xbSXhQUYIpHX">&nbsp;</a>record live backend responses, cache them, and replay them in a hermetic test environment. Record/replay is a powerful tool for reducing test instability, but one<a contenteditable="false" data-primary="brittle tests" data-secondary="record/replay systems causing" data-type="indexterm" id="id-2ESAtaU9IvHm">&nbsp;</a> downside is that it leads to brittle tests: it’s difficult to strike a balance between the following:</p>
<dl>
<dt>False positives</dt>
<dd>The test passes when it probably shouldn’t have because we are hitting the cache too much and missing problems that would surface when capturing a new response.</dd>
<dt>False negatives</dt>
<dd>The test fails when it probably shouldn’t have because we are hitting the cache too little. This requires responses to be updated, which can take a long time and lead to test failures that must be fixed, many of which might not be actual problems. This process is often submit-blocking, which is not ideal.</dd>
</dl>
<p>Ideally, a record/replay system should detect only problematic changes and cache-miss only when a request has changed in a meaningful way. In the event that that change causes a problem, the code change author would rerun the test with an updated response, see that the test is still failing, and thereby be alerted to the problem. In practice, knowing when a request has changed in a meaningful way can be incredibly difficult in a large and ever-changing system.</p>
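<p>The sketch below shows the core of a record/replay cache and where that difficulty lives: the normalization step decides which request differences are “meaningful,” and tuning it too loosely or too strictly produces exactly the false positives and false negatives described above. Everything here is illustrative:</p>
<pre data-type="programlisting" data-code-language="python">import hashlib
import json

class RecordReplayBackend:
    def __init__(self, real_backend=None, recordings=None):
        self.real_backend = real_backend          # None in replay-only (hermetic) mode
        self.recordings = recordings if recordings is not None else {}

    def _cache_key(self, request):
        # Drop fields assumed not to matter (timestamps, request IDs).
        # Too aggressive: stale cache hits and missed problems.
        # Too strict: constant cache misses and submit-blocking re-recording.
        normalized = {key: value for key, value in sorted(request.items())
                      if key not in ("timestamp", "request_id")}
        return hashlib.sha256(json.dumps(normalized).encode()).hexdigest()

    def call(self, request):
        key = self._cache_key(request)
        if key in self.recordings:
            return self.recordings[key]            # replay: hermetic path
        if self.real_backend is None:
            raise KeyError("cache miss in replay mode; responses need re-recording")
        response = self.real_backend.call(request) # record: live path
        self.recordings[key] = response
        return response</pre>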
<aside data-type="sidebar" id="the_hermetic_google_assistant">
<h5>The Hermetic Google Assistant</h5>
<p>Google Assistant<a contenteditable="false" data-primary="Google Assistant" data-type="indexterm" id="id-0KSQHvhBT7IzH2">&nbsp;</a> provides a framework for engineers to run end-to-end tests, including a <a contenteditable="false" data-primary="hermetic testing" data-secondary="Google Assistant" data-type="indexterm" id="id-b6SBhmhGTrIeHe">&nbsp;</a>test fixture with functionality for setting up queries, specifying whether to simulate on a phone or a smart home device, and validating responses throughout an exchange with Google Assistant.</p>
<p>One of its greatest success stories was making its test suite fully hermetic on presubmit. When the team previously ran nonhermetic tests on presubmit, the tests would routinely fail. On some days, the team would see more than 50 code changes bypass and ignore the test results. In moving presubmit to hermetic, the team cut the runtime by a factor of 14, with virtually no flakiness. It still sees failures, but those failures tend to be fairly easy to find and roll back.</p>
<p>Now that nonhermetic tests have been pushed to post-submit, it results in failures accumulating there instead. Debugging failing end-to-end tests is still difficult, and some teams don’t have time to even try, so they just disable them. That’s better than having it stop all development for everyone, but it can result in production failures.</p>
<p>One of the team’s current challenges is to continue fine-tuning its caching mechanisms so that presubmit can catch more types of issues that have been discovered only post-submit in the past, without introducing too much brittleness.</p>
<p>Another is how to do presubmit testing for the decentralized Assistant given that components are shifting into their own microservices. Because the Assistant has a large and complex stack, the cost of running a hermetic stack on presubmit, in terms of engineering work, coordination, and resources, would be very high.</p>
<p>Finally, the team is taking advantage of this decentralization in a clever new post-submit failure-isolation strategy. For each of the <em>N</em> microservices within the Assistant, the team will run a post-submit environment containing the microservice built at head, along with production (or close to it) versions of the other <em>N</em> − 1 services, to isolate problems to the newly built server. This setup would normally be <em>O</em>(<em>N</em><sup>2</sup>) cost to facilitate, but the team leverages a cool feature called <em>hotswapping</em> to cut this cost to <em>O</em>(<em>N</em>). Essentially, hotswapping allows a request to instruct a server to “swap” in the address of a backend to call instead of the usual one. So only <em>N</em> servers need to be run, one for each of the microservices cut at head—and they can reuse the same set of prod backends swapped in to each of these <em>N</em> “environments.”</p>
</aside>
<p>As we’ve seen in this section, hermetic testing can both reduce instability in larger-scoped tests and help isolate failures—addressing two of the significant CI challenges we identified in the previous section. However, hermetic backends can also be more expensive because they use more resources and are slower to set up. Many teams use combinations of hermetic and live backends in their test environments.<a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="alerting" data-startref="ix_CIalrt" data-type="indexterm" id="id-0KSQHXFYIQHV">&nbsp;</a></p>
</section>
</section>
<section data-type="sect1" id="ci_at_google">
<h1>CI at Google</h1>
<p>Now let’s look in more detail at how CI is implemented at Google.<a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="implementation at Google" data-type="indexterm" id="ix_CIGoo">&nbsp;</a> First, we’ll look at our global continuous build, TAP, used by the vast majority of teams at Google, and how it enables some of the practices and addresses some of the challenges that we looked at in the previous section. We’ll also look at one application, Google Takeout, and how a CI transformation helped it scale both as a platform and as a service.</p>
<aside data-type="sidebar" id="tap_googleapostrophes_global_continuous">
<h5>TAP: Google’s Global Continuous Build</h5>
<p class="byline">Adam Bender</p>
<p>We run a massive continuous build, called the Test Automation Platform (TAP), of our entire codebase.<a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="implementation at Google" data-tertiary="TAP, global continuous build" data-type="indexterm" id="ix_CIGooTAP">&nbsp;</a><a contenteditable="false" data-primary="Test Automation Platform (TAP)" data-type="indexterm" id="ix_TAP">&nbsp;</a> It is responsible for running the majority of our automated tests. As a direct consequence of our use of a monorepo, TAP is the gateway for almost all changes at Google. Every day it is responsible for handling more than 50,000 unique changes <em>and</em> running more than four billion individual test cases.</p>
<p>TAP is the beating heart of Google’s development infrastructure. Conceptually, the process is very simple. When an engineer attempts to submit code, TAP runs the associated tests and reports success or failure. If the tests pass, the change is allowed into the codebase.</p>
<h3>Presubmit optimization</h3>
<p>To catch issues quickly<a contenteditable="false" data-primary="Test Automation Platform (TAP)" data-secondary="presubmit optimization" data-type="indexterm" id="id-A1SEHrCXtYhQ">&nbsp;</a> and consistently, it is important to ensure that tests are run against every change. <a contenteditable="false" data-primary="presubmits" data-secondary="optimization of" data-type="indexterm" id="id-BvSYhnCYtrhl">&nbsp;</a>Without a CB, running tests is usually left to individual engineer discretion, and that often leads to a few motivated engineers trying to run all tests and keep up with the failures.</p>
<p>As discussed earlier, waiting a long time to run every test on presubmit can be severely disruptive, in some cases taking hours. To minimize the time spent waiting, Google’s CB approach allows potentially breaking changes to land in the repository (remember that they become immediately visible to the rest of the company!). All we ask is for each team to create a fast subset of tests, often a project’s unit tests, that can be run before a change is submitted (usually before it is sent for code review)—the presubmit. Empirically, a change that passes the presubmit has a very high likelihood (95%+) of passing the rest of the tests, and we optimistically allow it to be integrated so that other engineers can then begin to use it.</p>
<p>After a change has been submitted, we use TAP to asynchronously run all potentially affected tests, including larger and slower tests.</p>
<p>When a change causes a test to fail in TAP, it is imperative that the change be fixed quickly to prevent blocking other engineers. We have established a cultural norm that strongly discourages committing any new work on top of known failing tests, though flaky tests make this difficult. Thus, when a change is committed that breaks a team’s build in TAP, that change may prevent the team from making forward progress or building a new release. As a result, dealing with breakages quickly is imperative.</p>
<p>To deal with such breakages, each team has a “Build Cop.” The Build Cop’s responsibility is keeping all the tests passing in their particular project, regardless of who breaks them. When a Build Cop is notified of a failing test in their project, they drop whatever they are doing and fix the build. This is usually by identifying the offending change and determining whether it needs to be rolled back (the preferred solution) or can be fixed going forward (a riskier proposition).</p>
<p>In practice, the trade-off of allowing changes to be committed before verifying all tests has really paid off; the average wait time to submit a change is around 11 minutes, often run in the background. Coupled with the discipline of the Build Cop, we are able to efficiently detect and address breakages detected by longer running tests with a minimal amount of disruption.</p>
<h3>Culprit finding</h3>
<p>One of the problems we face with large test suites at Google is finding the specific change that broke a test. <a contenteditable="false" data-primary="culprit finding and failure isolation" data-secondary="using TAP" data-type="indexterm" id="id-xbSkHVFQtwhX">&nbsp;</a><a contenteditable="false" data-primary="Test Automation Platform (TAP)" data-secondary="culprit finding" data-type="indexterm" id="id-2ESWhBFJt1hm">&nbsp;</a>Conceptually, this should be really easy: grab a change, run the tests, if any tests fail, mark the change as bad. Unfortunately, due to a prevalence of flakes and the occasional issues with the testing infrastructure itself, having confidence that a failure is real isn’t easy. To make matters more complicated, TAP must evaluate so many changes a day (more than one a second) that it can no longer run every test on every change. Instead, it falls back to batching related changes together, which reduces the total number of unique tests to be run. Although this approach can make it faster to run tests, it can obscure which change in the batch caused a test to break.</p>
<p>To speed up failure identification, we use two different approaches. First, TAP automatically splits a failing batch up into individual changes and reruns the tests against each change in isolation. This process can sometimes take a while to converge on a failure, so in addition, we have created culprit finding tools that an individual developer can use to binary search through a batch of changes and identify which one is the likely culprit.</p>
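<p>The binary-search idea is simple enough to sketch; <code>run_tests_at</code> is a hypothetical helper that builds and tests the repository at a given change, and the sketch assumes the failure is deterministic (flakes are what make this hard in practice):</p>
<pre data-type="programlisting" data-code-language="python">def find_culprit(batch, run_tests_at):
    """Return the first change in `batch` at which the tests start failing.

    `batch` is ordered oldest to newest; tests passed before the batch
    and fail at its end.
    """
    lo, hi = 0, len(batch) - 1            # invariant: culprit index lies in [lo, hi]
    while lo != hi:
        mid = (lo + hi) // 2
        if run_tests_at(batch[mid]):      # tests still pass at mid:
            lo = mid + 1                  #   the culprit is in the newer half
        else:                             # tests already fail at mid:
            hi = mid                      #   the culprit is mid or older
    return batch[lo]</pre>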
<h3>Failure management</h3>
<p>After a breaking change has been isolated, it is important to fix it as quickly as possible.<a contenteditable="false" data-primary="Test Automation Platform (TAP)" data-secondary="failure management" data-type="indexterm" id="id-0KSQHphRtohV">&nbsp;</a><a contenteditable="false" data-primary="failures" data-secondary="failure management with TAP" data-type="indexterm" id="id-b6SBhBhVtXhb">&nbsp;</a> The presence of failing tests can quickly begin to erode confidence in the test suite. As mentioned previously, fixing a broken build is the responsibility of the Build Cop. The most effective tool the Build Cop has is the <em>rollback</em>.</p>
<p>Rolling a change back is often the fastest and safest route to fix a build because it quickly restores the system to a known good state.<sup><a data-type="noteref" id="ch01fn248-marker" href="ch23.html#ch01fn248">12</a></sup> In fact, TAP has recently been upgraded to automatically roll back changes when it has high confidence that they are the culprit.</p>
<p>Fast rollbacks work hand in hand with a test suite to ensure continued productivity. Tests give us confidence to change, rollbacks give us confidence to undo. Without tests, rollbacks can’t be done safely. Without rollbacks, broken tests can’t be fixed quickly, thereby reducing confidence in the system.</p>
<h3>Resource constraints</h3>
<p>Although engineers can run tests locally, most test executions<a contenteditable="false" data-primary="Test Automation Platform (TAP)" data-secondary="resource constraints and" data-type="indexterm" id="id-Q0S8HECAtGhW">&nbsp;</a> happen in a distributed build-and-test system called <em>Forge</em>. <a contenteditable="false" data-primary="Forge" data-type="indexterm" id="id-qWSptNCAt3h8">&nbsp;</a>Forge allows engineers to run their builds and tests in our datacenters, which maximizes parallelism. At our scale, the resources required to run all tests executed on-demand by engineers and all tests being run as part of the CB process are enormous. Even given the amount of compute resources we have, systems like Forge and TAP are resource constrained. To work around these constraints, engineers working on TAP have come up with some clever ways to determine which tests should be run at which times to ensure that the minimal amount of resources are spent to validate a given change.</p>
<p>The primary mechanism for determining which tests need to be run is an analysis of the downstream dependency graph for every change. Google’s distributed build tools, Forge and Blaze, maintain<a contenteditable="false" data-primary="Blaze" data-secondary="global dependency graph" data-type="indexterm" id="id-KWSVHBI1tmhl">&nbsp;</a> a near-real-time version of the global dependency graph and make it available to TAP. As a result, TAP can quickly determine which tests are downstream from any change and run the minimal set to be sure the change is safe.</p>
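<p>Conceptually, the selection is a reverse-dependency traversal over the build graph. The sketch below uses a toy in-memory graph; Blaze and Forge of course maintain this information at far larger scale and in near real time:</p>
<pre data-type="programlisting" data-code-language="python">import collections

def affected_tests(dep_graph, test_targets, changed_targets):
    """dep_graph maps each target to the list of targets it directly depends on."""
    # Invert the graph: for each target, which targets depend on it directly?
    reverse_deps = collections.defaultdict(set)
    for target, deps in dep_graph.items():
        for dep in deps:
            reverse_deps[dep].add(target)

    # Walk upward from the changed targets to everything transitively affected.
    affected = set(changed_targets)
    frontier = list(changed_targets)
    while frontier:
        current = frontier.pop()
        for dependent in reverse_deps[current]:
            if dependent not in affected:
                affected.add(dependent)
                frontier.append(dependent)

    # Only tests downstream of a changed target need to run for this change.
    return [test for test in test_targets if test in affected]</pre>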
<p>Another factor influencing the use of TAP is the speed of tests being run. TAP is often able to run changes with fewer tests sooner than those with more tests. This bias encourages engineers to write small, focused changes. The difference in waiting time between a change that triggers 100 tests and one that triggers 1,000 can be tens of minutes on a busy day. Engineers who want to spend less time waiting end up making smaller, targeted changes, which is a win for everyone.<a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="implementation at Google" data-startref="ix_CIGooTAP" data-tertiary="TAP, global continuous build" data-type="indexterm" id="id-qWSOH5uAt3h8">&nbsp;</a><a contenteditable="false" data-primary="Test Automation Platform (TAP)" data-startref="ix_TAP" data-type="indexterm" id="id-EdSnh8u6tVh3">&nbsp;</a></p>
</aside>
<section data-type="sect2" id="ci_case_study_google_takeout">
<h2>CI Case Study: Google Takeout</h2>
<p>Google Takeout started out as a data backup and download product in 2011. Its founders<a contenteditable="false" data-primary="Google Takeout case study" data-type="indexterm" id="ix_GooTkcs">&nbsp;</a> pioneered the <a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="implementation at Google" data-tertiary="case study, Google Takeout" data-type="indexterm" id="ix_CIGoocs">&nbsp;</a>idea of “data liberation”—that users should be able to easily take their data with them, in a usable format, wherever they go. They began by integrating Takeout with a handful of Google products themselves, producing archives of users photos, contact lists, and so on for download at their request. However, Takeout didnt stay small for long, growing as both a platform and a service for a wide variety of Google products. As well see, effective CI is central to keeping any large project healthy, but is especially critical when applications rapidly grow.</p>
<section data-type="sect3" id="scenario_hashone_continuously_broken_de">
<h3>Scenario #1: Continuously broken dev deploys</h3>
<p><strong>Problem:</strong> As Takeout gained a reputation as a powerful Google-wide data fetching, archiving, and download tool, other teams at the company began to turn to it, requesting APIs so that their own applications could provide backup and download functionality, too, including Google Drive (folder downloads are served by Takeout) and Gmail (for ZIP file previews). All in all, Takeout grew from being the backend for just the original Google Takeout product, to providing APIs for at least 10 other Google products, offering a wide range of functionality.</p>
<p>The team decided to deploy each of the new APIs as a customized instance, using the same original Takeout binaries but configuring them to work a little differently. For example, the environment for Drive bulk downloads has the largest fleet, the most quota reserved for fetching files from the Drive API, and some custom authentication logic to allow non-signed-in users to download public folders.</p>
<p>Before long, Takeout faced “flag issues.” Flags added for one of the instances would break the others, and their deployments would break when servers could not start up due to configuration incompatibilities. Beyond feature configuration, there was security and ACL configuration, too. For example, the consumer Drive download service should not have access to keys that encrypt enterprise Gmail exports. Configuration quickly became complicated and led to nearly nightly breakages.</p>
<p>Some efforts were made to detangle and modularize configuration, but the bigger problem this exposed was that when a Takeout engineer wanted to make a code change, it was not practical to manually test that each server started up under each configuration. They didnt find out about configuration failures until the next days deploy. There were unit tests that ran on presubmit and post-submit (by TAP), but those werent sufficient to catch these kinds of issues.</p>
<section data-type="sect4" id="what_the_team_did">
<h4>What the team did</h4>
<p>The team created temporary, sandboxed mini-environments for each of these instances that ran on presubmit and tested that all servers were healthy on startup. Running the temporary environments on presubmit prevented 95% of the server breakages caused by bad configuration and reduced nightly deployment failures by 50%.</p>
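<p>A minimal sketch of such a presubmit startup check follows. The configuration names, the <code>start_sandboxed_server</code> helper, and the <code>/healthz</code> endpoint are assumptions for illustration; the point is simply to boot every instance configuration in a sandbox and confirm that it comes up healthy before the change can be submitted.</p>
<pre data-type="programlisting" data-code-language="python"># Hypothetical presubmit smoke check. `start_sandboxed_server`, the /healthz
# endpoint, and the config names are illustrative assumptions, not Takeout's
# actual tooling.
import urllib.request

INSTANCE_CONFIGS = ["takeout.cfg", "drive_bulk_download.cfg", "gmail_preview.cfg"]


def server_is_healthy(port, timeout_s=30):
    try:
        with urllib.request.urlopen(
                f"http://localhost:{port}/healthz", timeout=timeout_s) as resp:
            return resp.status == 200
    except OSError:
        return False


def presubmit_startup_check(start_sandboxed_server):
    """Boot every instance configuration in a sandbox; return the ones that fail."""
    failures = []
    for config in INSTANCE_CONFIGS:
        server = start_sandboxed_server(config)  # assumed to expose .port and .stop()
        try:
            if not server_is_healthy(server.port):
                failures.append(config)
        finally:
            server.stop()
    return failures  # an empty list means every instance started cleanly</pre>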
<p>Although these new sandboxed presubmit tests dramatically reduced deployment failures, they didnt remove them entirely. In particular, Takeouts end-to-end tests would still frequently break the deploy, and these tests were difficult to run on presubmit (because they use test accounts, which still behave like real accounts in some respects and are subject to the same security and privacy safeguards). Redesigning them to be presubmit friendly would have been too big an undertaking.</p>
<p>If the team couldnt run end-to-end tests in presubmit, when could it run them? It wanted to get end-to-end test results more quickly than the next days dev deploy and decided every two hours was a good starting point. But the team didnt want to do a full dev deploy this often—this would incur overhead and disrupt long-running processes that engineers were testing in dev. Making a new shared test environment for these tests also seemed like too much overhead to provision resources for, plus culprit finding (i.e., finding the deployment that led to a failure) could involve some undesirable manual work.</p>
<p>So, the team reused the sandboxed environments from presubmit, easily extending them to a new post-submit environment. Unlike presubmit, post-submit was compliant with the security safeguards for using the test accounts (for one, because the code had been approved), so the end-to-end tests could be run there. Every two hours, the post-submit CI grabs the latest code and configuration from green head, creates an RC, and runs against it the same end-to-end test suite that already runs in dev.</p>
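<p>The shape of that post-submit loop is easy to sketch. The three callables below (<code>get_latest_green_revision</code>, <code>build_rc</code>, <code>run_end_to_end_suite</code>) are assumed stand-ins for the real build and release services, not actual APIs:</p>
<pre data-type="programlisting" data-code-language="python">import time

TWO_HOURS_S = 2 * 60 * 60


def post_submit_loop(get_latest_green_revision, build_rc, run_end_to_end_suite):
    """Hypothetical two-hourly post-submit cycle: cut an RC from green head and
    run the same end-to-end suite that already runs in dev against it."""
    while True:
        revision = get_latest_green_revision()   # latest code + config at green head
        release_candidate = build_rc(revision)   # assemble an RC from that revision
        run_end_to_end_suite(release_candidate)
        time.sleep(TWO_HOURS_S)</pre>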
</section>
<section data-type="sect4" id="lesson_learne">
<h4>Lesson learned</h4>
<p>Faster feedback loops prevent problems in dev deploys:</p>
<ul>
<li>
<p>Moving tests for different Takeout products from “after nightly deploy” to presubmit prevented 95% of the server breakages caused by bad configuration and reduced nightly deployment failures by 50%.</p>
</li>
<li>
<p>Though end-to-end tests couldn’t be moved all the way to presubmit, they were still moved from “after nightly deploy” to “post-submit within two hours.” This effectively cut the “culprit set” to one-twelfth of its previous size: two hours’ worth of changes instead of a full day’s.</p>
</li>
</ul>
</section>
</section>
<section data-type="sect3" id="scenario_hashtwo_indecipherable_test_lo">
<h3>Scenario #2: Indecipherable test logs</h3>
<p><strong>Problem:</strong> As Takeout incorporated more Google products, it grew into a mature platform that allowed product teams to insert plug-ins, with product-specific data-fetching code, directly into Takeout’s binary. For example, the Google Photos plug-in knows how to fetch photos, album metadata, and the like. Takeout expanded from its original “handful” of products to now integrate with more than <em>90</em>.</p>
<p>Takeout’s end-to-end tests dumped their failures to a log, and this approach didn’t scale to 90 product plug-ins. As more products integrated, more failures were introduced. Even though the team was running the tests earlier and more often with the addition of the post-submit CI, multiple failures would still pile up inside the logs and were easy to miss. Going through these logs became a frustrating time sink, and the tests were almost always failing.</p>
<section data-type="sect4" id="what_the_team_di">
<h4>What the team did</h4>
<p>The team refactored the tests into a dynamic, configuration-based suite (using a <a href="https://oreil.ly/UxkHk">parameterized test runner</a>) that reported results in a friendlier UI, clearly showing individual test results as green or red: no more digging through logs. They also made failures much easier to debug, most notably, by displaying failure information, with links to logs, directly in the error message. For example, if Takeout failed to fetch a file from Gmail, the test would dynamically construct a link that searched for that files ID in the Takeout logs and include it in the test failure message. This automated much of the debugging process for product plug-in engineers and required less of the Takeout teams assistance in sending them logs, as demonstrated in <a data-type="xref" href="ch23.html#the_teamapostrophes_involvement_in_debu">Figure 23-3</a>.</p>
<figure id="the_teamapostrophes_involvement_in_debu"><img alt="The teams involvement in debugging client features" src="images/seag_2303.png">
<figcaption><span class="label">Figure 23-3. </span>The teams involvement in debugging client failures</figcaption>
</figure>
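<p>The pattern is straightforward to reproduce in any parameterized test runner. The sketch below uses pytest; the plug-in list, the <code>fetch_archive_item</code> fixture, and the log-search URL are hypothetical placeholders, but the essential move, constructing a deep link to the relevant logs inside the failure message itself, is the one described above.</p>
<pre data-type="programlisting" data-code-language="python"># Illustrative sketch: the plug-in list, the `fetch_archive_item` fixture, and
# the log-search URL format are hypothetical, not Takeout's real test harness.
import pytest

PLUGINS = ["photos", "gmail", "drive", "youtube"]


def logs_link(item_id):
    # Hypothetical URL that searches the server-side logs for a given item ID.
    return f"https://logs.example.com/search?q={item_id}"


@pytest.mark.parametrize("plugin", PLUGINS)
def test_plugin_fetch(plugin, fetch_archive_item):
    item_id, payload = fetch_archive_item(plugin)
    assert payload, (
        f"{plugin}: failed to fetch item {item_id}; "
        f"see {logs_link(item_id)} for the server-side logs"
    )</pre>
<p>Because each plug-in is a separate parameterized case, the runner reports one green or red result per product rather than a single opaque log.</p>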
</section>
<section data-type="sect4" id="lesson_learned">
<h4>Lesson learned</h4>
<p>Accessible, actionable feedback from CI reduces test failures and improves productivity. These initiatives reduced the Takeout teams involvement in debugging client (product plug-in) test failures by 35%.</p>
</section>
</section>
<section data-type="sect3" id="scenario_hashthree_debugging_quotation">
<h3>Scenario #3: Debugging “all of Google”</h3>
<p><strong>Problem:</strong> An interesting side effect of the Takeout CI that the team did not anticipate was that, because it verified the output of 90-some-odd end-user-facing products, in the form of an archive, they were basically testing “all of Google” and catching issues that had nothing to do with Takeout. This was a good thing—Takeout was able to help contribute to the quality of Google’s products overall. However, this introduced a problem for their CI processes: they needed better failure isolation so that they could determine which problems were in their build (which were the minority) and which lay in loosely coupled microservices behind the product APIs they called.</p>
<section data-type="sect4" id="what_the_team_did-id00141">
<h4>What the team did</h4>
<p>The teams solution was to run the exact same test suite continuously against production as it already did in its post-submit CI. This was cheap to implement and allowed the team to isolate which failures were new in its build and which were in production; for instance, the result of a microservice release somewhere else “in Google.”</p>
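<p>Conceptually, the isolation step is a diff of two result sets from the same suite: one run against the newly built candidate and one against production. A small sketch, assuming a <code>run_suite</code> helper that returns a mapping from test name to pass/fail for a given target:</p>
<pre data-type="programlisting" data-code-language="python"># Sketch only: `run_suite` is an assumed helper returning {test_name: passed}
# for whichever environment it is pointed at.

def isolate_new_failures(run_suite, candidate_target, prod_target):
    candidate_results = run_suite(candidate_target)
    prod_results = run_suite(prod_target)
    new_failures = sorted(
        name for name, passed in candidate_results.items()
        if not passed and prod_results.get(name, True)
    )
    shared_failures = sorted(
        name for name, passed in candidate_results.items()
        if not passed and not prod_results.get(name, True)
    )
    # New failures point at the candidate build; shared failures point at the
    # live backends behind the product APIs.
    return new_failures, shared_failures</pre>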
</section>
<section data-type="sect4" id="lesson_learned-id00056">
<h4>Lesson learned</h4>
<p>Running the same test suite against prod and a post-submit CI (with newly built binaries, but the same live backends) is a cheap way to isolate failures.</p>
</section>
<section data-type="sect4" id="remaining_challenge">
<h4>Remaining challenge</h4>
<p>Going forward, the burden of testing “all of Google” (obviously, this is an exaggeration, as most product problems are caught by their respective teams) grows as Takeout integrates with more products and as those products become more complex. Manual comparisons between this CI and prod are an expensive use of the Build Cops time.</p>
</section>
<section data-type="sect4" id="future_improvement-id00045">
<h4>Future improvement</h4>
<p>This presents an interesting opportunity to try hermetic testing with record/replay in Takeout’s post-submit CI. In theory, this would prevent failures in backend product APIs from surfacing in Takeout’s CI, which would make the suite more stable and more effective at catching failures in the last two hours of Takeout changes—which is its intended purpose.</p>
</section>
</section>
<section data-type="sect3" id="scenario_hashfour_keeping_it_green">
<h3>Scenario #4: Keeping it green</h3>
<p><strong>Problem:</strong> As the platform supported more product plug-ins, each of which included end-to-end tests, those tests failed often enough that the end-to-end test suites were nearly always broken. The failures could not all be immediately fixed. Many were due to bugs in product plug-in binaries, which the Takeout team had no control over. And some failures mattered more than others—low-priority bugs and bugs in the test code did not need to block a release, whereas higher-priority bugs did. The team could easily disable tests by commenting them out, but that would make the failures too easy to forget about.</p>
<p>One common source of failures: tests would break when product plug-ins were rolling out a feature. For example, a playlist-fetching feature for the YouTube plug-in might be enabled for testing in dev for a few months before being enabled in prod. The Takeout tests knew about only one result to check, so a test often had to be disabled in particular environments and manually curated as the feature rolled out.</p>
<section data-type="sect4" id="what_the_team_did-id00142">
<h4>What the team did</h4>
<p>The team came up with a strategic way to disable failing tests by tagging them with an associated bug and filing that off to the responsible team (usually a product plug-in team). When a failing test was tagged with a bug, the teams testing framework would suppress its failure. This allowed the test suite to stay green and still provide confidence that everything else, besides the known issues, was passing, as illustrated in <a data-type="xref" href="ch23.html#achieving_greenness_through_left_parent">Figure 23-4</a>.</p>
<figure id="achieving_greenness_through_left_parent"><img alt="Achieving greenness through (responsible) test disablement" src="images/seag_2304.png">
<figcaption><span class="label">Figure 23-4. </span>Achieving greenness through (responsible) test disablement</figcaption>
</figure>
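<p>One way to express “tag a failing test with a bug” is a small decorator that turns a failure into an expected failure while the bug stays open. The sketch below uses pytest and an assumed <code>bug_is_open</code> client for the bug tracker; it illustrates the idea rather than the Takeout team’s actual framework.</p>
<pre data-type="programlisting" data-code-language="python"># Illustrative sketch: `bug_is_open` is an assumed bug-tracker client; the
# decorator stands in for the team's real tagging mechanism.
import functools

import pytest


def known_failure(bug_id, bug_is_open):
    """Suppress a test's failure while the tagged bug is still open."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            try:
                test_fn(*args, **kwargs)
            except AssertionError:
                if bug_is_open(bug_id):
                    pytest.xfail(f"known failure, tracked in {bug_id}")
                raise  # the bug is closed, so the failure counts again
        return wrapper
    return decorator</pre>
<p>A tagged test might then look like <code>@known_failure("bug-12345", bug_is_open=tracker.is_open)</code> (both identifiers hypothetical), keeping the suite green without letting the failure disappear from view.</p>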
<p>For the rollout problem, the team added the capability for plug-in engineers to specify the name of a feature flag, or the ID of a code change, that enabled a particular feature, along with the output to expect both with and without the feature. The tests were equipped to query the test environment to determine whether the given feature was enabled there and verified the expected output accordingly.</p>
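<p>In sketch form, a rollout-aware check pairs each flag state with an expectation and lets the environment decide which one applies. The flag name, the expected counts, and the <code>environment_flag_enabled</code> helper below are all hypothetical placeholders:</p>
<pre data-type="programlisting" data-code-language="python"># Hypothetical rollout-aware check: the flag name, expected counts, and the
# `environment_flag_enabled` helper are placeholders, not real Takeout code.

def check_playlist_fetch(fetch_playlists, environment_flag_enabled):
    expectations = {
        True: 25,   # playlist-fetching feature enabled in this environment
        False: 0,   # feature not yet rolled out here
    }
    flag_enabled = environment_flag_enabled("youtube_playlist_fetch")
    expected = expectations[flag_enabled]
    actual = len(fetch_playlists())
    assert actual == expected, (
        f"expected {expected} playlists with flag enabled={flag_enabled}, "
        f"got {actual}"
    )</pre>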
<p>When bug tags from disabled tests began to accumulate and were not updated, the team automated their cleanup. The tests would now check whether a bug was closed by querying our bug system’s API. If a tagged-failing test actually passed and was passing for longer than a configured time limit, the test would prompt to clean up the tag (and mark the bug fixed, if it wasn’t already). There was one exception to this strategy: flaky tests. For these, the team would allow a test to be tagged as flaky, and the system wouldn’t prompt a tagged “flaky” failure for cleanup if it passed.</p>
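<p>The cleanup pass itself can be sketched in a few lines. The record format and the helpers (<code>bug_is_closed</code>, <code>days_passing</code>, <code>prompt_tag_cleanup</code>, <code>mark_bug_fixed</code>) are assumptions standing in for the bug system’s API and the test framework’s metadata:</p>
<pre data-type="programlisting" data-code-language="python"># Sketch of the stale-tag cleanup pass; every helper here is an assumed
# stand-in for the bug system's API and the test framework's metadata.
PASSING_LIMIT_DAYS = 14  # hypothetical "passing long enough" threshold


def cleanup_stale_tags(tagged_tests, bug_is_closed, days_passing,
                       prompt_tag_cleanup, mark_bug_fixed):
    for test in tagged_tests:
        if test.get("flaky"):
            continue  # flaky tests are exempt: a recent pass proves little
        if days_passing(test["name"]) &gt;= PASSING_LIMIT_DAYS:
            prompt_tag_cleanup(test["name"], test["bug_id"])
            if not bug_is_closed(test["bug_id"]):
                mark_bug_fixed(test["bug_id"])</pre>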
<p>These changes made a mostly self-maintaining test suite, as illustrated in <a data-type="xref" href="ch23.html#mean_time_to_close_bugcomma_after_fix_s">Figure 23-5</a>.</p>
<figure id="mean_time_to_close_bugcomma_after_fix_s"><img alt="Mean time to close bug, after fix submitted" src="images/seag_2305.png">
<figcaption><span class="label">Figure 23-5. </span>Mean time to close bug, after fix submitted</figcaption>
</figure>
</section>
<section data-type="sect4" id="lessons_learned">
<h4>Lessons learned</h4>
<p>Disabling failing tests that cant be immediately fixed is a practical approach to keeping your suite green, which gives confidence that youre aware of all test failures. Also, automating the test suites maintenance, including rollout management and updating tracking bugs for fixed tests, keeps the suite clean and prevents technical debt. In DevOps parlance, we could call the metric in <a data-type="xref" href="ch23.html#mean_time_to_close_bugcomma_after_fix_s">Figure 23-5</a> MTTCU: mean time to clean up.</p>
</section>
<section data-type="sect4" id="future_improvement">
<h4>Future improvement</h4>
<p>Automating the filing and tagging of bugs would be a helpful next step. This is still a manual and burdensome process. As mentioned earlier, some of our larger teams already do this.</p>
</section>
<section data-type="sect4" id="further_challenges">
<h4>Further challenges</h4>
<p>The scenarios we’ve described are far from the only CI challenges faced by Takeout, and there are still more problems to solve. For example, we mentioned the difficulty of isolating failures from upstream services in <a data-type="xref" href="ch23.html#ci_challenges">CI Challenges</a>. This is a problem that Takeout still faces with rare breakages originating in upstream services, such as when a security update in the streaming infrastructure used by Takeout’s “Drive folder downloads” API broke archive decryption when it deployed to production. The upstream services are staged and tested themselves, but there is no simple way to automatically check with CI whether they are compatible with Takeout after they’re launched into production. An initial solution involved creating an “upstream staging” CI environment to test production Takeout binaries against the staged versions of their upstream dependencies. However, this proved difficult to maintain, with additional compatibility issues between staging and production <span class="keep-together">versions.</span><a contenteditable="false" data-primary="Google Takeout case study" data-startref="ix_GooTkcs" data-type="indexterm" id="id-0KSktvhYIXCbc5hO">&nbsp;</a><a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="implementation at Google" data-startref="ix_CIGoocs" data-tertiary="case study, Google Takeout" data-type="indexterm" id="id-b6S3cmh1I2CQcBh2">&nbsp;</a></p>
</section>
</section>
</section>
<section data-type="sect2" id="but_i_canapostrophet_afford_ci">
<h2>But I Cant Afford CI</h2>
<p>You might be thinking thats all well and good, but you have neither the time nor money to build any of this. We certainly acknowledge that Google might have more resources to implement CI than the typical startup does. Yet many of our products have grown so quickly that they didnt have time to develop a CI system either (at least not an adequate one).</p>
<p>In your own products and organizations, try to think of the cost you are already paying for problems discovered and dealt with in production. These negatively affect the end user or client, of course, but they also affect the team. Frequent production fire-fighting is stressful and demoralizing. Although building out CI systems is expensive, it’s not necessarily a new cost as much as a cost shifted left to an earlier—and preferable—stage, reducing the incidence, and thus the cost, of problems occurring too far to the right. CI leads to a more stable product and a happier developer culture in which engineers feel more confident that “the system” will catch problems, and they can focus more on features and less on fixing.<a contenteditable="false" data-primary="continuous integration (CI)" data-secondary="implementation at Google" data-startref="ix_CIGoo" data-type="indexterm" id="id-nBSYHYtbfEhX">&nbsp;</a>&nbsp;</p>
</section>
</section>
<section data-type="sect1" id="conclusion-id00027">
<h1>Conclusion</h1>
<p>Even though we’ve described our CI processes and some of how we’ve automated them, none of this is to say that we have developed perfect CI systems. After all, a CI system is itself just software; it is never complete and should be adjusted to meet the evolving demands of the application and the engineers it is meant to serve. We’ve tried to illustrate this with the evolution of Takeout’s CI and the areas of future improvement we point out.</p>
</section>
<section data-type="sect1" id="tlsemicolondrs-id00129">
<h1>TL;DRs</h1>
<ul>
<li>
<p>A CI system decides what tests to use, and when.</p>
</li>
<li>
<p>CI systems become progressively more necessary as your codebase ages and grows in scale.</p>
</li>
<li>
<p>CI should optimize for quicker, more reliable tests on presubmit and slower, less deterministic tests on post-submit.</p>
</li>
<li>
<p>Accessible, actionable feedback allows a CI system to become <a contenteditable="false" data-primary="continuous integration (CI)" data-startref="ix_CI" data-type="indexterm" id="id-nBSYHJHncNh1cQ">&nbsp;</a>more efficient.</p>
</li>
</ul>
</section>
<div data-type="footnotes"><p data-type="footnote" id="ch01fn236"><sup><a href="ch23.html#ch01fn236-marker">1</a></sup><a href="https://www.martinfowler.com/articles/continuousIntegration.html"><em class="hyperlink">https://www.martinfowler.com/articles/continuousIntegration.html</em></a></p><p data-type="footnote" id="ch01fn237"><sup><a href="ch23.html#ch01fn237-marker">2</a></sup>Forsgren, Nicole, et al. (2018). Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations. IT Revolution.</p><p data-type="footnote" id="ch01fn238"><sup><a href="ch23.html#ch01fn238-marker">3</a></sup>This is also sometimes called “shifting left on testing.”</p><p data-type="footnote" id="ch01fn239"><sup><a href="ch23.html#ch01fn239-marker">4</a></sup><em>Head</em> is the latest versioned code in our monorepo. In other workflows, this is also referred to as <em>master</em>, <em>mainline</em>, or <em>trunk</em>. Correspondingly, integrating at head is also known as <em>trunk-based development</em>.</p><p data-type="footnote" id="ch01fn240"><sup><a href="ch23.html#ch01fn240-marker">5</a></sup>At Google, release automation is managed by a separate system from TAP. We wont focus on <em>how</em> release automation assembles RCs, but if youre interested, we do refer you to <a href="https://landing.google.com/sre/books"><em>Site Reliability Engineering</em></a> (O'Reilly) in which our release automation technology (a system called Rapid) is discussed in detail.</p><p data-type="footnote" id="ch01fn241"><sup><a href="ch23.html#ch01fn241-marker">6</a></sup>CD with experiments and feature flags is discussed further in <a data-type="xref" href="ch24.html#continuous_delivery-id00035">Continuous Delivery</a>.</p><p data-type="footnote" id="ch01fn242"><sup><a href="ch23.html#ch01fn242-marker">7</a></sup>We call these “mid-air collisions” because the probability of it occurring is extremely low; however, when this does happen, the results can be quite surprising.</p><p data-type="footnote" id="ch01fn243"><sup><a href="ch23.html#ch01fn243-marker">8</a></sup>Each team at Google configures a subset of its projects tests to run on presubmit (versus post-submit). In reality, our continuous build actually optimizes some presubmit tests to be saved for post-submit, behind the scenes. We'll further discuss this later on in this chapter.</p><p data-type="footnote" id="ch01fn244"><sup><a href="ch23.html#ch01fn244-marker">9</a></sup>Aiming for 100% uptime is the wrong target. Pick something like 99.9% or 99.999% as a business or product trade-off, define and monitor your actual uptime, and use that “budget” as an input to how aggressively youre willing to push risky releases.</p><p data-type="footnote" id="ch01fn245"><sup><a href="ch23.html#ch01fn245-marker">10</a></sup>We believe CI is actually critical to the software engineering ecosystem: a must-have, not a luxury. But that is not universally understood yet.</p><p data-type="footnote" id="ch01fn246"><sup><a href="ch23.html#ch01fn246-marker">11</a></sup>In practice, its often difficult to make a <em>completely</em> sandboxed test environment, but the desired stability can be achieved by minimizing outside dependencies.</p><p data-type="footnote" id="ch01fn248"><sup><a href="ch23.html#ch01fn248-marker">12</a></sup>Any change to Googles codebase can be rolled back with two clicks!</p></div></section>
</body>
</html>