Mark Mansour, a senior manager of software development at Amazon Web Services, writes about continuous delivery at AWS.
Over 10 years ago, we undertook a project at Amazon to understand how quickly our teams were turning ideas into high-quality production systems. This led us to measure our software throughput so that we could improve the speed of our execution. We discovered that it was taking, on average, 16 days from code check-in to production. At Amazon, teams started with an idea, and then typically took a day and a half to write the code to bring that idea to life. We spent less than an hour building and deploying the new code. The remaining time, almost 14 days, was spent waiting for team members to start builds, perform deployments, and run tests. At the end of the project, we recommended automating our post-check-in processes to improve our speed of execution. The goal was to eliminate delays while maintaining or even improving quality.
Amazon had already built software development tools to make our software engineers more productive. We had created our own centralized, hosted build system, Brazil, which executes a series of commands on a server with the goal of generating a deployable artifact. At that point, Brazil didn't react to source code changes; a person had to initiate each build. We also had our own deployment system, Apollo, which required a build artifact to be uploaded to it as part of initiating a deployment. Inspired by the interest in continuous delivery growing across the industry, we built our own system, Pipelines, to automate the software delivery process between Brazil and Apollo.
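The flow described above can be sketched as a small model: a commit event triggers a build that produces an artifact, and the artifact is handed straight to deployment with no human initiating either step. This is a minimal illustrative sketch only; the class and method names here are hypothetical stand-ins, not the actual interfaces of Brazil, Apollo, or Pipelines.

```python
from dataclasses import dataclass, field

@dataclass
class ReleasePipeline:
    """Toy model of the automated flow: commit -> build -> deploy.

    Names are illustrative; they do not reflect real Brazil/Apollo APIs.
    """
    deployed: list = field(default_factory=list)

    def build(self, commit_id: str) -> str:
        # Stand-in for the build system producing a deployable artifact
        # from a source revision.
        return f"artifact-{commit_id}"

    def deploy(self, artifact: str) -> None:
        # Stand-in for the deployment system promoting the artifact
        # to production.
        self.deployed.append(artifact)

    def on_commit(self, commit_id: str) -> str:
        # The step Pipelines automated: the commit itself kicks off
        # the build and the deployment, removing the waiting time
        # between them.
        artifact = self.build(commit_id)
        self.deploy(artifact)
        return artifact

pipeline = ReleasePipeline()
pipeline.on_commit("abc123")
```

The point of the sketch is the trigger: once `on_commit` fires automatically on check-in, the days previously spent waiting for someone to start a build or a deployment drop out of the cycle time.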