One of the key advantages of cloud-native applications is the ability to quickly push updates into production for specific microservices—giving developers the ability to respond rapidly to changing business requirements without having to upgrade the entire application. Successfully doing this requires far more than just leveraging a continuous delivery platform. Everything from the culture of the development group to the type of testing required to the underlying platform it will run on is critical.
Based on applying DevOps principles across several projects to date, I have found that development teams can benefit from following these seven key strategies before launching continuous delivery and deployment initiatives. By doing so, you’ll increase your opportunity for success.
Culture
Just having the right tools and technology isn’t enough in today’s enterprises. Successfully adopting an agile, continuous delivery and deployment process requires a culture shift within most development shops. Developers are no longer responsible only for their code; they must take full ownership of development, testing and automation tooling and, most importantly, build operational capabilities into the pipeline so the move to production is easier. Getting developers and the operations team on board may take time and training, but the effort is critical: your projects won’t succeed without it.
Automation Platform
Carefully consider the tool you’ll use for your continuous delivery pipeline because the current tools on the market have different optimal use cases. For example, if you need to do A/B testing you’ll want a tool that has capabilities for phasing the rollout based on the test results. If your application requires multiple updates in a short time span, you’ll need a tool that has advanced release management capabilities including good pipeline visualization and support for the management of cloud environments.
App Architecture
Design your applications so they are cloud-native. This approach is typically associated with microservices and containers and enables you to update each service independently and frequently. The typical stages of a deployment pipeline are build, unit testing, integration testing (including functional testing), API testing, performance testing and staging and, finally, production. However, with microservice architectures and advanced deployment techniques, more and more customers are bypassing certain stages and testing in production, albeit in very contained production environments.
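The stage progression described above can be sketched as a simple ordered sequence of gates, where each stage must pass before the next one runs. This is a minimal illustration, not a model of any particular CI/CD tool; the stage names mirror the list above and the check functions are hypothetical placeholders.

```python
# Minimal sketch of an ordered deployment pipeline: each stage acts as a
# gate that must pass before the next runs. Stage names follow the
# progression in the text; the checks are hypothetical placeholders.

STAGES = ["build", "unit_test", "integration_test", "api_test",
          "performance", "staging", "production"]

def run_pipeline(checks):
    """Run stage checks in order; stop at the first failure.

    `checks` maps a stage name to a zero-argument callable returning
    True/False. Returns the list of stages that completed successfully.
    """
    completed = []
    for stage in STAGES:
        passed = checks.get(stage, lambda: True)()  # missing check = pass
        if not passed:
            break
        completed.append(stage)
    return completed

# Example: integration tests fail, so the pipeline halts before API tests.
result = run_pipeline({"integration_test": lambda: False})
# result == ["build", "unit_test"]
```

A real tool adds parallelism, retries and manual approval gates, but the essential contract is the same: a stage only runs when everything before it has passed.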
Each stage also can include the creation of dynamic environments in the cloud, enabling the stage itself to be provisioned, executed and decommissioned as a part of the pipeline. If you choose to do this, look for a tool that has a number of built-in cloud plugins, so creating and destroying environments in AWS, Google Cloud and other clouds can be done easily.
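The provision-execute-decommission lifecycle of a dynamic stage environment maps naturally onto a context manager, which guarantees teardown even when the stage fails. The `provision` and `teardown` callables below are hypothetical stand-ins for whatever cloud plugin your tool provides (for example, creating and deleting a short-lived stack in AWS).

```python
# Sketch of an ephemeral per-stage environment: provision before the stage
# runs, always decommission afterward. The provision/teardown callables
# are hypothetical stand-ins for a cloud plugin.
from contextlib import contextmanager

@contextmanager
def ephemeral_environment(name, provision, teardown):
    env = provision(name)          # e.g. create a short-lived test stack
    try:
        yield env
    finally:
        teardown(env)              # guaranteed cleanup, even on failure

# Usage: the environment exists only for the duration of the stage.
events = []
with ephemeral_environment(
        "integration-env",
        provision=lambda n: events.append(("up", n)) or n,
        teardown=lambda e: events.append(("down", e))) as env:
    events.append(("test", env))
# events records up -> test -> down, in that order
```

The `finally` clause is the important part: without guaranteed decommissioning, failed test runs leave orphaned cloud resources behind.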
The application and infrastructure stack for each microservice also should be specified. This could include the application Jar file, a Docker file, or server image such as an AMI; any related services such as AWS Lambda functions; and all configurations and policies. These specifications all should be managed in an infrastructure-as-code manifest, which ensures versioning, consistency, change auditing and testing. It also ensures that the build resources, such as baked images, JARs and AWS Lambda functions, are consistently passed to each stage of the pipeline, from testing to production, eliminating the risk of imposing inconsistent environments and configurations. An AWS CloudFormation template is an example of an infrastructure-as-code manifest.
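One way to see why a single manifest enables versioning and consistency checking is to derive a version identifier from the manifest's content: any change to an artifact or configuration produces a new version, and every pipeline stage can verify it received exactly the same stack. The manifest fields below are illustrative, not a real CloudFormation schema.

```python
# Sketch of a versioned infrastructure-as-code manifest: hashing the
# canonical manifest content yields a deterministic version id, so every
# pipeline stage can verify it received identical artifacts and config.
# Field names are illustrative, not a real CloudFormation schema.
import hashlib
import json

def manifest_version(manifest):
    canonical = json.dumps(manifest, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

manifest = {
    "service": "payments",
    "artifacts": {"jar": "payments-1.4.2.jar", "image": "ami-0abc1234"},
    "config": {"region": "us-east-1", "memory_mb": 512},
}

v1 = manifest_version(manifest)
manifest["config"]["memory_mb"] = 1024   # any change yields a new version
v2 = manifest_version(manifest)
# v1 != v2: configuration drift is detectable at every stage
```

Real IaC tools track versions through source control rather than ad-hoc hashes, but the principle is identical: the manifest, not the environment, is the source of truth.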
Security
Security is a necessity in a world of increasing threats; however, the common approach of waiting to test security right before the push to production is the wrong one. Security should be built into the pipeline itself and automated as part of the DevOps CI/CD process. It’s essential to validate the building blocks without slowing down the process. For example, leading companies are automating code scanning, template checking and other security tests within the phases of the pipeline itself.
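Building security into the pipeline means each stage runs its own automated checks and collects findings, rather than deferring everything to one big pre-production scan. The gate names and findings below are illustrative; real pipelines would invoke dedicated scanners here.

```python
# Sketch of security gates embedded in the pipeline: each gate runs
# automatically per stage and reports findings, instead of one large
# scan right before production. Check names and findings are illustrative.

def run_security_gates(artifact, gates):
    """Run each named gate against the artifact and collect all findings."""
    findings = []
    for name, gate in gates.items():
        findings.extend((name, issue) for issue in gate(artifact))
    return findings

gates = {
    "code_scan": lambda a: (["hardcoded credential"]
                            if "password=" in a["source"] else []),
    "template_check": lambda a: (["port 22 open to 0.0.0.0/0"]
                                 if a["template"].get("ssh_open") else []),
}

artifact = {"source": "db_url = env('DB_URL')",
            "template": {"ssh_open": True}}
issues = run_security_gates(artifact, gates)
# one finding: the template check flags the open SSH port
```

Because each gate returns structured findings, the pipeline can fail fast on critical issues while merely reporting low-severity ones.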
Insight
Visualizing and tracking pipelines, environment variables, test results, versions, issues and other artifacts allows developers and the operations team to share a common view throughout the project life cycle. This improves collaboration and traceability.
Delivery Strategies
Automating tests is critical to pipeline design and delivery. Without comprehensive, high-quality, fully automated tests in the deployment pipeline, production failures will occur. Code coverage alone is no longer a good measure of code quality. In the cloud-native world, test cases and test suites need to be mapped to the functionality and business outcomes desired, to avoid situations where the test team gives the green light and then everything goes red in production. Where time limitations prevent a comprehensive approach, first automate the critical capabilities and flows, then assess the cost-benefit of automating additional tests.
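Mapping test suites to business capabilities can be as simple as a table relating each capability to its priority and the suites that cover it; the automation backlog then falls out by sorting uncovered capabilities by criticality. The capability names and priorities below are hypothetical.

```python
# Sketch of mapping test suites to business capabilities so automation
# effort is prioritized by criticality rather than raw code coverage.
# Capability names, priorities and suite names are illustrative.

CAPABILITIES = {
    "checkout":        {"priority": 1, "suites": ["test_payment", "test_cart"]},
    "search":          {"priority": 2, "suites": ["test_search"]},
    "recommendations": {"priority": 3, "suites": []},   # not yet automated
}

def automation_backlog(capabilities):
    """Return capabilities lacking automated suites, most critical first."""
    gaps = [(c["priority"], name)
            for name, c in capabilities.items() if not c["suites"]]
    return [name for _, name in sorted(gaps)]

backlog = automation_backlog(CAPABILITIES)
# only "recommendations" lacks coverage, so it tops the backlog
```

The same mapping can drive release gating: a build is only promotable when every priority-1 capability has passing suites, regardless of the overall coverage percentage.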
Be sure that the data and environment used for testing and automation mirror the production data and environment as closely as possible. Most pipelines have a “pre-production” stage before pushing software into production. You may also consider building multiple stages across multiple AWS regions before pushing the software to production.
If your goal is to update software in production very frequently and without any downtime, make sure your pipeline tool can easily manage multi-region deployments, such as blue/green and canary deployments for server groups and clusters across AWS and other public infrastructure-as-a-service (IaaS) clouds.
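A canary deployment shifts traffic to the new version in steps and rolls back automatically if the canary misbehaves at any step. The traffic weights, threshold and callback below are a simplified illustration of the pattern, not the interface of any specific deployment tool.

```python
# Sketch of a canary rollout: shift traffic to the new version in steps,
# rolling back if the canary's error rate exceeds a threshold at any
# step. Weights and threshold are illustrative.

def canary_rollout(error_rate_at, steps=(5, 25, 50, 100), threshold=0.01):
    """`error_rate_at(percent)` reports the canary's observed error rate
    when it receives `percent` of traffic. Returns the new version's
    final traffic share: 100 on full promotion, 0 on rollback."""
    for percent in steps:
        if error_rate_at(percent) > threshold:
            return 0    # roll back: route all traffic to the old version
    return 100          # promote: new version takes all traffic

# A healthy canary is promoted; a failing one is rolled back at 25%.
healthy = canary_rollout(lambda p: 0.001)
failing = canary_rollout(lambda p: 0.05 if p >= 25 else 0.001)
# healthy == 100, failing == 0
```

Blue/green deployment is the degenerate case of the same idea: a single step that flips 100% of traffic at once, with rollback meaning flipping it back.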
Monitoring and Feedback
Finally, it is essential to get continuous feedback on the user experience, response times, performance of key business services, and the technical metrics from all stages of the life cycle—development, QA and production. Only through this continuous feedback can you ensure that no degradation occurs as you push out releases with increasing frequency. This too needs to be built into each phase of the pipeline to understand performance and improve things earlier in the process, where it costs much less.
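The degradation check described above can be automated as a simple gate that compares each release's key metrics against the previous release's baseline and flags anything that worsened beyond a tolerance. The metric names and tolerance below are illustrative assumptions.

```python
# Sketch of an automated feedback gate: compare a candidate release's
# metrics against the previous release's baseline and flag degradations.
# Metric names and tolerance are illustrative; assumes lower is better
# for every metric (latency, error rate).

def detect_regressions(baseline, candidate, tolerance=0.10):
    """Flag metrics that worsened by more than `tolerance` (10% default)."""
    return [m for m, base in baseline.items()
            if candidate.get(m, base) > base * (1 + tolerance)]

baseline  = {"p99_latency_ms": 220, "error_rate": 0.002}
candidate = {"p99_latency_ms": 310, "error_rate": 0.002}
regressions = detect_regressions(baseline, candidate)
# latency regressed (310 > 220 * 1.1 = 242); error rate held steady
```

Running this check in development and QA as well as production is what makes degradations cheap to fix: the earlier the stage that fails the gate, the less it costs to correct.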
There’s no doubt that a successful continuous delivery and deployment process can deliver significant benefits to users, who constantly require new and enhanced capabilities and can’t wait the weeks or months that a traditional development cycle requires. By carefully considering these seven strategies, your projects will benefit from reduced application errors, less downtime and much happier users.
About the Author / David Cramer
David Cramer is vice president of Product Management for the Cloud/DCA business unit at BMC. Prior to BMC, David was head of product management for CA Technologies. During his tenure at CA, David was responsible for application delivery, cloud management, virtualization and infrastructure automation solutions. Before joining CA, David held executive positions at AlterPoint, Motive, NetSolve and Nortel Networks. David’s focus is on continuous delivery, cloud management and web-scale IT. David received his MBA from Southern Methodist University and a BS in Finance from Georgia State University. Connect with him on LinkedIn and Twitter.