Continuous deployment and continuous monitoring have grown up in DevOps, and microservices have grown up as a delivery platform. While all are relatively stable concepts at this point, a fair number of practitioners are still struggling to merge them into a holistic environment that allows for deep understanding and easy deployments.
So I thought we could take a look at what’s out there, how it’s going, and even mention a resource or two for you.
Continuous Deployment
While continuous delivery (CD) and continuous integration (CI) are well in hand, continuous deployment has lagged a bit behind. There are reasons, both good and bad, why this is true, but true it is. Continuous deployment is the act of actually deploying the results of the CI/CD cycle. The whole point of continuous deployment is to speed new features to market by releasing them as soon as they pass CI/CD muster, soon after, or, in some industries, on demand. Continuous delivery is a little different in that the rest of the organization (sales, support, marketing, accounting, etc.) has to be ready for the release; even in that slowest case, the point is that IT is not the hold-up to releasing new features under the continuous deployment model.
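The difference between the two models comes down to what triggers the release step. Here is a minimal sketch of that distinction; `run_tests` and `deploy_to_production` are invented stand-ins, not any real pipeline API:

```python
# Hypothetical sketch of the release step in a pipeline script.
# run_tests() and deploy_to_production() are stand-ins for illustration.

def run_tests() -> bool:
    """Stand-in for the CI/CD stages: build, unit tests, integration tests."""
    return True  # assume the build is green for this sketch

def deploy_to_production(artifact: str) -> str:
    """Stand-in for the actual deployment step (push image, roll out, etc.)."""
    return f"deployed {artifact}"

def continuous_deployment(artifact: str) -> str:
    # Continuous deployment: every green build is released automatically.
    if run_tests():
        return deploy_to_production(artifact)
    return "build failed; nothing released"

def continuous_delivery(artifact: str, business_approved: bool) -> str:
    # Continuous delivery: every green build is *releasable*, but the
    # release itself waits on the rest of the organization.
    if run_tests() and business_approved:
        return deploy_to_production(artifact)
    return "releasable, awaiting approval"
```

Either way, the build is always in a deployable state; the models differ only in who pulls the trigger.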
Microservices
Simply put, microservices are the decomposition of an application into smaller parts that can be run independently. There are a lot of different ideas about what “smaller parts” means. Some shops break their application down to the functional level, similar to serverless functions, so an API named calculateSalesTax would do that and only that, given location and dollar information. Others prefer a broader definition that includes entire groupings of APIs; in one scenario, a company split all database work into a single, scalable microservice. Either way, you will have more instances (normally containers, but sometimes VMs or even physical servers) running to build the application than historically would be the case.
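To make the function-level end of that spectrum concrete, here is a sketch of what the core logic of a calculateSalesTax service might look like. The rate table and signature are invented for illustration; a real service would look rates up from a tax data provider:

```python
# Hypothetical core of a single-purpose calculateSalesTax microservice.
# The rate table below is made up for illustration only.

SALES_TAX_RATES = {
    "CA": 0.0725,
    "NY": 0.04,
    "OR": 0.0,
}

def calculate_sales_tax(state: str, amount_cents: int) -> int:
    """Given location and dollar information (in cents), return the tax owed.

    This is the service's entire job -- one narrowly scoped API.
    """
    rate = SALES_TAX_RATES.get(state)
    if rate is None:
        raise ValueError(f"no rate on file for {state!r}")
    return round(amount_cents * rate)
```

Everything else (HTTP handling, discovery, scaling) comes from the platform around it, which is what keeps the service itself this small.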
Why Containers
As mentioned above, a microservices architecture is normally built on containers. The portability of a container definition is one of the major reasons companies lean in this direction. The ability to say, “This container can run on our hardware, on our private cloud, on a public cloud, in a public cloud VPC or on a VM,” is a way to ensure that you can target the system to the environment most suited to the organization’s needs.
Containers are also far lighter than VMs, meaning faster spin-up times and an easier time expanding the microservices architecture to meet changing user needs.
Container Management
Speaking of containers, container management systems are used to make certain that containers are alive and responding. In advanced cases, they can also be made to check that a container is doing what it is supposed to do. While I am a fan of Apache Mesos, the world seems to have gone for Kubernetes, so we’ll focus on it, without further reference to the others out there.
On top of making certain the correct number of instances of a given container are running, and spinning up new ones if one disappears, Kubernetes manages secrets for containers, load-balances between instances in a group (these groups are generally called a service in container-management speak), scales instances up and down as directed, and provides rolling updates so a service is not down while being updated.
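As an illustration, a bare-bones Kubernetes Deployment and Service covering several of those duties might look like the following. The names, image and port are placeholders, not from any real system:

```yaml
# Illustrative only -- names, image and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sales-tax
spec:
  replicas: 3                # Kubernetes keeps three instances running
  selector:
    matchLabels:
      app: sales-tax
  strategy:
    type: RollingUpdate      # replace pods gradually so the service stays up
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: sales-tax
    spec:
      containers:
      - name: sales-tax
        image: example.com/sales-tax:1.0
        ports:
        - containerPort: 8080
        livenessProbe:       # restart the container if it stops responding
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service                # load-balances across the pods above
metadata:
  name: sales-tax
spec:
  selector:
    app: sales-tax
  ports:
  - port: 80
    targetPort: 8080
```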
This makes a container management tool an important part of any continuous delivery and continuous monitoring effort.
Taken Together
That gives us tools including Jenkins for CI and CD, and often for release management, though other tools specialize in the release portion, too. Combine Jenkins with an infrastructure-as-code solution such as Terraform or any of the others we’ve discussed in the past. Add in a tool such as Prometheus for gathering monitoring data, and a tool such as Grafana for visualizing the data, and you’re on the road to continuous delivery and continuous monitoring.
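On the monitoring side, the glue is often as simple as telling Prometheus where to scrape metrics from. A hedged example of a prometheus.yml fragment, with a made-up job name and target:

```yaml
# Illustrative prometheus.yml fragment; job name and target are placeholders.
global:
  scrape_interval: 15s       # how often to pull metrics
scrape_configs:
  - job_name: "sales-tax"
    static_configs:
      - targets: ["sales-tax:8080"]   # the service's /metrics endpoint
```

Grafana then points at Prometheus as a data source and builds dashboards from the same metric names.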
Of course, it isn’t that simple; I’m giving an overview here. While researching this blog, the people at Infostretch shared some information with me, and they too would see this as the 50,000-foot view. But that’s where we start. For a more complete picture, here is a simple diagram of continuous deployment that Infostretch shared with me:
And as long as we’re discussing the information that Infostretch shared with me, let’s also look at a sample of what the company monitors. You’ll see it isn’t “everything under the sun,” but focused points that give them actionable intelligence. Add a few application-specific monitoring points from within the application, and this turns into an application-performance gold mine.
| Classification | Metrics |
| --- | --- |
| Cluster Metrics | Overall Cluster Memory Utilization |
| | Overall Cluster CPU Utilization |
| | Overall Cluster FS Utilization |
| | Multi-series graph for individual service CPU Utilization |
| | Multi-series graph for individual service Memory Utilization |
| | Multi-series graph for individual service FS Utilization |
| Application Metrics | Service Request rate |
| | Service Error rate |
| | Service latency |
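The application metrics in that second group can be derived from little more than a log of requests. Here is a sketch of that derivation; the record format and the nearest-rank percentile method are my own illustration, not Infostretch’s implementation:

```python
# Sketch of deriving the table's application metrics (request rate,
# error rate, latency) from raw request records. The Request format
# is invented for illustration.
from dataclasses import dataclass

@dataclass
class Request:
    status: int          # HTTP status code
    latency_ms: float    # time taken to serve the request

def service_metrics(requests: list[Request], window_seconds: float) -> dict:
    errors = sum(1 for r in requests if r.status >= 500)
    latencies = sorted(r.latency_ms for r in requests)
    # 95th-percentile latency via a simple nearest-rank method
    p95 = latencies[max(0, int(len(latencies) * 0.95) - 1)]
    return {
        "request_rate": len(requests) / window_seconds,  # requests/second
        "error_rate": errors / len(requests),            # fraction of 5xx
        "latency_p95_ms": p95,
    }
```

In practice a metrics library exports counters and histograms for Prometheus to scrape rather than recomputing over raw logs, but the quantities are the same.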
There Is So Much More
There simply isn’t enough room in a single blog to cover all that is required and possible in continuous delivery and continuous monitoring. I’ll continue to try continuous blogging to help you out, and the other authors here on staging-devopsy.kinsta.cloud have good things to share on these topics, too.