Fully Automated CI/CD Pipeline: Your business needs it right away!
This blog emphasizes the need for a fully automated CI/CD pipeline. The rise of infrastructure virtualization has made automated solutions much easier to build, especially in the DevOps sphere. We will talk about the challenges involved and then walk through building one such solution, leveraging a range of tools.
Introduction
Seamless delivery of a technology-based end product is one of the primary business goals. The cornerstone of any web- or mobile-based application is its ability to provide its customers a healthy environment. Healthy, here, means maximum uptime and smooth releases of new, frequently added features and updates.
Now, to deliver these end products, the product owners or CXOs have to ensure that there are no pitfalls in the continuous integration and delivery of the application.
Traditionally, developers and DevOps engineers have pushed code, run tests, generated PRs and deployed code manually. This approach has many drawbacks, such as the lack of a single point of control and monitoring, and dependency on multiple tools or platforms. Apart from meeting business goals, what’s essential is to relieve the workforce of the manual build steps and rule out the possibility of human error. This can be achieved by introducing a fully automated CI/CD solution into the application’s ecosystem. That said, building such a solution poses challenges at every stage.
In this blog, we will look broadly at what a fully automated CI/CD pipeline has to offer, what the underlying challenges are, and how they can be dealt with.
Features of a Fully Automated CI/CD Pipeline: There are challenges too!
Ideally, a fully automated CI/CD pipeline is automated to the extent that a developer commits a change to the SCM and, within a few minutes, the change is deployed and visible.
Concisely, a CI/CD pipeline can be broken into these stages:
Commit: Code is pushed to a source code manager whenever the developers finish a change.
Build: The latest change is checked out on a server, and the artifact the application runs from is created.
Tests: Unit, security or system tests are run to ensure the application has no vulnerabilities and performs as expected.
Deploy: The build version is deployed to the environment.
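The four stages above can be sketched as a minimal shell script. Every step here is a placeholder, since the real commands vary per project; example real commands are noted in the comments.

```shell
#!/bin/sh
# Minimal sketch of the four CI/CD stages. Each echo is a placeholder for
# the real commands, e.g. Build -> `npm ci && npm run build`, Tests ->
# `npm test`, Deploy -> updating the target environment.
set -e                               # a failing stage aborts the pipeline

COMPLETED=""
for stage in commit build test deploy; do
  echo "running stage: $stage"
  COMPLETED="$COMPLETED $stage"
done
echo "pipeline finished:$COMPLETED"
```

The `set -e` at the top captures the essential pipeline property: a failure at any stage stops everything after it.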
A fully automated CI/CD pipeline needs no human intervention. It has a sequential flow of actions, and you get a centralized view of progress across all the stages. Each of these stages comes with its own challenges. Let’s see what these stages and challenges are.
– The pipeline is triggered as soon as the code is pushed to SCM.
– The latest change is checked out on a central CI/CD server as the pipeline kicks-off.
– Right after the checkout, the build starts, and the artifacts are created.
– The application is brought up, and tests are run.
– Pipeline fails and notifies the administrators in case tests fail.
– If the tests are successful, the build is deployed to the application servers.
– Application status and availability are checked.
– If the application breaks, administrators are notified, and the deployment is rolled back to the last stable version.
Building the Solution: Fitting all the pieces of the puzzle together
Let’s start with defining the approach towards building a fully automated CI/CD pipeline. First things first, we are going to use the following set of tools and services:
- Bitbucket
- Jenkins 2.0
- Nginx
- Node
- Docker
- AWS EC2, S3, ECS, ECR
– Jenkins is the powerhouse of this solution. We use the pipeline feature of Jenkins 2.0, built on top of Groovy, to script all the stages of CI/CD end to end.
– We use the webhooks in the Bitbucket repository to notify a specific pipeline job in Jenkins.
– An Nginx web server with Lua scripting sits on the Jenkins side; it processes the webhook request and validates the branch to which the code was pushed. It then forwards the request to Jenkins, which in turn triggers the pipeline for that particular project or application. At this point, the administrators are also notified that the deployment has started.
One important part of this whole mechanism is a ‘tag’ associated with each commit made to the SCM. This tag is fetched on the CI/CD server and persists throughout the lifecycle of the pipeline; it is captured as an environment variable.
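Capturing that commit tag can be sketched as below, assuming a git checkout on the CI/CD server; when run outside a repository it falls back to a fixed placeholder so the example stays self-contained.

```shell
#!/bin/sh
# Capture a short commit id to use as the build/image tag for the whole
# pipeline run. 'a1b2c3d4' is only an illustrative fallback for when this
# is run outside a git checkout.
COMMIT_TAG=$(git rev-parse --short=8 HEAD 2>/dev/null || echo "a1b2c3d4")
export COMMIT_TAG                    # visible to every later pipeline step
echo "pipeline tag: $COMMIT_TAG"
```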
In this example solution, we are building and deploying a Node application. In the next stage, Node dependencies are installed and the application is built.
– We run the Node application, and tests are run against it.
– If the tests are successful, the application’s Dockerfile is downloaded from an S3 bucket and a Docker image is built with all the Node packages installed inside it.
– We now have the Docker image. Let’s push it to AWS ECR, AWS’s managed in-house Docker registry. We use the commit tag we captured earlier to version our Docker image.
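The image versioning can be sketched like this; the account id, region, and repository name are made up for illustration, and the docker commands are printed rather than executed so the snippet stays self-contained.

```shell
#!/bin/sh
# Build the fully qualified ECR image name from the commit tag captured
# earlier. All identifiers here are hypothetical.
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=myapp
COMMIT_TAG=a1b2c3d4
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${COMMIT_TAG}"

# Printed, not run: the push itself needs docker and ECR credentials.
echo "docker tag ${REPO}:latest ${IMAGE_URI}"
echo "docker push ${IMAGE_URI}"
```

Versioning images by commit tag means every deployed artifact can be traced back to the exact change that produced it.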
– Once this Docker image is in place, we update the AWS ECS task definition to point to the latest Docker image version using AWS APIs. The deployment of containers starts at the target EC2 instances. Simultaneously, we also capture the current task definition template which can be used in case of deployment rollback.
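This deploy step can be sketched with the AWS CLI. The cluster, service, and task definition family names are hypothetical, and the commands are echoed rather than executed so the snippet runs on its own.

```shell
#!/bin/sh
# Sketch of the deploy step: capture the current task definition for a
# possible rollback, register a new revision pointing at the new image,
# then point the service at it. All names are hypothetical.
CLUSTER=prod-cluster
SERVICE=myapp-service
FAMILY=myapp

echo "aws ecs describe-task-definition --task-definition ${FAMILY}"          # saved for rollback
echo "aws ecs register-task-definition --cli-input-json file://taskdef.json" # new revision
echo "aws ecs update-service --cluster ${CLUSTER} --service ${SERVICE} --task-definition ${FAMILY}"
```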
– As the new Docker containers come up, we use the AWS ECS APIs to check the health of the related service repeatedly for a set amount of time. Once the containers are confirmed up and healthy, the pipeline notifies the administrators that the deployment was successful.
– If the AWS ECS APIs return an unstable service state, the pipeline notifies the administrators about it. Next, we use the previously captured task definition template to update the service again. This ends with a notification that the deployment was unsuccessful and had to be rolled back.
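The check-and-rollback logic can be sketched as a polling loop. `service_is_healthy` here is a stub standing in for a real probe (for ECS, e.g. comparing `runningCount` to `desiredCount` in `describe-services` output); the stub reports healthy on the third probe purely so the example runs on its own.

```shell
#!/bin/sh
# Poll service health a bounded number of times; roll back if it never
# stabilizes. service_is_healthy is a demonstration stub, NOT a real probe.
PROBES=0
service_is_healthy() {
  PROBES=$((PROBES + 1))
  [ "$PROBES" -ge 3 ]          # stub: "healthy" from the third probe on
}

rollback() {
  echo "rolling back to previous task definition"
}

attempts=0
max_attempts=5
STATUS=failed
while [ "$attempts" -lt "$max_attempts" ]; do
  if service_is_healthy; then
    STATUS=deployed
    break
  fi
  attempts=$((attempts + 1))
  sleep 0                      # a real pipeline would wait, e.g. 15s, here
done

if [ "$STATUS" = deployed ]; then
  echo "deployment successful"
else
  rollback
  echo "deployment failed and was rolled back"
fi
```

Bounding the loop with `max_attempts` is what keeps a never-stabilizing deployment from hanging the pipeline forever.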
Summing it Up: It is Challenging, Efficient and a Life-Saver!
These were the different stages of the CI/CD pipeline. It is capable of serving your production environment, provided certain considerations are taken care of while setting it up.
So, we can agree that the need for a fully automated, one-click CI/CD pipeline cannot be overstated. Of course, there is a cost, and there are challenges around the setup, but it is well worth it for your application’s ecosystem. Start using one right away and relieve your DevOps personnel of manual, tedious executions.