How to Build a Robust Microservice Architecture? Continuous Delivery and Other Best Practices – Part II
Microservices have redefined how traditional and legacy applications are built. In Part I, we provided an overview of microservices and the key benefits of microservice architecture. Unlike the monolithic architectural style, microservice architecture brings business agility and improves time to market. Microservices break a larger service into smaller functions and create “mini-APIs” for each of them, which can be combined into a larger suite of APIs that is easy to test, deploy and port, and far easier to manage than one single large API. Most companies have started adopting this architectural pattern owing to benefits such as:
- Loosely coupled service components
- Increased availability, fault isolation and resilience
- No long-term commitment to a single technology stack, as each service can choose its own
- Immutable artifacts and easy to understand code
- Organized around business capabilities
- Project agnostic
- Decentralized data management
- Quick and shorter deployment cycles
Apart from instilling business agility and making the architecture scalable, microservices also enable DevOps automation with continuous delivery and deployment.
The concept of Continuous Delivery
Continuous Delivery aims to get new features, changes, bug fixes and releases into production quickly, safely and continuously. With continuous delivery, deployment becomes predictable and can be performed on demand. The code is always in a releasable state, even in a distributed development model where thousands of developers push changes every minute from various locations.
Some of the key benefits that companies achieve if they practice continuous delivery include:
- Prevent risky releases – Preventing a risky release is crucial, as a release with bugs can hamper both the user experience and the profitability of the software. Continuous delivery makes the deployment process swift and accurate, drastically mitigating risk and thereby reducing downtime and performance problems.
- Improve time to market – The traditional software delivery cycle was long because development, testing and bug fixing were sequential. With regression and pipeline automation, both dev and QA teams can fully leverage Agile, remove roadblocks and improve build quality.
- Improved user experience – Automation helps developers discover regressions quickly and fix them continuously, without delay. With an automated deployment pipeline, build quality improves along with the delivery process. This leads to better software, zero downtime and an improved user experience.
- Reduced costs – Companies that invest in test automation, environment provisioning and delivery-pipeline automation prevent risky deployments and also reduce the cost of delivering incremental changes to the product by eliminating many of the fixed costs associated with shipping releases.
Continuous delivery also helps teams remain iterative throughout the process. Early and continuous feedback throughout the product lifecycle means better working software, always. Moreover, some Agile product companies also follow a hypothesis-driven product development approach, where ideas are put to the test even before they are developed.
Continuous Delivery and Microservices
Setting up a continuous delivery pipeline and automated testing is extremely challenging in the monolithic architectural style, mainly because of dependencies: services aren’t independent, and the entire logic sits within the pipeline.
With microservices, software teams can test separate components and prevent risky deployments. In microservice architecture, the entire logic moves from the pipeline to the endpoints, making services independent. Moreover, without microservices, many dependencies have to be met to prepare the various environments for testing. Microservice architecture enables development teams to set up these environments quickly, deploy fast and automate the complete cycle.
In monolithic architecture, the entire codebase is pushed to production at once, which increases both the chance of errors and the time needed to debug them. Microservices improve on the traditional delivery approach by shipping releases specific to individual services and components, making each release safe, cost-effective and quick to debug. Because microservice architecture produces a resilient system, even when one service fails, the rest of the system keeps running, so the impact of a broken deployment release is comparatively small. Apart from this, software teams work with a “You Build, You Run” approach: each team independently runs and maintains its own software.
Outlined below are a few points to consider while setting up continuous delivery for microservices:
- Services should not be dependent on each other so that testing separate components is possible
- Group related concerns together
- If two services are tightly coupled and both change frequently, unify them into one where possible. This again makes it quicker to test a single aspect of the application
- Ensure semantic versioning so that you build and deploy against the correct versions of services, especially when loose coupling is difficult
- Support multiple database deployments in the test/staging environment as each service is based on its own database.
- As some of the services might share databases, it is critical to check data dependencies and test the dependent services when there is a change in the shared data schema.
- Enable the microservice architecture with container technologies such as Docker.
- Set up CI/CD for each service on the same building blocks, with identical stages, viz. ‘build’, ‘test’, ‘deploy’ and ‘promote’.
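Running every service through the same set of pipeline stages can be sketched as follows. This is a minimal illustration, not a real CI system: the service names and the `stage_ok` hook are hypothetical stand-ins for the scripts or containers a CI server would invoke at each stage.

```python
# Minimal sketch of a per-service pipeline built from the same four
# stages the text names; a real CI system would run scripts or
# containers at each step instead of a callback.

STAGES = ["build", "test", "deploy", "promote"]

def run_pipeline(service, stage_ok):
    """Run the stages in order for one service; stop at the first failure.

    `stage_ok(service, stage)` is a hypothetical hook that returns True
    when the stage succeeds.
    """
    completed = []
    for stage in STAGES:
        if not stage_ok(service, stage):
            return completed, stage  # report the failed stage
        completed.append(stage)
    return completed, None

# Because every service shares the same stage names, onboarding a new
# service to CI/CD is just another call to run_pipeline.
done, failed = run_pipeline("orders", lambda svc, stage: True)
```

Keeping the stage names identical across services is the point: the pipeline definition becomes a shared building block, and only the per-stage scripts differ per service.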
There are various best practices to follow while building products with microservice architecture. Outlined here are some of them:
7 Best Practices for building a robust microservice architecture
1. Keep them micro and loosely coupled – Services should be loosely coupled, developed and deployed independently, and each service should have its own private data. These services should be small and focused, yet large enough to keep inter-service communication to a minimum. It is advisable to have clear usage documentation for each service, and the documentation for all services should be kept in one common place.
2. Develop service templates – Services do a lot more than execute business logic: monitoring, client-side load balancing and much more. Developing service templates for these common tasks can save a lot of time. Moreover, standardizing the communication method between services becomes important as the number of services grows. Use HTTP for synchronous communication and a message broker or webhooks for asynchronous communication.
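A service template can be sketched as a base class that bundles the cross-cutting concerns every service needs. This is an illustrative sketch only: the class names are invented, and the in-process `MessageBroker` stands in for a real broker such as RabbitMQ or Kafka (an assumption; the article names no specific broker).

```python
import queue

class MessageBroker:
    """Toy in-process broker standing in for a real message broker."""
    def __init__(self):
        self.topics = {}

    def publish(self, topic, message):
        self.topics.setdefault(topic, queue.Queue()).put(message)

    def consume(self, topic):
        q = self.topics.get(topic)
        return q.get_nowait() if q is not None and not q.empty() else None

class ServiceTemplate:
    """Common, non-business concerns every service inherits."""
    def __init__(self, name, broker):
        self.name = name
        self.broker = broker

    def health(self):
        # A uniform health endpoint that monitoring and client-side
        # load balancing can rely on for every service.
        return {"service": self.name, "status": "ok"}

    def emit_event(self, topic, payload):
        # Asynchronous communication goes through the shared broker.
        self.broker.publish(topic, {"source": self.name, "data": payload})
```

With such a template, a new service only has to add its business logic; monitoring hooks and the standardized communication path come for free.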
3. Take care of development environments – With multiple smaller services to manage, ensure that development environments are packaged as virtual machines. This saves development teams a lot of time. Also, avoid environments shared between development teams, as that can create a lot of confusion and chaos.
4. Integrate the code quickly – Make sure code is integrated into the main branch as soon as changes are made. Updates to the main branch should trigger automated builds and, in turn, automated tests to verify build quality. If all the tests pass, the CI system should ship the release to deployment. If feature flags are used to release features over a period of time, make sure that period is defined and short: long-lived feature flags are harder to debug and test.
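Enforcing the "defined and short" flag lifetime can be done in code. The sketch below is an assumption about how one might do it, not a known library: the class name and the 14-day limit are illustrative choices.

```python
from datetime import date, timedelta

class FeatureFlag:
    """Feature flag with an enforced short lifetime (illustrative sketch;
    the 14-day cap is an assumption, not from the article)."""

    MAX_LIFETIME = timedelta(days=14)

    def __init__(self, name, created, expires):
        if expires - created > self.MAX_LIFETIME:
            # Long-lived flags are harder to debug and test, so refuse
            # to create one whose lifetime exceeds the allowed period.
            raise ValueError(f"flag {name!r} outlives the allowed period")
        self.name = name
        self.expires = expires

    def is_active(self, today):
        """A flag only gates behaviour until its expiry date."""
        return today < self.expires
```

Rejecting overlong flags at creation time keeps the pressure on teams to finish the rollout and delete the flag, rather than letting it linger in the codebase.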
5. Standardize the deployment package – This enables quick automation. Make sure the deployment package is deployable to any environment without changes. Also, isolate each deployed service so that resource consumption is fairly distributed across services. Try to keep service-specific business logic out of shared libraries, as there is little control over when updates to those libraries get deployed.
6. Provision for failures – With many small services running constantly, it is difficult to pinpoint the root cause when several services fail at once. Use short timeouts on downstream API calls, as this prevents one slow service from dragging the others down with it. With timeouts in place, the one failing service is easy to detect, because the other services all point at it. Use centralized monitoring tools so that it becomes easy to identify whether a failure spans services or is confined to a single instance of one service. Things to monitor include memory usage, network operations and request latency.
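The timeout idea can be sketched as a small wrapper around a downstream call. This is a simplification under stated assumptions: the elapsed-time check here stands in for real socket or request timeouts in an HTTP client, and the result format is invented for illustration.

```python
import time

def call_downstream(fn, timeout_s=0.2):
    """Call a downstream dependency and treat slow or failing answers
    as errors, so one degraded service cannot silently stall callers.

    Sketch only: `fn` stands in for an HTTP call, and the post-hoc
    elapsed check stands in for a real client-side request timeout.
    """
    start = time.monotonic()
    try:
        value = fn()
    except Exception:
        return {"ok": False, "error": "downstream raised"}
    if time.monotonic() - start > timeout_s:
        # Timeouts make the failing service easy to spot: every caller
        # reports the same slow dependency.
        return {"ok": False, "error": "timeout"}
    return {"ok": True, "value": value}
```

Feeding these structured error results into centralized monitoring is what lets you tell a service-wide failure apart from a single bad instance.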
7. Update services with zero downtime – While updating a service, make sure there is no downtime. This can be achieved either by temporarily using additional resources or with an approach called rolling restart.
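The rolling restart mentioned above can be sketched as follows. The function and hook names are hypothetical; a real implementation would also drain traffic and health-check each instance before moving on.

```python
def rolling_restart(instances, restart_one):
    """Restart service instances one at a time so the remaining
    instances keep serving traffic, i.e. zero downtime.

    `restart_one` is a caller-supplied hook (hypothetical) that takes
    one instance out of rotation, updates it, and brings it back.
    """
    restarted = []
    for inst in instances:
        # At most one instance is out of rotation at any moment, so
        # overall capacity never drops to zero during the update.
        restart_one(inst)
        restarted.append(inst)
    return restarted
```

The alternative named in the text, temporarily using additional resources, amounts to starting updated instances alongside the old ones before retiring them, trading extra capacity for an even smaller risk window.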
Product companies and software teams do not just want to leverage technology; they also want to go to market faster, in fewer iterations. With traditional monolithic architecture this is difficult to achieve, as one big monolith is time-consuming and challenging to maintain and to continuously integrate. A single monolithic deployment artifact is too big for software teams to fully understand because of its complexity and size.
Microservice architecture resolves this complexity and helps product companies build loosely coupled services that can be changed, scaled and managed easily. With the entire application divided into small services, each running on its own but communicating with the others through APIs, understanding the application becomes a lot easier. Microservices also enable DevOps and help companies focus on zero downtime, continuous integration and deployment. With so many benefits, microservices are gaining traction and are here to stay.