Dos and Don’ts to keep in mind while migrating to Graviton
About AWS Graviton
AWS Graviton is an ARM-based processor family designed by AWS to deliver high performance at a lower cost than comparable Intel x86-based processors. Much like the upgrade notifications you receive from software such as WhatsApp, which bring performance improvements, new features, and bug fixes over previous versions, AWS has shipped three generations of Graviton processors, each building on the success of the last. The first generation, Graviton, launched with limited resource support. Graviton2 followed with Arm Neoverse cores, a much larger cache, and hardware acceleration, which significantly boosted performance. Many AWS services, such as AWS Fargate, AWS Lambda, and Amazon EMR, now support Graviton processors. The latest offering, Graviton3, delivers up to 25% better compute performance than Graviton2 and is up to two times faster for some workloads.
Why are organizations planning to switch their workloads to AWS Graviton?
Finding 1: As you can see below, I checked the hourly cost of our current workload's instance type (m3.xlarge) and compared it with the Graviton alternative; the Graviton instance came out cheaper. The tool used to assess the cost difference is → https://instances.vantage.sh/
Test Case 1: I launched two instances, one t4g.micro and one t2.micro, to compare CPU usage and performance between the Graviton-based instance and the Intel-based one. I ran the script below to increase the CPU load on both instances. Before running it, make sure stress-ng is installed on your system.
On Debian-based distributions, run the command below (prefix with sudo if you are not running as root):
– apt-get install stress-ng
#!/bin/bash
# Define the number of CPU stress workers (one per vCPU)
CPU_WORKERS=$(nproc)
# Generate CPU load for 60 seconds; stress-ng stops on its own once the timeout expires
stress-ng --cpu "$CPU_WORKERS" --timeout 60s
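To compare the two instance types with a number rather than just the CPU graph, stress-ng can also report throughput. A minimal sketch to run on each instance (`--metrics-brief` is a standard stress-ng flag):

```shell
# Confirm which architecture this instance reports:
#   x86_64  -> Intel/AMD (e.g. t2.micro), aarch64 -> Graviton (e.g. t4g.micro)
uname -m

# Run an identical, measurable load; --metrics-brief prints a "bogo ops/s"
# throughput figure that can be compared across the two instances
stress-ng --cpu "$(nproc)" --timeout 60s --metrics-brief
```

The higher the bogo ops/s under the same timeout and worker count, the more work the instance got done during the test.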
Test Case 2: We ran load tests on t3.micro and t4g.micro instances, which have the same vCPU count and memory. Over the past hour, we found that the Graviton instance performed better, even though it costs about 20% less. So, if we don't need more CPU and memory, we can save money by switching to Graviton instances with similar performance. Below is the CPU utilization graph for the last hour for reference.
Summing Up Test Results and Findings
After executing the bash script, we noticed that the t4g.micro instance, with its 2 cores, performed better than the single-core t2.micro instance. The CPU utilization graph clearly illustrates this after load testing with stress-ng. What's more, there's a 20% cost difference between the two instances. These results show that AWS Graviton instances provide better performance at a lower cost. In the second test case, the Graviton instance (t4g.micro) also performed slightly better while costing 20% less than the t3.micro with similar CPU and memory, so switching to Graviton is a cost-effective choice for comparable performance. That's why organizations are now moving their workloads to AWS Graviton. Now let's discuss what we must keep in mind while migrating our workloads to AWS Graviton.
Perform Load Testing of your Workload
Dos:
- As you can observe, I conducted CPU load testing using stress-ng to monitor the behavior and performance of the Graviton instance. Likewise, when you plan to migrate a workload to Graviton, make sure you load-test the environment using tools like Apache Benchmark or JMeter. This will help you determine whether a more compute-oriented instance type is necessary and how the Graviton instances handle the traffic load, so you can then decide whether to proceed with Graviton.
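For an HTTP service, a quick smoke test with Apache Benchmark might look like the following; the URL is a hypothetical placeholder for your own endpoint:

```shell
# 1,000 requests, 50 concurrent; compare "Requests per second" and the
# latency percentiles between the x86 and Graviton environments
ab -n 1000 -c 50 http://app.example.com/health
```

Run the same command against both environments under otherwise identical conditions so the numbers are comparable.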
Don’t:
- Don't blindly migrate your infrastructure without assessing your workload, especially when planning to roll the change out to production. Be extra cautious, and have a rollback plan ready in case the instances fail to deliver the expected performance; in a production environment, there is little time to test and troubleshoot issues.
Monitor your Workload
Dos:
- After migrating your workload to Graviton, keep an eye on performance metrics such as CPU utilization, memory utilization, disk I/O, and network traffic using a monitoring tool like Amazon CloudWatch or New Relic.
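As a sketch, the same CPU utilization data shown in the earlier graphs can be pulled from CloudWatch with the AWS CLI; the instance ID below is a hypothetical placeholder:

```shell
# Average CPUUtilization over the last hour, in 5-minute buckets
# (date -d '1 hour ago' is the GNU coreutils form)
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time   "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average
```

Running this against the old and new instances gives you a like-for-like utilization comparison without opening the console.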
Don’t:
- Don't skip benchmarking your workload. While Graviton may promise cost savings and good performance in general, your workload's performance may vary.
Do Cross-Compile your Dockerfile
Dos:
- Choose a multi-platform base image that supports both x86 and Arm64 architectures, and build on both. As technology evolves, we see new kinds of processors, like AWS Graviton, that are cheaper and faster. To make our work easier, we can use multi-platform Docker images, which run on different processor architectures without changing the Dockerfile each time. It is like having one-size-fits-all containers that run on different CPU architectures, so build and test on both Arm and x86 processors.
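As a sketch of that approach, `docker buildx` can produce a single multi-architecture image; the builder name and registry tag below are hypothetical placeholders:

```shell
# One-time setup: create and select a builder that can target multiple platforms
docker buildx create --name multiarch --use

# Build one tag for both architectures and push it; pulling this tag on an
# x86 host or a Graviton host resolves to the matching image automatically
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```

Because both variants live behind one tag, your deployment manifests stay identical across instance families.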
Don’t:
- Don't skip updating the Dockerfile, as it's essential for compatibility. Adopting a multi-platform Dockerfile approach will change how you build images, so be sure to update the build strategy in your existing deployment scripts.
Do Optimize your code
Dos:
- Create documentation or a checklist to verify code and package compatibility with ARM-based processors, and note any issues you encounter during the transition to Graviton processors.
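One simple check worth adding to such a checklist: verify which architecture a host and its binaries were built for. A minimal sketch using standard tools:

```shell
# Architecture the kernel reports: x86_64 on Intel/AMD, aarch64 on Graviton
uname -m

# Inspect an installed binary; the output names the target architecture
# ("x86-64" or "ARM aarch64"), which must match the host above
file "$(command -v ls)"
```

Any binary whose reported architecture does not match the host will need a rebuilt or repackaged ARM64 version before migration.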
Don’t:
- Don't ignore continuous optimization. Regularly revisit and fine-tune your code as new ARM processors and optimization techniques emerge.
Do Consider Cost Savings
Dos:
- Calculate potential cost savings based on your workload's performance requirements to see whether Graviton instances are a cost-effective choice.
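As a back-of-the-envelope sketch, the hourly prices from https://instances.vantage.sh/ can be turned into a monthly comparison; the figures below are example us-east-1 on-demand prices for t3.micro and t4g.micro and should be replaced with the numbers for your own region:

```shell
X86_PRICE=0.0104   # t3.micro, USD/hour (example figure)
GRV_PRICE=0.0084   # t4g.micro, USD/hour (example figure)

# ~730 hours in a month; print the monthly cost of each and the percentage saved
awk -v x="$X86_PRICE" -v g="$GRV_PRICE" 'BEGIN {
  printf "x86: $%.2f/mo  graviton: $%.2f/mo  saving: %.1f%%\n",
         x * 730, g * 730, (1 - g / x) * 100
}'
# With these example prices the saving works out to about 19.2%
```

Multiply the monthly figure by your instance count to estimate the fleet-wide saving before committing to the migration.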
Don’t:
- Don't compromise your workload's performance solely for cost savings. In our CPU load testing, we observed a sudden spike at one point in time: Graviton initially exhibited higher utilization than x86, but after some time performed better than the x86 instances under a normal workload. It's therefore advisable to monitor performance over several days to determine whether Graviton processors are suitable for your workload.
Check the compatibility of running software
When migrating your current workload to AWS Graviton instances, especially when you have a mix of microservices and standalone servers running various applications like Pritunl, Jenkins, ELK, and Grafana, it's essential to approach the compatibility assessment carefully.
Dos:
- Conduct a comprehensive assessment of your application landscape. Categorize applications into microservices and standalone servers, identifying their specific compatibility requirements.
- Keep detailed documentation of the steps taken to migrate your standalone applications to Graviton servers, including version compatibility and the steps involved in setting up each application on Graviton instances. Also note the challenges you faced while setting up ARM-based applications, such as configuring the Pritunl server or Grafana on a Graviton instance, and specify their compatibility requirements.
Don’t:
- Don’t forget to back up your standalone applications before migrating to Graviton. This precaution is essential in case unexpected issues arise after moving the production workload to Graviton.
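A minimal sketch of that backup step using the AWS CLI; the instance ID is a hypothetical placeholder:

```shell
# Create an AMI of the existing x86 instance before migrating, so you can
# roll back by relaunching from it if the Graviton move goes wrong
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "pre-graviton-backup-$(date +%Y-%m-%d)" \
  --description "Backup taken before Graviton migration"
```

Keep the AMI until the Graviton deployment has run cleanly for long enough that you are confident a rollback will not be needed.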
Conclusion
Migrating workloads to AWS Graviton can be a strategic move to achieve cost savings and improved performance. It's crucial to recognize that some services are easier to migrate than others; for instance, moving a managed database to Graviton provides cost benefits without much effort. The absolute cost difference is small on the smaller Graviton instances, but when you switch to larger instance types the savings and performance gains become far more significant. Verify flawless operation, confirm dependencies, and adapt your builds for ARM64. While Graviton offers cost-efficiency and performance, diligence in balancing cost and compatibility is crucial.