The Azure Well-Architected Framework is a set of guidelines spanning five key pillars that can be used to optimise your workloads. In previous blogs, we covered Reliability, Security, Cost Optimisation and, most recently, Operational Excellence. This time we will focus on Performance Efficiency, the fifth and final pillar of the framework.
Overview of Performance Efficiency
Prior to the age of cloud computing, measuring and scaling performance was an extremely important factor in managing applications and workloads. To ensure sites and services could handle increases in load and traffic, it was very common to overprovision hardware to handle spikes in demand. Although this would ensure business requirements could be met, it wasn't a very cost-effective solution. Since the advent of cloud computing, one of the biggest drivers for adopting cloud solutions has been the ability to scale on demand whilst keeping costs down. Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner.
Although many cloud services offer some degree of Performance Efficiency out of the box, as with on-premises systems you still have to manage, test and monitor your workloads to get the best out of the solutions available.
A Well-Architected workload viewed through the lens of Performance Efficiency is a workload that is designed in a way that improves performance whilst ensuring it can scale to meet users’ demands. Design patterns and possible trade-offs against security, cost and operability also need to be considered.
Specific to Performance Efficiency, at a high level you should be thinking about the following areas and processes:
- Review your workload using the performance efficiency checklist
- Understand performance principles to assist with your strategy
- Design for performance
- Plan for growth and consider scalability
- Use the correct design pattern to build a performant workload
- Consider trade-offs such as security, cost, efficiency and operability.
Performance Efficiency Principles
When designing for Performance Efficiency in Azure, there is a set of principles covered in the Framework that you should think about. Those principles include:
- Design for horizontal scaling by understanding business requirements, service demands, tooling and cloud service options. Horizontal scaling allows for elasticity: instances are added (scale-out) or removed (scale-in) in response to changes in load. Scaling out can improve resiliency by building redundancy, while scaling in can help reduce costs by shutting down excess capacity. Ensure you apply performance strategies early in design. Define a capacity model that meets your business requirements, then test applications at the upper demand limits. Utilise Azure PaaS offerings that allow you to take advantage of automatic scaling features and reduce management effort.
- Test early and test often to catch issues in the design process. Stress tests and load tests are great ways to measure an application’s performance under a specific load or even maximum loads. It’s important you establish performance baselines by understanding the current efficiency of the application and its supporting infrastructure. Use continuous performance testing throughout any development effort to ensure codebase changes don’t affect performance.
- Continuously monitor performance in production by observing the workload as a whole to understand the overall health of the solution. A workload is only as strong as its weakest part, this is why it’s very important to monitor the health of the entire solution and not just specific parts or services. Measure infrastructure, applications and dependant services against scalability and resiliency. Ensure you re-evaluate the needs of the workloads continuously to identify improvement opportunities.
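To make the scale-out/scale-in principle concrete, here is a minimal sketch (illustrative only, not an Azure API) of the decision an autoscale rule effectively makes: target enough instances to keep average CPU below a threshold, bounded by fixed minimum and maximum instance counts.

```python
import math

def desired_instances(avg_cpu_percent: float, current: int,
                      target_cpu: float = 65.0,
                      minimum: int = 2, maximum: int = 10) -> int:
    """Return the instance count needed to bring average CPU to the target.

    Total work is approximated as avg CPU * instance count; dividing by the
    target utilisation sizes the pool, clamped to the configured bounds.
    """
    if current <= 0:
        raise ValueError("current instance count must be positive")
    needed = math.ceil(current * avg_cpu_percent / target_cpu)
    return max(minimum, min(maximum, needed))

# 4 instances at 90% CPU -> scale out to 6; 4 instances at 20% -> scale in to 2.
print(desired_instances(90.0, 4))  # 6
print(desired_instances(20.0, 4))  # 2
```

The minimum bound mirrors the redundancy point above: even when load is low, keeping at least two instances preserves resiliency.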
Performance Efficiency Recommendations & Tips
Some of the best tips or recommendations for Performance Efficiency are as follows:
- Autoscale – Use Azure services that can scale automatically or based on a schedule before looking to create custom scaling workloads and services.
- Avoid Client Affinity – By avoiding client affinity, you ensure requests can be routed to any instance. This means the number of instances is irrelevant and scaling will be simpler.
- Offload Intensive Tasks – Using worker roles or background jobs, you can take a resource-heavy process and offload it to a separate task. This enables the service to continue receiving requests and remain responsive.
- Data Partitioning – Maximise performance and allow simpler scaling by splitting data across databases and servers. Understand and implement the correct data partitioning technique including horizontal, vertical and functional.
- Use Caching – Use caching wherever possible to reduce the load on resources and services that generate or deliver data. Caching is typically suited to data that is relatively static, or that requires considerable processing to obtain.
- Capacity Planning – Load can be impacted by world events, such as political, economic, or weather changes. Test variations of load prior to events, including unexpected ones, to ensure that your application can scale.
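As an illustration of the caching tip, the sketch below shows a simple in-process TTL (time-to-live) cache. This is a hypothetical helper, not a specific Azure service; in practice a distributed cache such as Azure Cache for Redis fills this role across instances, but the pattern is the same: serve relatively static or expensive-to-compute data from the cache and only recompute once the entry expires.

```python
import time

class TTLCache:
    """In-process cache where entries expire after a fixed time-to-live."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get_or_compute(self, key, compute):
        """Return the cached value if still fresh, otherwise recompute and store it."""
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]  # cache hit: skip the expensive call
        value = compute()
        self._store[key] = (value, now + self.ttl)
        return value

cache = TTLCache(ttl_seconds=60)
report = cache.get_or_compute("daily-report", lambda: "expensive result")
```

Within the TTL window, repeated requests for the same key never touch the backing service, which is exactly the load reduction the recommendation describes.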
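The data partitioning tip can also be sketched briefly. The function below (an illustrative helper, not an Azure SDK call) shows horizontal partitioning by shard key: a stable hash of the partition key routes each record to one of N shards, so related data stays together while load spreads across databases or servers.

```python
import hashlib

def shard_for(partition_key: str, shard_count: int) -> int:
    """Map a partition key to a shard index, deterministic across processes.

    A cryptographic hash is used instead of Python's built-in hash(), which
    is randomised per process and would route the same key inconsistently.
    """
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % shard_count

# All of a user's records land on the same shard, enabling single-shard queries.
shard = shard_for("user-123", 4)
```

Note the trade-off alluded to earlier: changing the shard count remaps most keys, so schemes such as consistent hashing are often used when the number of shards must grow over time.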
Over the last five blog posts, we have covered the Azure Well-Architected Framework, including its five pillars and principles, and shared some useful tips along the way.
As mentioned previously, a great place to further your understanding of the framework whilst reviewing a current workload is the Well-Architected Review located here alongside Microsoft Learn documentation.
For a more in-depth Architecture Review or a specific Performance Efficiency review feel free to reach out to the Transparity Azure Cloud Services team.