How Can You Save Money on your AWS Cloud Infrastructure?
Cloud computing continues to take the digital world by storm, and is predicted to grow exponentially over the coming five years. According to Tech Jury, the public Cloud computing market will be worth $800 billion by 2025, and by 2024, enterprise cloud spending will make up 14% of IT revenue globally. And it’s easy to understand why. We’ve already written about the many benefits of moving your business to the Cloud, including resource scalability, resilience and security.
But adoption is just the first part of the process, and it’s important to look at how companies can keep AWS Cloud costs from spiraling out of control. Our DevOps Team offers you some top tips on what you can do to improve your financial efficiency when it comes to the Cloud.
1. Budget AWS Cloud Services Carefully
Cloud services require many IT departments to come up with a new way of budgeting. Unlike traditional infrastructure costs, most Cloud services are subscription-based. Licenses are usually granted on either a usage or per-user basis, which means it’s important to keep tabs on the number of staff members who need access at any given time. You can then use this as the basis for estimating monthly costs.
One of the most common reasons for surpassing your original budget is a lack of understanding around the demand for a particular service. This is why building flexibility into a Cloud infrastructure budget is particularly important. It’s also worth conducting a monthly audit of which users no longer need a license - this can provide you with a quick win when it comes to small cost savings.
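As a rough sketch of this kind of audit, the snippet below estimates monthly per-user spend and flags licenses with no recent logins. The prices, user names and login dates are all hypothetical - real license data would come from your identity provider or a billing export.

```python
# Sketch: estimate monthly spend for per-user subscriptions and flag
# licenses with no recent activity. All figures here are hypothetical.
from datetime import date, timedelta

def estimate_monthly_cost(active_users, price_per_user):
    """Simple per-user subscription estimate."""
    return active_users * price_per_user

def stale_licenses(last_login_by_user, today, max_idle_days=30):
    """Users who haven't logged in recently are candidates for removal."""
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(u for u, last in last_login_by_user.items() if last < cutoff)

logins = {
    "alice": date(2024, 5, 28),
    "bob": date(2024, 3, 1),    # idle for months
    "carol": date(2024, 5, 30),
}
print(estimate_monthly_cost(len(logins), 25.0))        # 75.0
print(stale_licenses(logins, today=date(2024, 6, 1)))  # ['bob']
```

Running something like this monthly gives you the "quick win" list of licenses to reclaim.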
2. Remove Unused Resources
A key way of optimizing AWS Cloud costs is to look for unused or unattached resources. There are often situations in which a service user forgets to turn off a server once they’ve completed a job. In another common case, an administrator may forget to remove storage attached to instances they terminate. The result is that an organization’s AWS and Azure bills include charges for resources it once purchased but is no longer using. This is why it’s so important to identify unused and unattached resources and remove them.
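As a sketch of how you might hunt for such resources on AWS, the snippet below filters EBS volumes in the "available" state (i.e. not attached to any instance). The filtering logic runs on sample data here; the commented-out boto3 call shows how you might feed it real volumes, assuming you have AWS credentials configured.

```python
# Sketch: find unattached EBS volumes that may be safe to delete.
# The filtering logic is pure, so it can be checked against sample data.
def unattached_volumes(volumes):
    """Return IDs of volumes with no attachments ('available' state)."""
    return [v["VolumeId"] for v in volumes
            if v.get("State") == "available" and not v.get("Attachments")]

sample = [
    {"VolumeId": "vol-aaa", "State": "in-use",
     "Attachments": [{"InstanceId": "i-123"}]},
    {"VolumeId": "vol-bbb", "State": "available", "Attachments": []},
]
print(unattached_volumes(sample))  # ['vol-bbb']

# With real AWS credentials (requires the boto3 package):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# print(unattached_volumes(ec2.describe_volumes()["Volumes"]))
```

Review the resulting list before deleting anything - an "available" volume may still hold data someone needs.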
3. Identify and Consolidate Idle Resources
The next step is to address idle resources, as these can cause significant waste. Imagine a computing instance with a utilization level of less than 5%, but which is billed for 100% usage. What’s the answer? It lies in consolidating compute jobs into fewer instances.
In the days of data centers, administrators often wanted to operate at low utilization so they would have headroom for a spike in traffic or a busy season. It’s difficult, expensive and inefficient to add new resources in the data center. Instead, the cloud offers autoscaling, load balancing, and on-demand capabilities that allow you to scale up your computing power at any time.
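To illustrate the consolidation arithmetic, here’s a minimal sketch: given a set of instances each running at a few percent CPU, it estimates how few instances could host the same total load at a healthier target utilization. The utilization figures and the 70% target are hypothetical.

```python
# Sketch: estimate how few instances could host the combined load of
# several mostly-idle ones. Figures are hypothetical.
import math

def instances_needed(utilizations, target_utilization=0.70):
    """Total load divided by the utilization you're willing to run at."""
    total_load = sum(utilizations)  # in fractions of one instance
    return max(1, math.ceil(total_load / target_utilization))

current = [0.05, 0.04, 0.03, 0.05, 0.02, 0.04]  # six instances, ~4% each
print(len(current), "->", instances_needed(current))  # 6 -> 1
```

In the cloud, the headroom these six idle instances used to provide is better supplied by autoscaling.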
4. Invest in AWS Reserved Instances (RIs)
If you’re a company that’s committed to using the Cloud in the long term, it’s worth investing in RIs. These offer substantial discounts in exchange for upfront payment and a time commitment. RI savings can reach up to 75%, so this is a must for cloud cost optimization. Since RIs can be purchased for one or three years, it’s important to analyze your past usage and properly prepare for the future. To purchase RIs, follow the instructions in the AWS Management Console; on Azure, see Microsoft’s Azure Reserved VM Instances (RIs) purchasing guide.
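A quick sketch of the RI arithmetic, using illustrative prices rather than current AWS rates: it compares a year of on-demand usage against the same usage at a hypothetical 60% reserved discount.

```python
# Sketch: on-demand vs. reserved cost for a steady, always-on workload.
# The $0.10/h rate and 60% discount are illustrative, not AWS pricing.
HOURS_PER_YEAR = 8760

def on_demand_cost(hourly_rate, hours=HOURS_PER_YEAR):
    return hourly_rate * hours

def savings_pct(on_demand, reserved):
    return round(100 * (on_demand - reserved) / on_demand, 1)

od = on_demand_cost(0.10)          # $0.10/h all year -> ~$876
ri = on_demand_cost(0.10 * 0.40)   # hypothetical 60% discount
print(savings_pct(od, ri))         # 60.0
```

The same comparison against your real historical usage tells you whether a one- or three-year commitment will actually pay off.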
5. Lower your data transfer costs
It’s important to check that your Object Storage and Compute Services are in the same region, because data transfer is free when this is the case. Cross-region transfers, by contrast, are charged - for example, AWS charges $0.02/GB to download a given file from another AWS region.
If you do a lot of cross-region transfers, it may be cheaper to replicate your Object Storage bucket to a different region than to download between regions each time.
Here’s an example using AWS S3:
Say 1GB of data in us-west-2 needs to be transferred 20 times to EC2 in us-east-1. With direct inter-region transfers, you will pay $0.40 for data transfer (20 * $0.02). However, if you first replicate it to a mirror S3 bucket in us-east-1, you pay just $0.02 for a single transfer plus $0.03 for storage over a month - $0.05 in total, which is 87.5% cheaper. S3 has this feature built in: it’s called cross-region replication. Alongside saving money, you’ll also get better performance.
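The arithmetic above can be sketched in a few lines, using the illustrative rates of $0.02/GB for cross-region transfer and $0.03/GB-month for storage:

```python
# Sketch: cost of reading 1 GB across regions 20 times, vs. replicating
# it once and reading locally. Rates are illustrative, not AWS pricing.
def direct_cost(gb, reads, transfer_rate=0.02):
    """Every read pays the cross-region transfer rate."""
    return gb * reads * transfer_rate

def replicated_cost(gb, transfer_rate=0.02, storage_rate=0.03, months=1):
    """One replication transfer, then local (free) reads, plus storage."""
    return gb * transfer_rate + gb * storage_rate * months

direct = direct_cost(1, 20)    # 0.40
mirrored = replicated_cost(1)  # 0.05
print(round(100 * (direct - mirrored) / direct, 1), "% cheaper")
```

Note that replication only wins once the saved transfers outweigh the extra storage - for a file read once, it would cost more.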
6. Apply Workload Right-Sizing
Right-sizing a workload involves re-assessing the true amount of storage and compute power that it needs. To determine this, you need to monitor workload demand over a period of time to determine the average and peak compute resource consumption.
Kubernetes schedules pods based on resource requests and other constraints, without impairing availability. The scheduler uses CPU and memory resource requests to place workloads on the right nodes, controlling which pod runs on which node and whether multiple pods can be scheduled together on a single node.
Every node type has its own allocatable CPU and memory capacity. Setting CPU or memory requests higher than a pod actually needs leads to underutilized pods on each node, which in turn leads to underutilized nodes.
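A small sketch of how inflated requests translate into wasted capacity: given a node’s allocatable CPU and memory and a pod’s resource requests, the limiting dimension determines how many pods the scheduler can place. The node and request figures below are hypothetical.

```python
# Sketch: how many pods fit on a node, given per-pod resource requests.
# Allocatable capacity and request sizes are hypothetical.
def pods_per_node(node_cpu_m, node_mem_mi, req_cpu_m, req_mem_mi):
    """The scarcer dimension (CPU or memory) limits the pod count."""
    return min(node_cpu_m // req_cpu_m, node_mem_mi // req_mem_mi)

node = (3920, 15000)  # e.g. allocatable: 3920 millicores, 15000 Mi

# Inflated requests: only 7 pods fit, and memory sits mostly unused.
print(pods_per_node(*node, req_cpu_m=500, req_mem_mi=512))   # 7
# Right-sized requests for the same workload: far better packing.
print(pods_per_node(*node, req_cpu_m=100, req_mem_mi=128))   # 39
```

Comparing actual usage (e.g. from metrics-server) against requests is the usual way to spot this gap.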
7. Use Spot Instances
Spot Instances work very differently from RIs, but they can also help you reduce your AWS or Azure spend. Spot Instances are available at auction and, if the price is right, can be purchased for immediate use. However, Spot capacity can be reclaimed at short notice, so these instances are best suited to particular computing cases such as batch jobs and other workloads that can tolerate being terminated quickly. Jobs like this are common in large organizations, so Spot Instances should be part of every cloud cost optimization strategy.
8. Choose The Right Worker Nodes
Each Kubernetes cluster has its own particular workload profile. Some clusters use more memory than CPU (e.g. database and caching workloads), while others use more CPU than memory (e.g. user-interactive and batch-processing workloads).
Cloud providers such as GCP and AWS offer various node types that you can choose from.
Choosing the wrong node size for your cluster can end up costing you. For instance, choosing nodes with a high CPU-to-memory ratio for workloads that use memory extensively can easily starve those workloads of memory and trigger automatic node scale-up, paying for extra CPUs you don’t need.
Calculating the right CPU-to-memory ratio isn’t easy; you will need to monitor and know your workloads well. For example, GCP offers general-purpose, compute-optimized and memory-optimized machine types with various CPU and memory counts and ratios.
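As a rough sketch of that calculation, the snippet below derives a cluster’s aggregate GiB-per-vCPU ratio from its total resource requests and matches it to the closest machine family. The family ratios are illustrative approximations, not actual GCP specifications.

```python
# Sketch: match a cluster's aggregate CPU:memory ratio to a machine
# family. Family ratios are illustrative, not real GCP specs
# (roughly: compute-optimized ~1 vCPU:2 GiB, general ~1:4, memory ~1:8).
FAMILIES = {"compute-optimized": 2, "general-purpose": 4, "memory-optimized": 8}

def gib_per_vcpu(total_vcpu_req, total_gib_req):
    """Aggregate memory-to-CPU ratio across the cluster's requests."""
    return total_gib_req / total_vcpu_req

def closest_family(ratio):
    """Pick the family whose GiB-per-vCPU ratio is nearest."""
    return min(FAMILIES, key=lambda f: abs(FAMILIES[f] - ratio))

ratio = gib_per_vcpu(total_vcpu_req=20, total_gib_req=150)  # 7.5 GiB/vCPU
print(closest_family(ratio))  # memory-optimized
```

In practice you’d feed this with monitored usage rather than declared requests, since requests are often inflated.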
9. Use tools for infrastructure cost visualization
There are a number of tools currently on the market which allow you to get an at-a-glance view of your Cloud costs.
Prometheus and Grafana help you create pretty detailed dashboards through which you can visualize your infrastructure costs. Resources can generally be classified into three groups: compute, memory and storage.
Azure Cost Management tracks resource usage and manages costs across all your clouds with a single, unified view, giving you access to rich operational and financial insights so you can make informed decisions.
AWS Compute Optimizer recommends optimal AWS Compute resources for workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.
OpsCompass is an enterprise-ready cloud security management software that drives multi-cloud operational control, visibility, and security to Microsoft Azure, AWS, and Google Cloud Platform. Its UI is designed to provide clear data visualization for resource management, remediation and more.
Take the time to optimize your Cloud infrastructure costs and reap the rewards later
As you can see above, cost-optimization can be conducted through a number of methods. Each of them requires some time and energy input - but collectively, they can lead to significant cost savings. And if you need support with this process, 10Clouds is here to help.