Cloud migration — producing a resilient, cost-effective infrastructure
15.03.2021 | 6 min read
There’s no doubt that Cloud services are taking the world of business by storm. We need only to look at the adoption statistics to see that the future lies in the Cloud.
According to IDG, 92% of organizations say their IT environment (infrastructure, applications, data, analytics, etc.) is in the cloud to some extent today, a figure expected to grow to 95% by the middle of this year.
The pandemic has certainly played a part in escalating this growth, but it's also worth noting that the move to the cloud has a long-running tailwind in terms of demand. Flexible consumption models, also known as everything as a service or XaaS, have become an increasingly important strategic shift for enterprises across all industries.
But let’s go back to the beginning and take a closer look at the benefits of moving to the Cloud and the ways in which Cloud infrastructure can be made more resilient, safe and cost-effective.
What are the benefits of moving to the Cloud?
1. Resource scalability
The Cloud allows for dynamic provisioning of resources based on demand. The result? No unexpected slowdowns, and a cost-friendly infrastructure that meets user demand.
2. High availability
Infrastructures created in the Cloud can easily be made self-healing and spread across multiple physical locations, which helps keep your applications running.
3. Enhanced security
Cloud vendors keep business data secure by handling security issues proactively and updating their security mechanisms regularly. Studies suggest that data stored in the Cloud is more secure than data stored in onsite data centers.
4. Reduced complexity
Expanding business operations to introduce new products can be difficult with an onsite IT infrastructure. Cloud migration helps businesses run apps and store data securely offsite, while keeping their infrastructures simple and maintainable.
5. Seamless Employee Collaboration
Cloud migration helps businesses to operate in distributed work environments by facilitating seamless communication and collaboration through the use of Cloud-based tools.
6. Faster application deployments
The solutions provided by leading Cloud vendors let businesses provision the servers and other computing resources required to deploy an application or service on demand.
Making your Cloud infrastructure more resilient, safe and cost-effective
So you’ve made the decision to make the move to the Cloud. But how do you make sure that the infrastructure you build serves its purpose? And what qualities do you need to watch out for? The answer lies in the key pillars outlined in the AWS Well-Architected Framework.
1. Operational excellence
“The ability to support development and run workloads effectively, gain insight into their operations, and to continuously improve supporting processes and procedures to deliver business value.”
This pillar has a lot in common with the broad definition of DevOps, which is about removing barriers between Developers (Dev), who create services for the end user, and Admins (Ops), who make space in the infrastructure for the Devs’ code.
2. Security
“The security pillar encompasses the ability to protect data, systems, and assets to take advantage of cloud technologies to improve your security.”
In short, you need to ensure the highest possible security for the system and the services connected with it, while taking the best possible advantage of the Cloud.
3. Reliability
“The ability of a workload to perform its intended function correctly and consistently when it’s expected to. This includes the ability to operate and test the workload through its total lifecycle.”
Your infrastructure must ensure maximum reliability and give you a sense of stability even in the case of serious failures. This reliability is built on scaling and self-healing.
4. Performance efficiency
“It includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.”
You need to make sure that the resources you use in the Cloud are correctly sized to meet your current demands, and that they can easily be scaled when needed.
5. Cost optimization
“It includes the ability to run systems to deliver business value at the lowest price point.”
It may seem strange to some, but cloud services cost money, and the costs can be considerable when the technologies used haven’t been thought through. It is therefore important to be aware of your expenditure, as well as of the relationship between the technologies selected, efficiency, availability, and price.
You need to make sure that the resources you use in the Cloud are selected in such a way as to avoid unnecessary costs while still meeting your demands.
Cloud cost optimization best practices
The easiest way to optimize cloud costs is to look for unused or unattached resources. Often an administrator or developer might “spin up” a temporary server to perform a function, and forget to turn it off when the job is done. In another common use case, the administrator may forget to remove storage attached to instances they terminate.
This happens frequently in IT departments across companies. The result is that an organization’s Cloud bill includes charges for resources it once purchased but is no longer using. A cloud cost optimization strategy should start by identifying such unused and unattached resources and removing them.
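As a sketch, hunting for unattached resources can be as simple as filtering an inventory export. The field names and resource IDs below are hypothetical, not a real provider API:

```python
# Minimal sketch: flag resources that are billed but attached to nothing.
# The inventory format and field names here are illustrative assumptions.

def find_unattached(resources):
    """Return resources with no owner — candidates for removal."""
    return [r for r in resources if r.get("attached_to") is None]

# A (hypothetical) inventory export:
inventory = [
    {"id": "vol-001", "type": "block-storage", "attached_to": "i-abc"},
    {"id": "vol-002", "type": "block-storage", "attached_to": None},
    {"id": "ip-001",  "type": "static-ip",     "attached_to": None},
]

for r in find_unattached(inventory):
    print(f"candidate for removal: {r['id']} ({r['type']})")
```

In practice the inventory would come from your provider's billing or asset APIs, and a human should review the candidates before anything is deleted.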
Cutting data transfer costs
At 10Clouds, we help our clients cut their data transfer costs by:
- Making sure your Object Storage and Compute Services are in the same region, because data transfer in this scenario is typically free.
- Replicating your Object Storage bucket to a different region if you do a lot of cross-region transfers, rather than downloading each object across regions every time.
- Using a content delivery network if there are a lot of downloads of assets stored in object storage (e.g. images on a consumer site).
- Using a CDN provider if you have a lot of static assets, as it can give huge savings over serving from object storage alone, since only a tiny percentage of the original requests will hit your bucket origin.
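To see why the CDN point matters, here is a rough back-of-envelope estimate. The prices, request volume, and cache hit ratio below are illustrative assumptions, not any provider's actual rates:

```python
# Back-of-envelope CDN savings estimate. All figures are assumptions
# for illustration, not real provider pricing.

def monthly_egress_cost(requests, avg_mb, price_per_gb):
    """Cost of serving `requests` downloads of `avg_mb` each."""
    return requests * avg_mb / 1024 * price_per_gb

requests = 10_000_000   # asset downloads per month (assumed)
avg_mb = 0.5            # average asset size (assumed)
origin_price = 0.09     # $/GB straight from object storage (assumed)
cdn_price = 0.04        # $/GB served from a CDN edge (assumed)
hit_ratio = 0.95        # 95% of requests never reach the bucket (assumed)

without_cdn = monthly_egress_cost(requests, avg_mb, origin_price)
with_cdn = (monthly_egress_cost(requests * hit_ratio, avg_mb, cdn_price)
            + monthly_egress_cost(requests * (1 - hit_ratio), avg_mb, origin_price))

print(f"without CDN: ${without_cdn:,.2f}  with CDN: ${with_cdn:,.2f}")
```

Even with made-up numbers, the shape of the result holds: the higher the cache hit ratio, the smaller the share of traffic billed at the origin rate.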
Taking advantage of workload right-sizing
Kubernetes schedules pods based on resource requests and other constraints, without impairing availability. The scheduler uses CPU and memory requests to place workloads on suitable nodes, controlling which pod runs on which node and whether multiple pods can be scheduled together on a single node.
Every node type has its own allocatable CPU and memory capacity. Setting unnecessarily high CPU or memory requests leads to underutilized pods on each node, which in turn means underutilized nodes.
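A toy calculation illustrates the effect. The node size and request figures are made up, and a real scheduler also considers memory, affinity rules, and taints:

```python
# Toy illustration of how inflated CPU requests waste node capacity.
# Figures are assumptions; real scheduling also weighs memory and more.
import math

NODE_CPU_M = 4000  # allocatable millicores per node (assumed node type)

def nodes_needed(pod_request_m, pod_count):
    """Nodes required when packing pods by their CPU request alone."""
    pods_per_node = NODE_CPU_M // pod_request_m
    return math.ceil(pod_count / pods_per_node)

# 40 pods that each actually use around 200m CPU:
inflated = nodes_needed(pod_request_m=1000, pod_count=40)    # over-generous
right_sized = nodes_needed(pod_request_m=250, pod_count=40)  # usage + headroom

print(f"nodes with inflated requests: {inflated}")
print(f"nodes with right-sized requests: {right_sized}")
```

Requesting 1000m for a pod that uses 200m more than triples the node count in this sketch; monitoring actual usage is what makes the right-sized request defensible.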
Choosing the right worker nodes
Every Kubernetes cluster has its own workload utilization profile. Some clusters use more memory than CPU (e.g. database and caching workloads), while others use more CPU than memory (e.g. user-interactive and batch-processing workloads).
Cloud providers such as GCP and AWS offer various node types that you can choose from. Choosing the wrong node size for your cluster can end up costing you. Calculating the right ratio of CPU-to-memory isn’t easy; you will need to monitor and know your workloads well. But the good news is that 10Clouds can help!
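One way to sanity-check the ratio is to compare observed usage against a small catalog of node shapes. The catalog and usage numbers below are hypothetical, not a real GCP or AWS machine-type list:

```python
# Sketch: pick the node type whose CPU-to-memory ratio best matches
# observed workload usage. Catalog and figures are assumptions.

node_types = {
    "general":       {"cpu": 4, "mem_gb": 16},  # 1:4 CPU:GB ratio
    "cpu-optimized": {"cpu": 8, "mem_gb": 16},  # 1:2 ratio
    "mem-optimized": {"cpu": 4, "mem_gb": 32},  # 1:8 ratio
}

def best_fit(cpu_used, mem_gb_used):
    """Node type whose GB-per-core ratio is closest to observed usage."""
    target = mem_gb_used / cpu_used
    return min(node_types,
               key=lambda n: abs(node_types[n]["mem_gb"] / node_types[n]["cpu"]
                                 - target))

# A caching workload: little CPU, lots of memory.
print(best_fit(cpu_used=2, mem_gb_used=15))
```

This only captures the ratio question; in a real cluster you would feed it usage percentiles from your monitoring stack rather than single point estimates.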
Spot instances or low priority VMs
Spot Instances are very different from Reserved Instances (RIs), but they can help you save even more on your AWS or Azure spend. They are spare capacity offered at fluctuating market prices and, if the price is right, can be purchased for immediate use.
However, opportunities to buy Spot Instances can disappear quickly, which means they are best suited to particular computing cases such as batch jobs and jobs that can be terminated and resumed quickly. Jobs like this are common in large organizations, so Spot Instances should be part of every cloud cost optimization strategy.
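A quick, assumption-laden comparison shows why Spot pricing is attractive for interruptible batch work. All prices and the rework overhead below are illustrative, not real AWS or Azure rates:

```python
# Rough on-demand vs Spot cost comparison for a batch job that tolerates
# interruptions. Every figure here is an illustrative assumption.

on_demand_hr = 0.40   # $/hour on-demand (assumed)
spot_hr = 0.12        # $/hour Spot market price (assumed)
job_hours = 100       # total compute the job needs
rework_factor = 1.10  # ~10% of work redone after interruptions (assumed)

on_demand_cost = job_hours * on_demand_hr
spot_cost = job_hours * rework_factor * spot_hr

print(f"on-demand: ${on_demand_cost:.2f}  spot: ${spot_cost:.2f}")
```

Even after paying a rework penalty for interruptions, the Spot run comes out well ahead in this sketch, which is why checkpointable batch jobs are the canonical Spot workload.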
Looking for Cloud migration solutions?
We’re here for you. Contact our DevOps team today and we’ll be happy to help you come up with a highly available, resilient and cost-effective solution for your business. Just drop us a line on email@example.com