6 new ways to reduce your AWS bill with little effort

May 19, 2019

The last time we wrote about how to save AWS costs was at the end of 2015. AWS has changed a lot since then. For example, AWS introduced AMD-powered EC2 instances that are about 10% cheaper than the comparable Intel-powered instances.

They provide the same resources (CPU, memory, network bandwidth) and run the same AMIs. The following table shows a mapping from Intel to AMD instance families. You can switch to an AMD family by stopping your EC2 instance, changing the instance type, and starting the instance again.
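Those steps can be sketched with the AWS CLI. This is a minimal sketch, assuming an m5.large instance; the instance ID is a placeholder, and you should verify that your instance family actually has an AMD variant (m5a, r5a, t3a, and so on) before switching:

```shell
#!/bin/sh
# Derive the AMD equivalent of an Intel instance type by appending "a"
# to the family name, e.g. m5.large -> m5a.large. This works for
# families with AMD variants such as m5, r5, and t3.
amd_type() {
  echo "${1%%.*}a.${1#*.}"
}

amd_type m5.large   # prints m5a.large

# Placeholder: pass your real instance ID as the first argument
# to actually perform the switch.
INSTANCE_ID="${1:-}"
if [ -n "$INSTANCE_ID" ]; then
  aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
  aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
  # The instance type can only be changed while the instance is stopped.
  aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
    --instance-type "Value=$(amd_type m5.large)"
  aws ec2 start-instances --instance-ids "$INSTANCE_ID"
fi
```

Keep in mind that stop/start moves the instance to new hardware and that instance store data does not survive a stop.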

Those steps will pay off quickly. AWS is also working on ARM-based EC2 instances. They are even cheaper (~40%), but the architecture is different, so they cannot run your Intel/AMD AMIs.

Many VPC architectures make use of private subnets (subnets without a route to the Internet via an Internet Gateway). You can even run public websites in such a setup if your load balancer runs in public subnets, as shown in the following figure. But many EC2-based architectures also make use of AWS services such as SQS, S3, DynamoDB, and so on.

To use those services, we have to make calls to the AWS API over the Internet. From private subnets, this was often done via NAT gateways (or the older NAT instances), which increase your traffic costs. For S3 and DynamoDB, you can instead create a Gateway VPC Endpoint, which is free and lets you communicate with S3 and DynamoDB from private subnets without NAT.
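As a sketch, a free Gateway endpoint for S3 can be created with the AWS CLI; the region, VPC ID, and route table ID below are placeholders, and DynamoDB follows the same pattern with the `...dynamodb` service name:

```shell
#!/bin/sh
# Gateway VPC Endpoints exist only for S3 and DynamoDB. They add a route
# to the given route tables and are free of charge.
REGION="eu-west-1"                       # placeholder region
S3_SERVICE="com.amazonaws.${REGION}.s3"  # service names are region-specific
echo "$S3_SERVICE"

# Set VPC_ID and RTB_ID to real IDs to actually create the endpoint.
if [ -n "${VPC_ID:-}" ] && [ -n "${RTB_ID:-}" ]; then
  aws ec2 create-vpc-endpoint \
    --vpc-id "$VPC_ID" \
    --vpc-endpoint-type Gateway \
    --service-name "$S3_SERVICE" \
    --route-table-ids "$RTB_ID"
fi
```

Attach the endpoint to the route tables of your private subnets; traffic to S3 then stays inside the AWS network instead of flowing through the NAT gateway.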

For some other AWS services, you can create an Interface VPC Endpoint, which is cheaper than a NAT gateway. Alternatively, run your workloads in public subnets and protect them with security groups.
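An Interface endpoint can be sketched the same way, here for SQS as an example; region, VPC, subnet, and security group IDs are placeholders:

```shell
#!/bin/sh
# Interface VPC Endpoints place an elastic network interface into your
# subnets. They are billed per hour and per GB of data processed, which
# is typically cheaper than pushing the same traffic through a NAT gateway.
REGION="eu-west-1"                        # placeholder region
SQS_SERVICE="com.amazonaws.${REGION}.sqs"
echo "$SQS_SERVICE"

# Set the IDs to real values to actually create the endpoint.
if [ -n "${VPC_ID:-}" ] && [ -n "${SUBNET_ID:-}" ] && [ -n "${SG_ID:-}" ]; then
  aws ec2 create-vpc-endpoint \
    --vpc-id "$VPC_ID" \
    --vpc-endpoint-type Interface \
    --service-name "$SQS_SERVICE" \
    --subnet-ids "$SUBNET_ID" \
    --security-group-ids "$SG_ID" \
    --private-dns-enabled
fi
```

With `--private-dns-enabled`, the standard SQS hostname resolves to the endpoint's private IP, so your applications need no configuration changes.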

Source: cloudonaut.io

