
AWS Cost Analysis Comparing Lambda, EC2, Fargate


Choosing the appropriate compute is challenging when AWS offers so many good options.  Our clients are often excited about paying only for milliseconds of usage rather than paying for idle cloud resources.  Let's explore what this looks like with Lambda's pricing model and compare it to other popular compute choices.  We'll see examples of how well Lambda saves money when the workload is mostly idle, and then discover whether Lambda can still save money if the workload changes to be always active. 

 

Use cases of Lambda / EC2 / Fargate 

Before comparing cost, we should note that we aren’t comparing apples to apples.   

Lambda – Function As A Service 

  • Deploy a function on a supported runtime (Node.js, Python, Ruby, Java, Go, .NET).  You can use the CLI to package your compiled codebase into a .zip for deployment.  There is no need to manage containers or operating systems. 
  • Deploy an executable built with any language (e.g., C++) by using a custom runtime. 
  • Deploy a Docker container. 

EC2 – Virtual Machine Service 

  • You get a machine in the cloud on which you can install any operating system and run anything you want without limitations. 
  • There are many types of machines with various pricing models. 

Fargate – Serverless compute for Containers 

  • Deploy a Docker container.  The container can run once on demand, on a cron schedule, or continuously. 
  • Fargate is a compute engine choice for EKS or ECS, both of which are container orchestration services with similar capabilities.  Fargate can also be used with AWS Batch. 

 

Scenario 1.  What's the cost of an always-running Lambda? 

In more exact terms: what is the cost of a Lambda running for 1 second, every second of every day, for 30 days, with a concurrency of 1?  A concurrency of 1 means that no requests overlap, so no more than one Lambda runs at the same time. 

The documentation says that vCPUs are allocated in proportion to the selected memory.  Using two guideposts from the documentation (1769 MB ≈ 1 vCPU, and the 10,240 MB maximum ≈ 6 vCPUs), I filled out the table below, but you can extend it even further. 

Memory      vCPUs   30 Day Cost (USD)
512 MB      0.29    $22.12
1024 MB     0.58    $43.72
1769 MB     1       $75.15
3538 MB     2       $149.78
...         ...     ...
10240 MB    6       $432.52

Let's say your Lambda only runs half the day: divide the cost by 2.  What if the scenario changed so that 2 requests, each running for 1 second, overlapped every second of the day?  Multiply the cost by 2.  That's the double-edged sword of Lambda.  Ten concurrent requests per second is expensive with a 1024MB Lambda, at roughly $430/mo.  On the other hand, if it only runs a total of 5 minutes every day, then it's cheap at $4.55/mo. 
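
If you want to reproduce or extend these numbers, here is a minimal sketch of the math in Python, assuming the us-east-1 x86 rates at the time of writing (about $0.0000166667 per GB-second of duration plus $0.20 per million requests):

    PRICE_PER_GB_SECOND = 0.0000166667      # assumed us-east-1 x86 duration rate
    PRICE_PER_MILLION_REQUESTS = 0.20       # assumed request charge

    def lambda_30_day_cost(memory_mb, active_fraction=1.0, concurrency=1):
        """Cost of `concurrency` Lambdas running back-to-back 1-second invocations
        for `active_fraction` of every day over 30 days."""
        billed_seconds = 30 * 24 * 60 * 60 * active_fraction * concurrency
        gb_seconds = (memory_mb / 1024) * billed_seconds
        requests = billed_seconds  # one 1-second invocation per billed second
        return gb_seconds * PRICE_PER_GB_SECOND + (requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS

    print(round(lambda_30_day_cost(1769), 2))                        # ~75.15, matches the table above
    print(round(lambda_30_day_cost(1024, concurrency=10), 2))        # ~437, the "ten requests per second" case
    print(round(lambda_30_day_cost(1024, active_fraction=0.5), 2))   # ~21.86, half-day workload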

 

Scenario 2. What's the cost of a similar-performance EC2 instance? 

Looking at just the general-purpose instances, there are three types: Burst, Burst Unlimited, and Fixed Performance.  Fixed Performance instances are easier to compare to Lambda since Lambda is also fixed performance.  Burst types have varied performance and price, but I can show two extremes to give a lower and upper bound.  Depending on how heavy the workload is on the CPU, burst instances may give you cost savings or become more expensive than fixed performance. 

For fun, I've added the M6g instances to the list because they use custom ARM CPUs built by AWS specifically for the best performance to price.  Lambda and Fargate don't yet have these CPUs. 

Instance      Memory      vCPU               30 Day Cost (USD)
T3.nano       512 MB      0.10 (b) - 2 (t)   ~$3.75 - $75.75
T3.micro      1024 MB     0.20 (b) - 2 (t)   ~$7.50 - $79.50
T3.small      2048 MB     0.40 (b) - 2 (t)   ~$15.00 - $87.00
T3.medium     4096 MB     0.40 (b) - 2 (t)   ~$30.00 - $102.00
T3.large      8192 MB     0.60 (b) - 2 (t)   ~$60.10 - $132.10
T3.xlarge     16384 MB    1.60 (b) - 2 (t)   ~$120.25 - $192.25
M5.large      8192 MB     2 (t)              ~$69.10
M5.xlarge     16384 MB    4 (t)              ~$138.25
M6g.medium    4096 MB     1 (c)              ~$27.75
M6g.large     8192 MB     2 (c)              ~$55.45
M6g.xlarge    16384 MB    4 (c)              ~$110.90
* T3 instances are burst instances, M5 & M6 are fixed performance.
* (b) – Burst baseline performance with depleted CPU credits
* (t) – CPU is provided as a thread of a core
* (c) – CPU is provided as a core
* Above prices are on-demand costs.  Spot instances offer roughly 70% savings, reserved instances average 30% to 60% savings, and a savings plan offers roughly 27% savings. 
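
The 30-day figures above follow from the hourly on-demand rate multiplied out over the month, with the burst upper bound adding surplus CPU credits.  A rough sketch, assuming us-east-1 hourly rates from the time of writing and the ~$0.05 per surplus vCPU-hour charge for T3 Unlimited:

    HOURS_PER_30_DAYS = 30 * 24

    def fixed_30_day_cost(hourly_rate):
        # Fixed-performance (or burst baseline) cost: the instance simply runs all month.
        return hourly_rate * HOURS_PER_30_DAYS

    def burst_upper_bound(hourly_rate, vcpus, surplus_per_vcpu_hour=0.05):
        # Worst case for a T3 in Unlimited mode: 100% CPU on every vCPU all month,
        # paying for surplus credits on top of the instance rate (assumed charge).
        return (hourly_rate + vcpus * surplus_per_vcpu_hour) * HOURS_PER_30_DAYS

    print(round(fixed_30_day_cost(0.096), 2))        # m5.large  -> ~69.12
    print(round(fixed_30_day_cost(0.077), 2))        # m6g.large -> ~55.44
    print(round(fixed_30_day_cost(0.0052), 2))       # t3.nano baseline -> ~3.74
    print(round(burst_upper_bound(0.0052, 2), 2))    # t3.nano fully bursting -> ~75.74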

 

Scenario 3. What's the cost of a similar-performance Fargate task? 

Fargate also has a few different modes to consider.  You can run a Fargate task as always running, or you can run a one-off task from a cron schedule or a manual RunTask API call.  For this article, I've decided to only consider Fargate in the always-running mode because the one-off task's minimum charge per run is 1 minute.  Even if your work item only takes 1 second to complete, the billing is rounded up to 1 minute.  That can be great for long-running work items, but I felt it is a more specialized scenario than what we are trying to compare. 

Fargate's advantage is the flexibility to configure memory separately from CPU, so you can more often choose right-sized resources for your workload without paying for overprovisioned resources.  Its pricing works out to flat 30-day costs of roughly $3.20 per GB of memory and $29.15 per vCPU. 

Memory      vCPUs   30 Day Cost (USD)
512 MB      0.25    ~$8.90
1024 MB     0.5     ~$17.75
4096 MB     1       ~$41.95
8192 MB     2       ~$83.90
16384 MB    4       ~$167.80
* Above prices are on-demand costs.  Fargate Spot offers roughly 30% savings, and a savings plan offers roughly 20% savings. 
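
Because the pricing is flat per GB and per vCPU, the table is easy to reproduce.  A quick sketch using the 30-day rates quoted above:

    PRICE_PER_GB_30_DAYS = 3.20      # memory rate from above
    PRICE_PER_VCPU_30_DAYS = 29.15   # vCPU rate from above

    def fargate_30_day_cost(memory_mb, vcpus):
        return (memory_mb / 1024) * PRICE_PER_GB_30_DAYS + vcpus * PRICE_PER_VCPU_30_DAYS

    for memory_mb, vcpus in [(512, 0.25), (1024, 0.5), (4096, 1), (8192, 2), (16384, 4)]:
        print(memory_mb, vcpus, round(fargate_30_day_cost(memory_mb, vcpus), 2))
    # -> 8.89, 17.78, 41.95, 83.90, 167.80 (matching the table above)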

 

Scenario 4. People / Opportunity Cost 

These compute choices may impact your teams in a variety of ways.  Have the team understand the amount of work and risk involved in the initial setup, as well as any periodic maintenance work.  Strike a balance between optimizing cloud costs and having your team focus on growing the business with new feature work.  Depending on your organization's expertise and established infrastructure, the choice that is most cost effective overall may come with a more expensive cloud bill. 

A task list for your organization to consider when estimating the impact of infrastructure choices: 

  • Setting up multiple environments (e.g., Test, Dev, Prod) 
  • CI/CD – How to deploy a tested change through environments. 
  • Scale Orchestration – How and when to scale. 
  • Speed of Scaling – How responsive a service is to scaling events. 
  • Security Maintenance – Where vulnerabilities can exist, and how to deploy a fix. 
  • Coupling of Infrastructure Choices – How much infrastructure choices should impact application code. 
  • Right Sizing – Whether infrastructure choices are based on current workloads or should be future-proofed against an upcoming business goal.  Create a dashboard and alarm to know when your choices might be outdated (a minimal alarm sketch follows this list). 
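
For the right-sizing item, a low-CPU alarm is a simple starting point.  A minimal boto3 sketch, assuming an EC2 workload and a placeholder instance ID (wire the alarm to the notification topic of your choice):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alert when average CPU stays under 20% for an hour -- a hint that the
    # instance (placeholder ID below) may be overprovisioned.
    cloudwatch.put_metric_alarm(
        AlarmName="right-sizing-low-cpu",
        AlarmDescription="CPU is consistently low; consider a smaller instance size.",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
        Statistic="Average",
        Period=300,
        EvaluationPeriods=12,
        Threshold=20.0,
        ComparisonOperator="LessThanThreshold",
    )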

 

What does it all mean? 

In the scenarios above, I've tried to be as factual as possible without injecting opinions.  Below I start to form opinions based on that evidence, but these generalizations might not line up with your workload or your organization.  It's all about finding the right size and balance for your business while keeping choices flexible for future business growth goals. 

Our clients don't want to pay for idle cloud resources, so let's focus on two scenarios: one where the workload is mostly idle, and another where the workload is active all the time, always processing at least one request.  Keep in mind that Lambda, EC2, and Fargate all support different sizes of memory and vCPUs.  Below I show a few examples where each service has the opportunity to be a perfect fit for the memory and CPU constraint. 

For EC2, I recommend profiling your workload to discover the best fit of performance and cost.  In the table below, I've assumed that M5.large was the result of that investigation, to show an example of how you can evaluate your own situation. 

 

Scenario (workload minimum)       Lambda                    EC2 (M5.large) – Always Running   Fargate – Always Running
Always Active: 8GB, 2 vCPU        cpu+ / $346               right sized / $69.10              right sized / $83.90
Always Active: 2GB, 1 vCPU        cpu+ / $86.92             mem+, cpu+ / $69.10               right sized / $35.55
Always Active: 1769MB, 1 vCPU     right sized / $75.15      mem+, cpu+ / $69.10               mem+ / $35.55
1/2 Active: 8GB, 2 vCPU           cpu+ / $173.06            right sized / $69.10              right sized / $83.90
1/2 Active: 2GB, 1 vCPU           cpu+ / $43.46             mem+, cpu+ / $69.10               right sized / $35.55
1/2 Active: 1769MB, 1 vCPU        right sized / $37.57      mem+, cpu+ / $69.10               mem+ / $35.55
1/4 Active: 8GB, 2 vCPU           cpu+ / $86.50             right sized / $69.10              right sized / $83.90
1/4 Active: 2GB, 1 vCPU           cpu+ / $21.73             mem+, cpu+ / $69.10               right sized / $35.55
1/4 Active: 1769MB, 1 vCPU        right sized / $18.78      mem+, cpu+ / $69.10               mem+ / $35.55

* Prices are for a 30 day window.
* cpu+  (cpu is overprovisioned)
* mem+ (memory is overprovisioned)
* right sized (CPU and memory are an exact fit)
 

A few highlights emerge from the table. 

  1. When right sized to the workload's constraints, EC2 has the best cost. 
  2. When the constraints are smaller than the smallest suitable EC2 instance, Fargate's right-sizing flexibility provides a better cost. 
  3. Lambda starts saving money over EC2 once it runs half the time or less. 
  4. Lambda saves money over Fargate once it runs a quarter of the time or less (see the break-even sketch after this list). 
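
To find where those break-even points fall for your own workload, plug the activity fraction into the same Lambda math from earlier and compare it against the always-running EC2 and Fargate figures.  A rough sketch using the 2GB, 1 vCPU row and the same assumed rates:

    def lambda_cost(memory_mb, active_fraction):
        # Same assumed rates as the earlier Lambda sketch.
        seconds = 30 * 24 * 60 * 60 * active_fraction
        return (memory_mb / 1024) * seconds * 0.0000166667 + (seconds / 1_000_000) * 0.20

    EC2_M5_LARGE = 69.10       # always running, from the EC2 table
    FARGATE_2GB_1VCPU = 35.55  # always running, from the Fargate table

    for fraction in (1.0, 0.5, 0.25):
        print(fraction, round(lambda_cost(2048, fraction), 2), EC2_M5_LARGE, FARGATE_2GB_1VCPU)
    # Lambda at 2048MB: ~86.92 / ~43.46 / ~21.73 -- it undercuts the M5.large around
    # half-time activity and undercuts Fargate around quarter-time activity.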

  

Lambda 

Great for 

  • Workloads with long idle periods 
  • Minimizing your opportunity costs 
  • Isolating your security maintenance to only application code 
  • Scaling fast 

Lambda's pricing model means you only pay for the time your Lambda is running, so your workload costs nothing when it's idle.  In theory the idle periods provide cost savings, but measure the work carefully, because concurrent Lambdas can easily become expensive.  To mitigate this risk, consider placing an API Gateway in front of the Lambda to enforce rate limiting.  Also note that choosing the cheapest CPU is non-trivial: the faster the CPU, the shorter the duration you'll be billed for.  You might be surprised to find a higher CPU choice being cheaper. 
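
Beyond API Gateway throttling, another guardrail worth knowing about is Lambda's reserved concurrency, which hard-caps how many copies can run at once (and therefore the worst-case bill).  A minimal boto3 sketch with a hypothetical function name:

    import boto3

    lambda_client = boto3.client("lambda")

    # Cap the function at 10 concurrent executions so a traffic spike can't
    # multiply the bill beyond what you've budgeted (function name is a placeholder).
    lambda_client.put_function_concurrency(
        FunctionName="my-example-function",
        ReservedConcurrentExecutions=10,
    )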

Lambda is one of the fastest-scaling technologies in the cloud.  If you're in the business of scaling very fast, maybe you don't mind the cost that comes with it, especially if your revenue grows faster because of it.  Note that you'll want to optimize for cold starts because you will be billed for them.  It can be hard to optimize cold starts enough that a front-end user won't feel a sluggish backend, so I would not recommend building something end-user facing with Lambda, such as an API that blocks the UI. 

Organizations will find Lambda easy to use and maintain.  Most of the work is setting up the CI/CD pipeline.  There is no operating system to maintain, so security maintenance is limited to your application code or Docker container. 

My last bit of advice is to communicate to application engineers the risk and likelihood of this Lambda infrastructure choice changing.  Consider whether, in a year or two when traffic has changed, it would still make sense to stay on Lambda.  If application engineers expect a change, they can organize code in a more flexible way, as sketched below. 
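
One way to keep that flexibility is to keep business logic out of the handler itself, so a later move to Fargate (or anywhere else) only replaces a thin adapter.  An illustrative sketch with hypothetical names:

    # business_logic.py -- no AWS-specific imports; reusable from any compute choice
    def process_order(order: dict) -> dict:
        # ... real application work would go here ...
        return {"status": "processed", "order_id": order.get("id")}

    # lambda_handler.py -- thin Lambda adapter around the same logic
    def handler(event, context):
        return process_order(event)

    # A Fargate version would import process_order and wire it to a long-running
    # web server instead; the business logic stays untouched.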

 

EC2 

Great for 

  • Workloads that have little to no idle time. 
  • Workloads that would benefit from specialized CPUs or GPUs not yet available for Fargate / Lambda. 

EC2 is great for workloads that need to be processing data all the time.  If your EC2 instance only does a nightly job, you're still paying all day for it to be available.  There are ways to automatically turn your instance off and on, but you'll then be paying the people and opportunity cost for your team to devise that solution.  Additionally, the time it takes an EC2 instance to start is much longer than the start time of a Lambda. 
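
For reference, the "turn it off at night" idea usually boils down to a small scheduled job (cron, or an EventBridge-triggered function) calling the stop and start APIs.  A minimal boto3 sketch with a placeholder instance ID:

    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_IDS = ["i-0123456789abcdef0"]  # hypothetical instance

    def stop_for_the_night():
        ec2.stop_instances(InstanceIds=INSTANCE_IDS)

    def start_for_the_day():
        ec2.start_instances(InstanceIds=INSTANCE_IDS)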

EC2 sees a ton of innovation every year, with new instance types providing a better performance to price ratio across general compute and specialized compute scenarios that other serverless offerings typically don't have access to.  The drawback is that the average organization will find EC2 harder to use and maintain, incurring heavy people and opportunity costs.  Go through the checklist above with your team to determine how large the impact is.  Most importantly, EC2 has an operating system that your team is responsible for patching and upgrading as new vulnerabilities are discovered. 

 

Fargate  

Great for 

  • Workloads that have little to no idle time  
  • Minimizing your opportunity costs 
  • Isolating your security maintenance to a docker image 
  • Scaling fast 

Fargate has similar pricing considerations to EC2 in that you pay for idle resources.  With Fargate, you can choose 0.25, 0.5, 1, 2, or 4 vCPUs, while EC2 doesn't offer less than 1 vCPU.  If your situation doesn't need heavy CPUs, you can realize more cost savings than EC2 by choosing these lower vCPU options.  Fargate doesn't support many CPU variants because the underlying virtualization technology (Firecracker) currently only supports Intel processors, but that's changing soon.  Firecracker will support more types of CPUs, such as the Graviton2 processor inside the M6g instances, thereby increasing performance and reducing cost. 

Fargate can scale faster than EC2, thereby providing higher availability.  It's also faster at scaling down when overprovisioned.  It only needs to spin up containers rather than whole operating systems, so it requires fewer resources to start.  A whole operating system may also allocate resources your application doesn't need, such as video memory.  I haven't seen benchmarks comparing this to Lambda, but I imagine Lambda scales faster. 

Organizations will find Fargate easy to use and maintain, like Lambda.  There is no operating system to maintain, so security maintenance is limited to your Docker container.  When ramping up a team on the Fargate tech stack, I've found that Fargate is a little more work than Lambda because there are more knobs to adjust.  However, if you take an infrastructure-as-code approach, the default patterns that come with the CDK are great, and you can reduce the complexity to nearly a single line of code with the flexibility to customize those defaults. 
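
As an illustration of how little code those CDK defaults require, here is a rough CDK (Python) sketch of a load-balanced Fargate service; the construct and sample image are standard, but treat the sizing values as placeholders for your own workload:

    from aws_cdk import App, Stack
    from aws_cdk import aws_ecs as ecs, aws_ecs_patterns as ecs_patterns

    class FargateServiceStack(Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            # One high-level construct creates the cluster, task definition,
            # service, and load balancer with sensible defaults.
            ecs_patterns.ApplicationLoadBalancedFargateService(
                self, "Service",
                cpu=512,                # 0.5 vCPU (placeholder sizing)
                memory_limit_mib=1024,  # 1 GB (placeholder sizing)
                task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
                ),
            )

    app = App()
    FargateServiceStack(app, "FargateService")
    app.synth()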

 

Conclusion 

Many of our businesses are driven by minimizing opportunity costs, minimizing risks, and maximizing growth potential.  Both Lambda and Fargate fit these goals well, but the cloud costs can be significantly different depending on the workload.  Consider using Lambda when your workload has a lot of idle time, but have a plan for when the workload changes.  Once the workload is consistently high, consider switching to Fargate.  It's easy to switch later if you are using infrastructure as code, as I've described in a previous blog post of mine; it's just a couple lines of code to change from Lambda to Fargate. 

For general-purpose compute with always-running workloads, EC2 is the cheapest on the cloud bill.  However, this may shift costs into your organization in other, more significant ways.  EC2 could still be the right choice if the workload would greatly benefit from its many specialized instance types and there isn't much value in scaling quickly.  It carries higher opportunity costs, more security responsibility, and scales more slowly than Lambda or Fargate, which can mean more availability issues. 

Perficient helps its clients find the right size and balance for their organizations while keeping choices flexible for future business growth goals.  My team frequently works within product teams to help them build new cloud-native applications while showing the benefits and risks of serverless.  Contact us to see how we can help your organization too. 

Quincy Mitchell

Quincy Mitchell is a software engineer in the Custom & Product Development team for Perficient. He loves building tools that help engineers do more than they can imagine.
