
Our goal is to answer questions such as when it is more appropriate to use AWS Lambda instead of AWS EC2, and especially which parameters affect that comparison the most.

Introduction

In this work we examine the pricing of the serverless service from Amazon Web Services (AWS), known as AWS Lambda. We compare its pricing strategy with that of the AWS Elastic Compute Cloud (EC2) service, examining in detail which assumptions need to be made in order to compare both services.

We find that, far from getting a black-or-white answer to these questions, a variety of service-specific factors can determine the best choice, and we discuss and provide specific conclusions for a number of relevant case studies.

Our approach is based on theoretical models and inspired by simulations of typical real-world services, which give us insight into which variables are important to model at the planning stage when using these technologies. Indeed, these simulators, which we have open sourced, can be extremely useful during the planning stage of a software service, when it is key to determine which technologies to leverage in order to save time and money.
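As a rough illustration of the kind of model these simulators implement, the sketch below compares the monthly cost of serving a steady request rate with Lambda against keeping a fleet of EC2 instances running. All prices and workload parameters are illustrative assumptions, not current AWS list prices, and the real simulators account for many more factors.

```python
# Toy cost model -- all prices and workload parameters below are illustrative
# assumptions, not current AWS list prices; adjust them to your own case.
LAMBDA_PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation (assumed)
LAMBDA_PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second (assumed)
EC2_PRICE_PER_INSTANCE_HOUR = 0.10            # USD, hypothetical instance type

HOURS_PER_MONTH = 24 * 30


def lambda_monthly_cost(requests_per_second, memory_gb, duration_s):
    """Monthly cost of serving the whole load with Lambda functions."""
    requests = requests_per_second * 3600 * HOURS_PER_MONTH
    compute = requests * memory_gb * duration_s * LAMBDA_PRICE_PER_GB_SECOND
    return compute + requests * LAMBDA_PRICE_PER_REQUEST


def ec2_monthly_cost(instances):
    """Monthly cost of keeping a fixed fleet of EC2 instances running."""
    return instances * EC2_PRICE_PER_INSTANCE_HOUR * HOURS_PER_MONTH


# Example: 10 req/s handled by 128 MB functions running 200 ms each,
# versus a single always-on instance.
print(round(lambda_monthly_cost(10, memory_gb=0.128, duration_s=0.2), 2))  # ~16.24
print(round(ec2_monthly_cost(1), 2))                                       # 72.0
```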

We will discover that a deep knowledge of how your application works is key to optimizing its costs and to ensuring that new features do not penalize them. We will also analyze the contribution of what we call the throughput factor, which introduces a comparison method between both architectures.

Essentially, a serverless (or Function-as-a-Service, FaaS) architecture provides compute power to run your application code without the need to provision or manage servers. The cloud provider executes the code (called a function) only when needed, and scales it automatically to meet demand.
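To make the notion of a function concrete, here is a minimal sketch of a Python handler as it might be deployed on AWS Lambda; the event fields and response shape are illustrative assumptions, not a fixed schema.

```python
import json


def handler(event, context):
    """Entry point the provider invokes on every event (e.g. an HTTP request).

    `event` carries the request payload and `context` exposes runtime
    metadata; the fields read below are purely illustrative.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```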

In previous posts, we discussed the serverless architecture, as well as the main public cloud providers that offer it. Additionally, we analyzed how to deploy it in-house with Fission and Red Hat’s OpenShift.

Functions deployed in this service (AWS Lambda) benefit from integration with other services from the cloud provider. However, this increases vendor lock-in, potentially making your app unable to be deployed in other clouds. It is also worth mentioning that we can only execute code written in the languages supported by the provider.

It is clear, just by reading the service descriptions, that the cloud provider is targeting different use cases with each platform: if your workload cannot be implemented in any of the languages supported by AWS Lambda, or it is a third-party binary, or it depends heavily on local storage, then a cloud instance (virtual machine) offering from public cloud providers like AWS, Google, IBM or Microsoft seems the right choice. Nevertheless, there are still many applications and workloads that can run on AWS Lambda and exploit its benefits; we will explore some of them in this work.

In this study, we focus on AWS Lambda, as it is one of the most widely used serverless platforms and supports multiple general-purpose languages, such as Python, Java, Go, C# and Node.js.

Most papers we have found on the subject tend to focus solely on the benefits of not having to reserve computing capacity in advance, thus saving money. In our opinion, a more in-depth analysis, considering realistic workloads, would allow us to better assess the impact of this architecture on cost.

Business assumptions

In order to perform simulations and study in depth the economic impact of the serverless architecture (AWS Lambda) compared with virtual machine instances (AWS EC2), we need to make some assumptions regarding how a business might behave or be modeled:

  1. We assume that the application code works seamlessly in both the EC2 and Lambda services. This is needed for the sake of the comparison between the two services. In most cases, legacy code needs to be transformed before it can run on a serverless platform. Monolithic apps, or software that needs access to low-level layers of the operating system, are bad candidates to run on a serverless architecture without heavy refactoring. Also, the serverless cloud provider can restrict access to some packages considered potentially dangerous, limiting the compatibility of serverless functions with code intended for regular cloud instances.
  2. We assume that our application is able to auto-scale the cloud instances (VMs) in service, increasing their number as requests grow beyond what a single instance can process (a minimal sketch of this scaling rule follows this list).
  3. Notably, we do not account for savings in IaaS-related administration costs. This could well be the tipping point when the costs of both services are within the same order of magnitude. Nevertheless, it is almost impossible to establish an assumption here, as labor costs vary widely from one country to another.
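To make assumption 2 concrete, the sketch below shows the scaling rule the comparison relies on: the number of instances in service grows with the request rate, where the per-instance throughput limit is an assumed parameter that would have to be measured for the real application.

```python
import math


def instances_needed(requests_per_second, max_rps_per_instance=200, min_instances=1):
    """EC2 instances required to serve a given load.

    `max_rps_per_instance` is an assumed throughput limit for one VM; in
    practice it must be measured (or load-tested) for the actual application.
    """
    return max(min_instances, math.ceil(requests_per_second / max_rps_per_instance))


# Example: at 1,500 requests/s and 200 req/s per instance, 8 instances are needed.
assert instances_needed(1500) == 8
```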