
My Serverless Adventure

October 10, 2018 by Mateo Karadza

After several React Native posts, it's time for me to reflect on my experience building a serverless application with the FaaS service model, and on how it differs from building standard APIs on IaaS/PaaS with Express/Koa.

You won't need any special tooling if you're building a single Lambda function. However, if you decide to build a real application, you're going to have an easier time with tools for managing your code, packaging, and deployment. Our tool of choice was the Serverless framework.

FaaS service model

Before going into how we use serverless as a technology, let's first see what FaaS stands for. FaaS (Function as a Service) is a category of cloud computing service models (along with IaaS, PaaS, SaaS, etc.) that allows us to run code without having to worry about the infrastructure, the hardware requirements of a server, or anything else outside the development scope. When it comes to billing, you only pay for the compute time your function actually uses. This lets us create projects without worrying about the cost of idle servers that we would have with other service models.

AWS Lambda

AWS Lambda is Amazon's FaaS product. It was introduced in 2014, and today it holds around two-thirds of the FaaS market according to an article from July. With its early adoption of the concept, a free monthly tier (worth around 6-7 dollars) and "unbeatable" pricing, AWS will continue to stay strong in the FaaS market. The free tier gives you 1 million requests (invocations) and 400,000 GB-seconds of compute time per month. For example, if you run a Lambda function with 128 MB of memory, that works out to 400,000 GB-s ÷ 0.125 GB = 3.2 million seconds, or roughly 37 days of compute time for free. That is more than enough to try it out and maybe even run a project or two for free. Once you exhaust the free tier, you'll pay $0.00001667 for every GB-second.

Here's an example of an AWS Lambda function in Node.js:

module.exports.hello = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Hello world!',
      input: event,
    }),
  }

  callback(null, response)
}

The structure of the event object depends on the event source. In the case of an HTTP request triggered by API Gateway, it'll contain the basic request headers, along with API Gateway specifics such as the path and method of the HTTP request.

The context object contains Lambda's environment values. The crucial one is the context.getRemainingTimeInMillis() function, which tells us how many milliseconds are left before we hit the timeout, at which point our function stops executing.

The callback is a function that accepts an error and a success parameter, following JavaScript's popular error-first callback pattern. If we use a newer version of Node (such as 8.10), we can write async functions, which don't use the callback pattern; instead, they rely on the return statement for successful executions and on throw to indicate a failed one.
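For comparison, here's a minimal sketch of the same handler written as an async function on the Node.js 8.10 runtime:

module.exports.hello = async (event, context) => {
  // How much time is left before this invocation times out.
  console.log(`Remaining time: ${context.getRemainingTimeInMillis()} ms`)

  // Returning replaces callback(null, response);
  // throwing an error replaces callback(error).
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Hello world!',
      input: event,
    }),
  }
}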

Timeout

AWS Lambda has a 15-minute limit on execution duration per request. That's the highest value for the timeout at this moment, and as developers we can assign lower timeout values to reduce the potential cost of a function that fails for some reason and keeps running until it times out. If you have a function that will run longer than 15 minutes (or anywhere near it), you should think about splitting it into several functions, each taking care of a specific step of the process. If you cannot split your function into smaller ones, look for a vendor that allows longer execution times, or simply choose a different service model for that functionality. Here is a list of AWS Lambda limits.

Update: the maximum execution duration used to be 5 minutes but was recently bumped to 15 minutes. Thanks, Reddit.
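In the Serverless framework, the timeout itself is just a configuration value in seconds. A minimal sketch, with an illustrative function name:

provider:
  name: aws
  runtime: nodejs8.10
  timeout: 30 # global default for every function, in seconds

functions:
  optimizeImage: # illustrative name
    handler: handlers/images/optimize.handler
    timeout: 300 # this one may run for up to 5 minutes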

Keeping connections alive

When using functions to execute your code, make sure you connect to other services (Redis, a database, etc.) sparingly. Some services have connection limits: a database may allow only 10 connections (a limit that can usually be raised), and your serverless application can end up unable to connect to the service at all. Even if your serverless application is a single function that connects to a database and renders an HTML page, that doesn't mean it will use only one database connection. A single Lambda function instance handles one event at a time; while it's busy executing code, it cannot accept other calls, so a new instance is created, which increases the connection count towards the database. So when writing your code, try to utilize a cache service such as Redis, since it allows a higher number of connections and is much faster than querying the database, and when you are done with a connection, close it as soon as possible: no connections should stay alive.
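As a minimal sketch of that idea (assuming the pg client and an illustrative query), you can open the connection inside the handler and close it before returning:

const { Client } = require('pg')

module.exports.handler = async (event) => {
  // One short-lived connection per invocation.
  const client = new Client() // connection details come from PG* environment variables
  await client.connect()

  try {
    const result = await client.query('SELECT id, name FROM users LIMIT 10') // illustrative query
    return { statusCode: 200, body: JSON.stringify(result.rows) }
  } finally {
    // Close before returning so no connection stays alive.
    await client.end()
  }
}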

Performance

Your code is executed in a Linux environment managed by AWS, and you can affect performance in two ways: the amount of memory assigned to the function, and the code of the function itself. You can assign from 128 MB up to 3 GB of memory per function. Besides raising the memory ceiling (if you go over the assigned memory, your function is terminated), more memory also means more computing power (CPU), so your code executes faster. If your function is slower than it should be, try increasing the assigned memory and look for the right balance between speed and the money you pay for it. As for the code itself, as with any other software, good, performant code shouldn't slow you down, but the size of the deployed package can, especially in runtimes whose startup performance is affected by package size. To avoid that, you can package your functions individually and, more importantly, use plugins that help with optimizations. Luckily, Node.js as a runtime doesn't suffer much from package size, but optimizations are always welcome.
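With the Serverless framework, individual packaging is a couple of lines of configuration in serverless.yml, something like:

package:
  individually: true # zip and deploy each function on its own
  exclude:
    - node_modules/aws-sdk/** # the SDK is already present in the Lambda environment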

Scaling

Depending on the use case for your function, AWS Lambdas are ideal because they can scale practically indefinitely (or until you hit a limit; check the link in the Timeout section), whether the function is doing image optimization or consuming messages from a queue. As I mentioned in the Keeping connections alive part of this article, a single function instance can only consume one event at a time.

Let’s take image optimization as an example and see how it’ll scale.

Example 1. A user uploads an image and your optimization function gets triggered for the first time. AWS will create a new function instance, download the function's .zip package (your code) from S3 and perform the operations necessary for your function to start executing. This is called a cold start, a term for the period during which an inactive function has to be provisioned before it can react to the event that triggered it. It can last from 0.5 s up to 5 s, depending on the infrastructure load, the size of the packaged code, the runtime, etc., and it's not counted towards your compute time. Once the function finishes optimizing the image and returns a success response, it remains warm for some time; depending on your region and the load on the infrastructure at that moment, that can be from 5 to 15 minutes according to other people's experience. If the function gets triggered again while it's warm, it starts executing immediately, with no cold start.

Example 2. The function from our previous example is still warm, and two users upload an image at the same time. We get two event triggers: one is assigned to the warm function from the previous example, while the second has to be assigned to a new function instance that AWS creates in order to handle the event. Since that instance wasn't warm, it goes through a cold start, and the second image won't be optimized immediately.

While we don't have to do anything to make our functions scale, we should analyze their use case, estimate how many concurrent calls a single function can receive at peak, and check whether that impacts performance. If our function performs image optimization, we shouldn't worry about cold starts since they won't affect our users. However, if we are using a Lambda function to serve a web page and we get a huge spike in requests at some point in the day, we should prepare by warming up a few, tens, or hundreds of functions to avoid cold starts that would hurt performance; every (milli)second counts when working towards good UX. How do you know how many functions you need? You'll have to add some analytics and monitoring to your application to find out. Dashbird.io seems like a good starting point, since it offers monitoring, error logging, debugging and other crucial functionality, and it has a free monthly plan, so you can try it out without any obligations.

You can keep your functions warm by using plugins, or by scheduling CloudWatch events, which opens the door to more advanced solutions: invoke a function that reads the previous days' analytics, performs some calculations and, depending on current usage, warms up X functions instead of blindly invoking Y of them.
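A minimal sketch of such a warm-up function, assuming the aws-sdk Lambda client and hypothetical deployed function names; a scheduled CloudWatch event would invoke it every few minutes:

const AWS = require('aws-sdk')

const lambda = new AWS.Lambda()

// Triggered on a schedule; pings the listed functions so they stay warm.
module.exports.warmUp = async () => {
  // Hypothetical deployed function names.
  const targets = ['my-service-dev-renderPage', 'my-service-dev-getUser']

  await Promise.all(
    targets.map((name) =>
      lambda
        .invoke({
          FunctionName: name,
          InvocationType: 'Event', // asynchronous, fire-and-forget
          Payload: JSON.stringify({ warmup: true }),
        })
        .promise()
    )
  )
}

The warmed functions should check for the warmup flag and return immediately, so the pings stay practically free.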

I've also read that API Gateway (the HTTP routing solution for your application) can add some delay, since it's just another service you depend on (though I believe the delay is minimal), and that you could avoid it by invoking functions directly from your SPA or native application via the AWS SDK. I haven't tried that myself, but it does sound like a way of improving performance if you notice that API Gateway is slowing you down.

Serverless framework

The framework takes care of what is, in my opinion, the most complex part of building a serverless application: the configuration and deployment of such a project. In its early days it was known as JAWS (JavaScript Amazon Web Services), and as you can guess, it was a framework for JavaScript Lambda apps on AWS. Over time it grew, and today it offers a vendor-agnostic experience with the language of your choice.

Plugins

The Serverless framework on its own offers basic functionality, but if you want to do advanced package optimizations, run your code locally, or set up specific event triggers on your functions, you are in luck: a great community has contributed plugins to this framework. A must-have plugin for building serverless apps is serverless-offline, which mimics API Gateway and lets us run the application locally with the routing we define in serverless.yml, without having to manually invoke functions and supply HTTP-specific event details. Here is a list of other popular plugins.
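Enabling a plugin is just a matter of installing it (npm install --save-dev serverless-offline) and listing it in serverless.yml:

plugins:
  - serverless-offline

After that, running serverless offline start serves the application locally on the routes you defined.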

Project structure

As the single entry point, the serverless.yml file is the most important file in the project. It holds the essential project setup: the vendor, security groups, resources, the plugins used on the project, the environment configuration and, finally, the list of functions that exist on the project, along with their specific configuration and the events that can trigger them. A project can have global configuration, such as 256 MB of memory and Node.js 8.10 as the runtime, but the file also lets us specify configuration per function, which overrides the global values.
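A minimal sketch of such a file, with illustrative names, global defaults and one per-function override:

service: my-service # illustrative

provider:
  name: aws
  runtime: nodejs8.10
  memorySize: 256 # global default
  environment:
    STAGE: ${opt:stage, 'dev'}

functions:
  hello:
    handler: handlers/hello.hello
    events:
      - http:
          path: hello
          method: get
  optimizeImage:
    handler: handlers/images/optimize.handler
    memorySize: 1024 # overrides the global 256 MB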

Serverless architecture is also an event-driven architecture, where a single function can listen to several types of events. We are used to building APIs that listen to HTTP requests (either from users using the application or from webhooks), but the Serverless framework lets us trigger functions on events such as changes in storage (S3), the database (DynamoDB), a queue (SQS), a schedule (CloudWatch), etc.
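A sketch of what non-HTTP triggers look like in serverless.yml (the bucket and the schedule are illustrative):

functions:
  resizeUpload:
    handler: handlers/images/resize.handler
    events:
      - s3:
          bucket: my-uploads-bucket # illustrative bucket
          event: s3:ObjectCreated:*
  nightlyReport:
    handler: handlers/reports/nightly.handler
    events:
      - schedule: cron(0 3 * * ? *) # every day at 03:00 UTC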

Here's an example of a basic structure that can be used on a project; the actual code can be found in the GitHub repository.
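A sketch of such a layout (the file names are illustrative):

.
├── handlers/
│   ├── users/
│   │   ├── create.js
│   │   └── get.js
│   └── images/
│       └── optimize.js
├── middleware/
├── utils/
├── .env.example.yml
├── .env.yml
└── serverless.yml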

We have a handlers folder for all functions, split into multiple folders grouped by functionality, where every .js file contains one function only. This approach lets me navigate the project easily.

The middleware and utils folders contain reusable helper functions, where the middleware functions try to mimic the middleware pattern used when building Express/Koa-powered APIs.

Two environment files are shown in the structure. .env.example.yml documents the structure of the environment that needs to exist on the project and the variables used throughout the application via the process.env object. .env.yml holds the actual values (in my case, for the local environment) and should always be ignored by git so we don't leak our environment details and secrets.

Things I learned

There are no global middlewares. With Express and similar web frameworks we can easily configure middleware that runs on every route before the actual route-level middleware. No such concept exists here, since every function (route) acts as a completely separate entity. If you want to apply "middleware" to every function, you have to specify it in each function file. The workaround is a wrapper function, used by every function on the project, that takes care of the shared setup. Examples of such middleware are error handling, body parsing, setting response headers, etc.
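A minimal sketch of such a wrapper, with hypothetical names: it parses the body, runs the actual handler and turns thrown errors into a consistent HTTP response:

// middleware/wrap.js, a hypothetical shared wrapper applied in every function file
module.exports.wrap = (handler) => async (event, context) => {
  try {
    // "Body parsing middleware": hand the handler a parsed body.
    const body = event.body ? JSON.parse(event.body) : {}
    const result = await handler({ ...event, body }, context)

    // "Response middleware": consistent headers and a stringified body.
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(result),
    }
  } catch (error) {
    // "Error middleware": one place that turns throws into HTTP errors.
    return {
      statusCode: error.statusCode || 500,
      body: JSON.stringify({ message: error.message }),
    }
  }
}

// In a handler file:
// const { wrap } = require('../../middleware/wrap')
// module.exports.hello = wrap(async (event) => ({ message: 'Hello!' }))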

Carefully indent your serverless.yml file. The configuration file is a .yml file, meaning it relies on indentation to group and structure the values you pass in. I always have trouble setting up the HTTP event for a function because, to me, it looks like it requires two levels of indentation there instead of one.
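For reference, this is the indentation that works:

functions:
  hello:
    handler: handlers/hello.hello
    events:
      - http: # the event list sits one level under the function...
          path: hello # ...and the http keys go two levels deeper than the dash
          method: get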

The stage needs to exist; use custom domains. When you deploy a serverless application to AWS, the API Gateway URL for your project is structured like https://my-api-id.execute-api.region-id.amazonaws.com/stage-name/{resourcePath}, where the stage name is dev by default. I find it much easier to work with a custom domain, since the base URL given by Amazon is obviously very long, and having the stage name after the TLD can be confusing.

Always stringify the body. At some point I forgot to stringify the body. Everything worked normally locally (local development was powered by the serverless-offline plugin), but once we deployed the app it just didn't work. It took us a while to spot the silly mistake: API Gateway expects a stringified body, and the route was returning a raw object, so it would just throw a timeout error without any actual explanation.
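The difference is a single call; a sketch of the wrong and the right response:

// Works locally with serverless-offline, but times out behind API Gateway:
callback(null, { statusCode: 200, body: { ok: true } })

// What API Gateway actually expects: a string body.
callback(null, { statusCode: 200, body: JSON.stringify({ ok: true }) })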

Do not touch the global Promise. Here's another funny issue I ran into. pg-promise lets you use a Promise library of your choice so you can utilize all the Promise functionality that library offers, and we were used to just writing global.Promise = require('bluebird'). That works without any issue in Express, Koa or any other framework. However, Lambdas seem to rely on the Promise implementation of their own environment, and replacing it would break things: the function would never return a response to API Gateway, so it would keep running for up to 15 minutes, hit the timeout, and make you pay for your mistake. :)
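The safe alternative is to pass the Promise library through pg-promise's initialization options instead of touching the global one:

// Scope Bluebird to pg-promise via its init options,
// leaving the runtime's global Promise untouched.
const bluebird = require('bluebird')
const pgp = require('pg-promise')({ promiseLib: bluebird })

// Hypothetical connection string from the environment.
const db = pgp(process.env.DATABASE_URL)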

Use an AWS Linux environment to build and package your project. The code you write is executed in a Linux environment set up by AWS, meaning all the code and modules you ship with your function have to run in that environment. Modules with native bindings are compiled for the OS they are installed on, so a build made on another OS won't run on Lambda. I work on a MacBook Pro, and all the modules that worked locally also worked on Lambda, until I added a module for optimizing images. At that point you have two options: configure your CI in a similar Linux environment, build the code there and deploy it to AWS; or spin up an EC2 AWS Linux instance, install the image optimization module there, download it locally, and then, on every deployment, make sure the copy of the module from EC2 exists in your local node_modules folder, overwriting your OS-specific one.

Forget about sockets. If you were planning to implement Socket.io and handle connections within your serverless app, forget about it and find a service that will do it for you. Serverless functions were not designed to run forever, which is the biggest obstacle to implementing socket connections.

Know your limits. If you intend to build a serverless application with many functions, keep in mind that a single serverless application (service) can have up to 60 functions defined in its serverless.yml file; if you go over that number, deployment fails with an error. I ran into that issue during the last phase of the project, but luckily I had some unused functions I could delete in order to deploy the app. If your project will have more than 60 functions, you'll have to split them into separate services. In my opinion, that adds a whole new layer of complexity, both in development (having to run several separate services) and in deployment to AWS (multiple API Gateways that have to be configured via custom domains).

Google is your friend

As with every problem you run into while creating software, Google really is your friend. The Serverless framework CLI gives you a lot of feedback when an error occurs, but once you deploy your application to the vendor of your choice (in my case AWS), you'll start running into new errors. They'll probably be due to some AWS limitation, or to your CloudFormation stack ending up in the UPDATE_ROLLBACK_FAILED state, where you actually have to use the AWS web console to make your stack deployable again.

The Serverless framework supports many different vendors, and the base functionality should exist on all of them, but when you run into issues you'll probably have more luck if your choice was AWS, since its bigger community means more people have hit the same issue you have. I haven't really tried the other providers, so I can't speak for the communities around them.

Conclusion

Function as a Service is indeed a huge step forward in cloud computing, especially if we are looking to create something that scales easily.

Just imagine what would happen if your product got featured on some popular site and crashed because it couldn't handle the load. FaaS products come in handy because they scale well, but you also have to write your product so that it can handle such a load.

I find it ideal for tasks that don't involve users directly, so cold starts have no impact on the user experience. If you decide to use it for something user-facing, make sure to analyze usage and keep some functions warm to avoid cold starts. And if you really need your service to be super fast and just cannot accept any slowness, FaaS isn't the choice for you.
