Nothing excites the team quite as much as a new and unique technical challenge. So when we were presented with especially difficult backend obstacles while working on Flaggerade’s app, we seized the opportunity to innovate. After lots of hard work and creative problem solving, we built the Fyresite MySQL Data API Wrapper. Here’s how we got there.
The Tools of the Trade
Let’s start with a very brief summary of the AWS services and tools we’ll be using.
AWS Lambda
AWS Lambda is easily one of the most powerful Amazon services available. It lets developers run code without provisioning or managing servers, and without worrying about scaling. Just deploy the code as a Lambda function and let Amazon handle the rest.
Amazon Relational Database Service (RDS)
Managing your own databases costs a lot of time, money, and effort. So let Amazon do all the work for you with RDS. This service allows developers to set up a fully-functioning database instance in just a few minutes. Meanwhile, all the complicated and tedious work that normally goes into setting up, provisioning, patching, and backing up a database falls on Amazon’s shoulders.
These services clean up the backend infrastructure considerably. However, getting all the pieces to work together can be pretty tricky, especially once an ORM enters the picture. Here are some of the biggest challenges of using Sequelize, a popular Node.js ORM, on RDS with Lambda.
Slower Cold Starts
Lambda only runs functions on-demand, which saves tons of money and computing resources. However, when a function hasn’t been invoked for several minutes, its execution environment must be initialized from scratch before your code can run. This process is called a cold start, and it adds significant latency. Every time your app has a spike in traffic, new instances of your functions may cold start.
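The distinction matters because anything initialized outside the handler survives between warm invocations in the same container. A rough illustration in plain Node.js (no AWS involved; the names here are our own):

```javascript
// Module-level code runs once per cold start; the handler runs every invocation.
let coldStarts = 0;

function initialize() {
  // Imagine expensive setup here: opening DB connections, loading config, etc.
  coldStarts += 1;
  return { initializedAt: Date.now() };
}

const state = initialize(); // paid once, at cold start

function handler(event) {
  // Warm invocations reuse `state` without paying the init cost again.
  return { invocation: event.id, initializedAt: state.initializedAt };
}

// Two "warm" invocations share a single initialization:
handler({ id: 1 });
handler({ id: 2 });
```

This is also why database connections opened at module level (as Sequelize does) make cold starts heavier, as the next section explains.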
A cold start is always slower than a warm invocation, but it is especially slow when you use Sequelize on RDS with Lambda. There are two reasons for that: Sequelize connections and VPC deployment. We’ll cover each in turn.
Sequelize Connections
Connecting an ORM like Sequelize to a database instance creates an aptly-named database connection. Each connection takes time and resources to establish and maintain, so when a Lambda function cold starts, the system has extra work to do before the code is ready. Once the Lambda is warm, Sequelize mitigates some of these performance issues by reusing connections through its pool. But overall, the connections still slow down cold starts.
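In a Lambda environment, the common advice is to shrink Sequelize’s pool so each warm container holds at most one connection and releases it quickly. A sketch of the relevant options (`pool` and its keys are real Sequelize options; the values here are illustrative, not a tuned recommendation):

```javascript
// Sequelize connection options tuned for Lambda (illustrative values).
// With `max: 1`, each warm Lambda container keeps at most one MySQL
// connection, and a short `idle` window releases it soon after use.
const sequelizeOptions = {
  dialect: 'mysql',
  pool: {
    max: 1,        // one connection per container
    min: 0,        // allow the pool to drain completely
    idle: 1000,    // ms a connection may sit idle before release
    acquire: 3000, // ms to wait for a connection before erroring
  },
};

// In real code:
// new Sequelize(database, username, password, { host, ...sequelizeOptions })
```

Even with a tight pool, every container still owns a connection, which is how the VPC limits below come into play.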
VPC Deployment
Each database instance is tucked away in a secure, private section of the cloud called a Virtual Private Cloud (VPC). Your VPC and everything in it, including the database, is completely isolated from the outside world. That way, the data is extremely secure.
However, when you add Lambda to the equation, new challenges start to pop up.
For the backend to function properly, Lambda must be on the same private network as the database. If Lambda is outside the VPC environment, it cannot communicate with the database and the app won’t work. But when Lambda is inside the VPC, cold starts can take as long as several seconds to complete.
Another drawback is that VPCs impose network interface (ENI) limits, documented in the AWS User Guide. The default maximum number of network interfaces per region is 350 or five times your on-demand instance limit, whichever is greater. If you reach this limit, your functions will be throttled. To loosen it, you can either increase your on-demand instance limit or submit a request form directly to Amazon. Either way, these limits remain a challenging issue to overcome.
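The default quota described above boils down to a one-line formula (the function name is ours):

```javascript
// Default per-region ENI quota for Lambda functions in a VPC:
// the greater of 350 or five times the on-demand instance limit.
function defaultEniLimit(onDemandInstanceLimit) {
  return Math.max(350, 5 * onDemandInstanceLimit);
}

// With an on-demand limit of 20, the floor of 350 applies;
// with a limit of 100, the five-times rule wins at 500.
```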
Maximum Concurrent Execution Limits
Each Lambda function can only be executed so many times at once. That’s because Amazon uses a safety throttle to limit concurrent executions in each region. In 2017, Amazon raised that throttle to 1000 concurrent executions. You may be able to raise the number if you submit a request, but a limit is still a limit.
NAT Gateways
The private VPC is completely isolated from the rest of the cloud, but your infrastructure still needs to reach the internet. That’s where a NAT gateway comes in: as the name suggests, it acts as a gate for traffic going in and out of the VPC, connecting your Lambda functions to the internet.
While a NAT gateway is much more efficient than a NAT instance, you should still avoid it if possible. NAT gateways may be useful, but they carry a fairly high fixed cost of roughly $40/month. To save that money, you need an alternative architecture.
Amazon Data API
One of the best ways to get around these Lambda/VPC issues is to use Amazon’s Data API for Aurora Serverless MySQL. With the Data API, you can deploy functions in the normal Lambda environment instead of inside the private VPC. The solution is extremely elegant: the API exposes a secure HTTP endpoint, so you don’t need a persistent connection to the database, and it integrates seamlessly with the AWS SDKs.
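To make that concrete, here is a hedged sketch of a single Data API query using the AWS SDK for JavaScript (v2). The ARNs and database name are placeholders; `RDSDataService.executeStatement` and its parameter names come from the SDK, but verify them against the current documentation:

```javascript
// Parameters for one Data API query over HTTPS. No persistent database
// connection is held, so the Lambda can live outside the VPC.
const params = {
  resourceArn: 'arn:aws:rds:us-east-1:123456789012:cluster:my-cluster',            // placeholder
  secretArn: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds',      // placeholder
  database: 'appdb',                                                               // placeholder
  sql: 'SELECT id, name FROM users WHERE id = :id',
  parameters: [{ name: 'id', value: { longValue: 1 } }],
};

// Inside a real Lambda handler (requires the aws-sdk package):
// const AWS = require('aws-sdk');
// const result = await new AWS.RDSDataService().executeStatement(params).promise();
```

Note how credentials come from Secrets Manager via `secretArn` rather than being wired into a connection string, which is part of what makes the endpoint secure.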
Many of the challenges melt away once you implement the Data API. Since you don’t have to include Lambda in the VPC, cold start times become much faster. Even better, the issues surrounding NAT gateways, connection limitations, and concurrent execution limits go away.
Fyresite’s MySQL Data API Wrapper
That’s why Fyresite created the MySQL Data API Wrapper. This package lets Sequelize use the AWS Data API with minimal changes. It works by mocking parts of the MySQL2 package interface inside Sequelize’s MySQL dialect. While it has only been tested against Sequelize v5, the Fyresite wrapper makes a huge impact on performance. Most notably, you get the benefits of the Data API and Sequelize combined, so the entire system runs faster, cheaper, and more efficiently.
You can get the MySQL Data API Wrapper at these NPM and GitHub links. All the setup information is available there. This wrapper is just one of our many open-source projects. You can find future projects like this on our FireWare GitHub Org or by contacting us directly.