Ever since software engineering became a profession, we have been trying to serve users all around the globe. With this comes the issue of scaling and how to solve it. Many times these thoughts of scaling up our software to unimaginable extents are premature and unnecessary. This has turned into something else altogether with the rise of serverless architectures and back-end-as-a-service providers. Now we’re not facing issues of how to scale up and out, but rather how to scale our database connections without creating heavy loads.
With the reduced insight we have into the underlying infrastructure, there’s not much we can do except write sturdy, efficient code to mitigate the issue.
Or is there? 🤔 Going through the process of opening a new database connection every time a function runs would be incredibly inefficient and time-consuming. That’s why serverless developers use a technique called connection pooling: the database connection is created on the first function call and re-used for every consecutive call. You may be wondering how this is even possible.
The short answer is that a lambda function is, in essence, a tiny container. It’s created and kept warm for an extended period of time, even though it isn’t running all the time. Only after it has been inactive for over 15 minutes will it be terminated.
This gives us a time frame of 15 to 20 minutes during which our database connection stays active and ready to be used without any performance loss.
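Here’s a minimal sketch of what such a connection module could look like. The `DB` environment variable holding the connection string and the log messages are assumptions for illustration:

```js
// db.js – cache the mongoose connection across warm invocations
const mongoose = require('mongoose')

// module-scoped object that survives between invocations of a warm lambda
const connection = {}

module.exports = async function connectToDatabase() {
  if (connection.isConnected) {
    console.log('=> using existing database connection')
    return
  }

  console.log('=> creating new database connection')
  // process.env.DB is assumed to hold the MongoDB connection string
  const db = await mongoose.connect(process.env.DB)
  connection.isConnected = db.connections[0].readyState
}
```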
Once you take a better look at the code above, you can see it makes sense. At the top, we’re requiring `mongoose` and initializing an object called `connection`. There's nothing more to it. We'll use the `connection` object as a cache to store whether the database connection exists or not.
The first time the `db.js` file is required and invoked, it will connect mongoose to the database connection string. Every consecutive call will re-use the existing connection.
Here’s what it looks like in the `handler`, which represents our lambda function.
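A rough sketch of such a handler, assuming the `db.js` module above; the `getNotes` name, the inline `Note` model, and the response shape are illustrative assumptions:

```js
// handler.js – a lambda function re-using the cached database connection
const mongoose = require('mongoose')
const connectToDatabase = require('./db')

// a hypothetical Note model, defined inline to keep the sketch self-contained;
// the guard prevents re-compiling the model on warm invocations
const Note = mongoose.models.Note ||
  mongoose.model('Note', new mongoose.Schema({ text: String }))

module.exports.getNotes = async (event, context) => {
  // return the response right away instead of waiting for the open connection
  context.callbackWaitsForEmptyEventLoop = false

  await connectToDatabase()
  const notes = await Note.find({})

  return {
    statusCode: 200,
    body: JSON.stringify(notes)
  }
}
```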
This simple pattern will make your lambda functions cache the database connection and speed them up significantly. Pretty cool huh? 😊 All of this is amazing, but what if we hit the cap of connections our database can handle? Well, great question! Here’s a viable answer.
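What follows is a rough sketch of the same caching trick with a MongoDB Stitch client. The `STITCH_APP_ID` variable is a placeholder, and the SDK calls follow the older `mongodb-stitch` server SDK, so treat them as assumptions rather than the exact API:

```js
// stitch.js – cache the Stitch client across warm invocations
const { StitchClientFactory } = require('mongodb-stitch')

const connection = {}

module.exports = async function connectToStitch() {
  if (connection.client) {
    console.log('=> using existing Stitch client')
    return connection.client
  }

  console.log('=> creating new Stitch client')
  connection.client = await StitchClientFactory.create(process.env.STITCH_APP_ID)
  return connection.client
}
```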
As you can see, the pattern is very similar. We create a Stitch client connection and re-use it for every subsequent request. The lambda function itself looks almost the same as the example above.
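A sketch of such a handler, re-using the cached client from the module above; the actual query is left as a comment because the exact calls depend on the Stitch SDK version you use:

```js
// handler.js – a lambda function re-using the cached Stitch client
const connectToStitch = require('./stitch')

module.exports.getNotes = async (event, context) => {
  context.callbackWaitsForEmptyEventLoop = false

  const client = await connectToStitch()
  // ...authenticate and query your collections through `client` here;
  // the exact calls depend on the Stitch SDK version

  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Stitch client ready' })
  }
}
```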
Seems rather similar. I could get used to it. However, Stitch has some cool features out of the box like authentication and authorization for your client connections. This makes it really easy to secure your routes.
In the logs, you can see a new connection being created on the first invocation and the existing one re-used on consecutive calls.
The service has a 14-day trial with unlimited features, so you can try it out, and feel free to reach out if you’d like an extended trial. 😊
Hope you guys and girls enjoyed reading this as much as I enjoyed writing it. Hit that tiny heart for others to see this here on Medium. Until next time, be curious and have fun.