The Top 5 Pitfalls of Serverless Computing and How to Overcome Them

Serverless first came onto the scene in 2014 with the launch of AWS Lambda. It offers a cloud-computing execution model in which the cloud provider provisions, runs, and scales the servers. As with any relatively recent technology, it has a steep learning curve, and it comes with its own set of benefits and drawbacks.

In this post, we’ll discuss the top five pitfalls that new adopters run into when getting started with serverless: loss of control, security, architectural complexity, and difficulties with testing and monitoring. Let’s take them one by one, see what challenges they present, and how we can best overcome them.

Loss of Control

One of the biggest criticisms of serverless is loss of control. What does this mean? In serverless applications, we tend to rely on many services managed by third parties (BaaS, or backend as a service, in serverless slang) and on function platforms (FaaS, or functions as a service). Both are developed and operated by someone else.

In the non-serverless world, we control the whole stack; we decide which version of what software goes into each of our services, and we run our own queues, databases, and authentication systems. The more we move to serverless, the more control we lose, because we give up ownership of the software stack that our services use. The positive side is that we can put more time and energy into delivering business value.

If you are really concerned about losing control of your infrastructure, the most important thing you can do is a risk assessment. This will let you analyze what’s most important for your business and how losing control would affect it. For example, if you are in the business of selling cakes online, it may not make sense to spend a lot of time building the authentication system that your e-commerce platform uses. By using a third-party authentication system, which is already tested and hardened, you can deliver the same value, or even more, than by writing it yourself. Remember that when you choose to write a system yourself, you need to own it, and maintain it, for the duration of its life.


Security

One of the biggest risks of serverless is a poorly configured application. Poor configuration can lead to many issues, including (but not limited to) security problems. If you are using AWS, for example, it’s important to correctly configure the permissions that your services have for accessing other AWS services. When permissions are too broad, a function can end up with more privileges than it needs, leaving room for a security breach. Another problem is that the security mechanisms inside the cloud do not extend beyond it: if we are connecting to third-party services over HTTP, we need to make sure that those connections are secure and the data is encrypted.

To overcome security issues, the most important thing is to grant your AWS resources exactly the permissions they need to perform their tasks. Define permissions per function, scope them strictly, and encrypt all your data at rest. For third-party connections, make sure that every request takes place over a secure (TLS) connection.
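As an illustration, a least-privilege policy can be built per function and scoped to the exact actions and resources it uses. This is only a sketch: the table ARN and the chosen DynamoDB actions are hypothetical examples, not a prescription for your application.

```python
import json

# A minimal sketch of a per-function, least-privilege IAM policy.
# The table ARN and actions below are hypothetical examples -- scope
# yours to the exact resources and operations each function touches.
def build_function_policy(table_arn):
    """Return policy JSON allowing only reads on one DynamoDB table."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Only the specific actions this function performs...
                "Action": ["dynamodb:GetItem", "dynamodb:Query"],
                # ...and only on the one table it actually uses.
                "Resource": table_arn,
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(build_function_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/Orders"))
```

The point of keeping one policy per function is that a breach in any single function exposes only that function’s narrow slice of your infrastructure.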

Architectural Complexity

When developing serverless applications, even the simplest application can have a complicated architecture diagram. In general, the code for each function tends to be very simple and to do only one task, which leads to a lot of functions per application. In addition, we use many managed services for all kinds of tasks. Combine these two things and the architecture tends to get complicated: the complexity of an application moves from the code to the architectural level. You have to be careful here to follow solid architectural patterns, or you can end up with a tangled architecture.

The simplest way to avoid this is to educate yourself on how to build distributed systems. Learn the most common architectural patterns for designing event-driven applications and become familiar with asynchronous messaging. Developers and architects often think in terms of synchronous communication, but in a distributed system, asynchronous messaging is usually more efficient. It is very important to understand how the different services integrate with each other, how much latency a request incurs, and where the bottlenecks are. Sometimes, by slightly rethinking the architecture, you can improve the performance of the entire application. For example, if one service is taking an inordinate amount of time to respond, ask yourself: Can I add a cache in front of it? Can some of the synchronous calls become asynchronous messages?

Difficult to Test

Because of their distributed nature, serverless applications tend to be hard to test. Developers generally like to run tests locally because that is what they were accustomed to before serverless. But in the serverless world, local tests are complicated, as we need to somehow mock the cloud services on our local machine. In non-serverless applications, most of the risk tends to live in the code; in serverless applications, configurations and integrations are the greatest risks, so we need to make sure that we perform a decent amount of integration tests as well.

To overcome the difficulty of testing serverless applications, invest the time and effort upfront to architect your application correctly, so you can cover all your business logic with unit tests, and then write good integration tests that run in the cloud.
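One way to make that split concrete is to keep the handler thin and the business logic pure, so the logic can be unit-tested locally without mocking any cloud service. The event shape and the discount rule below are hypothetical examples, not a real application's contract.

```python
# A minimal sketch of separating business logic from a Lambda handler.
# The event fields and the discount rule are hypothetical examples.
def apply_discount(total_cents, coupon):
    """Pure business logic: no cloud services, trivial to unit-test."""
    if coupon == "CAKE10":
        return total_cents - total_cents // 10   # 10% off
    return total_cents

def handler(event, context):
    """Thin handler: parse input, delegate to logic, format output."""
    total = apply_discount(event["total_cents"], event.get("coupon"))
    return {"statusCode": 200, "total_cents": total}

# Local unit tests exercise the logic directly, no mocks required.
assert apply_discount(1000, "CAKE10") == 900
assert apply_discount(1000, None) == 1000
print(handler({"total_cents": 2000, "coupon": "CAKE10"}, None))
```

The integration tests then only need to verify the thin layer that remains: permissions, event wiring, and the handler's input/output contract.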

Difficult to Monitor

For the same reasons that our system is difficult to test, it is also difficult to monitor. Monitoring serverless applications is complex, and the available tooling is still maturing. In a traditional application, we usually focus on monitoring the execution of code, while in serverless applications, we also need to monitor the integrations between the different services and make sure that we can follow a request end to end through our distributed system.

In order to deal with this effectively, it is crucial that you find a good monitoring tool that works for your application. There are many out there and they all have different features. Best practice says you should have one that supports the ELK Stack, has features for visualizing the different logs and metrics (not only from your functions, but also from your resources), and, additionally, supports distributed tracing. The most advanced tools will have integrations to serverless services like Lambda already built-in.
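Even before adopting a dedicated tool, you can make end-to-end tracing easier by emitting structured log lines that carry a correlation ID through every service a request touches. The field names below (`request_id`, `service`) are conventions chosen for this sketch, not a standard.

```python
import json

# A minimal sketch of structured, correlated logging. Passing the same
# request_id through every downstream call lets a log search reconstruct
# one request's full path across services.
def format_log(request_id, service, message, **fields):
    """Build one JSON log line carrying the request's correlation ID."""
    return json.dumps({
        "request_id": request_id,
        "service": service,
        "message": message,
        **fields,
    })

# The same request_id appears in both services' logs, so searching
# for "req-42" shows the whole journey of that request.
print(format_log("req-42", "checkout", "order received", total_cents=1900))
print(format_log("req-42", "payment", "charge succeeded"))
```

Structured JSON lines are also what log aggregators and distributed-tracing tools index best, so this habit pays off again when you do adopt one.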


To sum up, serverless has many problems, but what technology out there doesn’t? When choosing what to use in your next application, it’s important to do an analysis of what the most critical parts of your business are. If you need to enter the market quickly, have a very strict budget, or simply want to focus on your business logic, then serverless might be the right technology for you. Many of the problems mentioned here will be solved as better tooling appears and people become more familiar with distributed systems. We look forward to following and learning as the landscape continues to evolve.
