
Serverless Security Explained

As the cloud ecosystem has developed and expanded, so have the ways in which users utilize cloud services to meet their operational and financial requirements and to squeeze the most flexibility out of the cloud environment. One such example is the growth of serverless computing, generally shortened to simply “serverless” by those who use it.

This popular form of cloud computing is being adopted by many organizations seeking to be more nimble and cost-effective. However, as organizations begin or continue to integrate serverless into their operations, they will need to think more seriously about how they practice cybersecurity in a serverless environment if they hope to use it responsibly.

What is Serverless?

Serverless is a natural progression of cloud computing wherein organizations make use of a cloud provider’s infrastructure to scale up their computing power on demand. Essentially, the idea is that not all functions of an application are needed all the time, so why should you pay rent for a server that isn’t being constantly used?

This makes sense when we think about it, since we only need some of these services, such as logic, databases, and authentication, for short and specific activities, like when a user makes a request to our web application. We are essentially using serverless services to carry out a range of functions, leading many to refer to serverless as Function as a Service (FaaS), further adding to the (X)aaS alphabet soup. We invite you to kill a few minutes with your colleagues coming up with your own “X as a Service” combinations. C’mon, you know you want to. Alternatively, see how many you can name.
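
To make the “function” in Function as a Service a little more concrete, here is a minimal sketch of an AWS Lambda handler in Python. The function name and the order lookup are purely illustrative; the point is simply that the code runs only when a request arrives and you pay only for that execution.

```python
import json

# A minimal AWS Lambda handler: the platform invokes this function only when a
# request arrives (here, via an API Gateway proxy integration) and bills only
# for that execution. The "order" lookup below is purely illustrative.
def lambda_handler(event, context):
    # API Gateway passes query string parameters inside the event payload
    params = event.get("queryStringParameters") or {}
    order_id = params.get("order_id")

    if not order_id:
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "order_id is required"}),
        }

    # In a real application this is where you would call a database,
    # an authentication service, or other back-end logic.
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "status": "received"}),
    }
```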

The term serverless is a bit of a misnomer, since there is still a server in the mix, even if it is only being used intermittently at someone else’s facility. And even though turning to serverless reduces our perceived ownership over what happens on that server, we are still going to be held responsible for any security issues that may arise.

Your Serverless Security Checklist

As plenty of companies have painfully learned in recent years, just because AWS or Azure is hosting your applications doesn’t mean that they are responsible for their security. Working with these cloud providers has plenty of security advantages, but your organization still needs to take basic precautions to work securely with serverless. In hopes of helping your team out, we have come up with the top five items you should be considering when managing your organization’s serverless security practices.

#1 Stay Up to Date      

Even after you have broken up your application and spread its various functions out to the four corners of the world wide web, it is still your responsibility to make sure that your application itself is secure. 

One of the most effective ways to keep your app secure is to make sure that all of the components are up to date. Are you using third-party software that needs to be patched? Check and make sure. 

An oft-forgotten aspect of updating software is the need to update your components’ dependencies, especially when you are using open source components in your app. Given that over 92% of modern applications use open source components for 60-80% of their code base, chances are that this is an issue for your organization. Working securely with open source comes with some key differences from proprietary and even commercial software. One is the difficulty of tracking when a new vulnerability or fix has been released. Another is that we need to consider the dependencies that our components are built on top of: if there is a vulnerability in one of those, it can impact the security of our application without us even being aware of it. We often lack visibility into our open source component dependencies if we are working without automated tools that can identify all of the open source components and their dependencies in our libraries.

The best solution here is to update to the latest version of the component, as project maintainers will most commonly push out fixes to vulnerabilities found in their code. It is then up to us to make sure that we implement the necessary fix by updating.
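
Automated tooling does the heavy lifting here, but as a rough illustration of what such a check involves, the sketch below queries the public OSV vulnerability database for a few pinned Python dependencies. The package list is hypothetical; in practice you would parse your real requirements file or lockfile rather than hard-coding versions.

```python
import json
import urllib.request

# Hypothetical pinned dependencies; in practice, parse requirements.txt
# or your lockfile instead of hard-coding versions here.
PINNED = {
    "requests": "2.25.1",
    "flask": "1.1.2",
}

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name, version):
    """Ask the OSV database for advisories affecting one pinned package."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": "PyPI"},
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

if __name__ == "__main__":
    for name, version in PINNED.items():
        vulns = known_vulnerabilities(name, version)
        if vulns:
            ids = ", ".join(v["id"] for v in vulns)
            print(f"{name}=={version}: update needed ({ids})")
        else:
            print(f"{name}=={version}: no known advisories")
```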

#2 Principle of Least Privilege

Setting out rules for who can access what is as important for maintaining serverless security as it is for working with the servers that you have in your office. Grant least privilege per function and use role-based authentication (IAM roles) to minimize your potential exposure from an attacker, foreign or domestic.
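
To make “least privilege per function” concrete, here is a minimal sketch using the AWS SDK for Python (boto3) that creates an execution role for a single hypothetical function. The role can read one DynamoDB table and write its own logs, and nothing else; the role name, table, account ID, and log group are placeholders for illustration.

```python
import json
import boto3  # AWS SDK for Python

iam = boto3.client("iam")

# Only the Lambda service may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Least privilege: read a single table and write this function's own logs.
# The table and log group ARNs below are placeholders.
permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/get-order:*",
        },
    ],
}

iam.create_role(
    RoleName="get-order-execution-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="get-order-execution-role",
    PolicyName="get-order-least-privilege",
    PolicyDocument=json.dumps(permissions),
)
```

Each function gets its own role like this one, so a compromised function can only do what that one function legitimately needs to do.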

This principle is a big one, since the more access a user has, the more potential damage they can inflict on your organization. The threat may come from a hacker successfully compromising Tim in accounting’s email account and gaining control over his credentials. To minimize the risk here, we should set Tim’s privileges so that he can access only the functions or actions that deal with the company’s finances. While it is scary to think of a scofflaw rummaging around in the accounting files, it would be far worse if they were able to move laterally into customer information or steal your company’s proprietary technologies.

There is also the concern that the person using Tim’s credentials to try to steal valuable information isn’t a far-off hacker, but is actually Tim himself. Insider attacks are a growing concern for organizations, so limiting what the real Tim can access is still a smart move to make. We’re watching you, Tim.

#3 You Gotta Keep Em Separated

Similar to the principle of limiting what a specific user can access with their given credentials, it is important to segregate networking and resource access per function as well.

Sometimes referred to as micro-segmentation, this is the idea that we need to set up barriers so that if an attacker is able to make it past our security protections, then the fox can’t take up the full run of the henhouse.

So just as we would keep our databases segmented one from the other, we want to isolate our functions as well so that if tragedy strikes, then we have limited the potential damage.  
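
To illustrate one way of doing this on AWS, here is a rough boto3 sketch that gives a single hypothetical function its own security group, with outbound access only to the database it actually needs, and attaches the function to a private subnet. The VPC, subnet, security group IDs, and function name are placeholders.

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2")
lambda_client = boto3.client("lambda")

# Hypothetical IDs; substitute your own VPC, subnet, and database security group.
VPC_ID = "vpc-0abc1234"
PRIVATE_SUBNET_ID = "subnet-0abc1234"
DB_SECURITY_GROUP_ID = "sg-0db12345"

# One security group per function, allowing outbound traffic only to the
# database this function actually needs (PostgreSQL on port 5432 here).
sg = ec2.create_security_group(
    GroupName="get-order-fn-sg",
    Description="Egress to the orders database only",
    VpcId=VPC_ID,
)
ec2.revoke_security_group_egress(  # drop the default allow-all egress rule
    GroupId=sg["GroupId"],
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)
ec2.authorize_security_group_egress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": DB_SECURITY_GROUP_ID}],
    }],
)

# Attach the function to a private subnet with its dedicated security group,
# so a compromise of this function cannot reach unrelated resources.
lambda_client.update_function_configuration(
    FunctionName="get-order",
    VpcConfig={"SubnetIds": [PRIVATE_SUBNET_ID], "SecurityGroupIds": [sg["GroupId"]]},
)
```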

#4 Keep Your Eyes on the Logs

Once you start using your serverless infrastructure, you may find that everything gets noisy fast. The sheer number of requests being sent to your serverless architecture means that functions with vulnerabilities can slip by unnoticed.

Our first step toward addressing this serverless security challenge should be maintaining log analytics to identify any abnormalities in the execution logs.
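
As a minimal illustration of that first step, the sketch below uses boto3 to pull the last hour of a hypothetical function’s CloudWatch logs and flag errors and timeouts. A real setup would lean on a proper log analytics platform, handle pagination, and tune the filter pattern; the function name and the simple pattern here are assumptions for the example.

```python
import time
import boto3  # AWS SDK for Python

logs = boto3.client("logs")

# Hypothetical function name; Lambda writes to /aws/lambda/<function-name>.
LOG_GROUP = "/aws/lambda/get-order"
ONE_HOUR_MS = 60 * 60 * 1000

def recent_events(pattern):
    """Return log events from the last hour matching a CloudWatch filter pattern."""
    start = int(time.time() * 1000) - ONE_HOUR_MS
    # For brevity this ignores pagination (nextToken) in the response.
    response = logs.filter_log_events(
        logGroupName=LOG_GROUP,
        startTime=start,
        filterPattern=pattern,
    )
    return response.get("events", [])

if __name__ == "__main__":
    # Anything the runtime reported as an error or a timeout deserves a look.
    suspicious = recent_events('?ERROR ?Exception ?"Task timed out"')
    print(f"{len(suspicious)} suspicious events in the last hour")
    for event in suspicious[:10]:
        print(event["message"].strip())
```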

Second, we should be scanning our functions with fully automated tools that include checks and monitoring for our open source components. Mend’s serverless integration enables you to scan and monitor deployed Lambda functions. Once a scan is initiated, Mend automatically identifies all the open source components and dependencies, then checks them against its comprehensive database of open source repositories for security vulnerabilities and licenses. Once issues are detected, you can apply automatic policies, define workflows, and share the information within your team.

#5 Compliance Matters

Data privacy concerns become extra sensitive in regulated industries such as banking, healthcare, and others. Since we are running our applications and storing data in the cloud, there are always risks associated with these assets being public-facing.

AWS supports a variety of regulatory regimes that need to be followed, and has created some helpful guides on how to achieve compliance.

You can look for your specific compliance program through this link as well as their Quick Start Guide to help get an idea of how to begin.

For those of you working in a HIPAA-regulated industry, the good folks at AWS have a dedicated guide to help explain the architecture requirements for staying on the right side of the law.

Building Out Our Approach to Serverless Security Practices

Making the move to a serverless architecture comes with plenty of advantages as we have noted above. Cost savings, flexibility, and more make this a very appealing way to work. 

However, the challenges of a wider threat surface, with more HTTP endpoints being exposed, the risk of misconfigurations, and more, mean that we need to take the necessary precautions in our approach to serverless security.

Hopefully, by taking these tips to heart and implementing them in your serverless architecture, you will be able to avoid the common serverless security pitfalls and keep your apps safe and secure.

Meet The Author

Adam Murray

Adam Murray is a content writer at Mend. He began his career in corporate communications and PR, in London and New York, before moving to Tel Aviv. He’s spent the last ten years working with tech companies like Amdocs, Gilat Satellite Systems, Allot Communications, and Sisense. He holds a Ph.D. in English Literature. When he’s not spending time with his wife and son, he’s preoccupied with his beloved football team, Tottenham Hotspur.
