Staying Secure while Innovating Fast with AWS Serverless Compute

On this episode of the Six Five Webcast – AWS Serverless Series, Keith Townsend is joined by Usman Khalid and Spencer Dillard of Amazon Web Services' AWS Lambda team for a conversation on leveraging AWS Serverless technologies to achieve rapid innovation without compromising security.

Their discussion covers:

  • The advantages of the Serverless operating model versus traditional application development
  • Common security challenges in modern application development and how AWS addresses these
  • The shared responsibility model for securing Serverless applications on AWS
  • Built-in protections provided by AWS Serverless services like AWS Lambda and Amazon ECS with AWS Fargate
  • How the ephemeral nature of Serverless resources contributes to security

Learn more at Amazon Web Services, and watch our other videos in this series:

Exploring the Future of AWS Serverless with Holly Mesrobian

Integration on AWS: Develop a Future-Proof Integration Strategy

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Keith Townsend: Welcome to another Six Five Media production. We've had AWS on to talk about serverless a couple of times now, and we're going to dive into a really important matter, and that's security: building secure serverless apps. But first, my two guests, Usman and Spencer from AWS. Usman, can you break down for us the difference between serverless applications and traditional applications, and where there are security concerns or challenges around serverless?

Usman Khalid: Serverless applications, by their very nature, are loosely coupled. They're very small, atomic units, each acting on a specific piece of business logic. If you look at applications from 20 or 30 years ago, which were very monolithic, you really had large application servers running on top of databases. Serverless applications, by contrast, are by their very nature very close to microservices, or support microservice-based architectures. Whether they work through APIs or events, they're just naturally designed for how modern developers and modern applications are architected.

And so, along with microservices, you have CI/CD, that is, continuous integration and continuous deployment. And these applications are very much built in the cloud, born in the cloud. Your alpha and gamma test environments, as well as your production environments, are really all in the cloud. That's one of a set of large differences you would see between traditional applications and serverless applications.

Now, where things become interesting from the security side is that customers who adopt serverless applications get really excited about the speed at which they can deliver results, or the speed at which they can iterate, just because of the very nature of the architecture. You're not changing and testing large pieces of code together. You're making small, atomic changes, and you're able to roll them out continuously into production very seamlessly. The security challenge then becomes: can security keep up with that pace of innovation and the rapid pace of development? That's one challenge that comes up.

Separately, because there's no infrastructure to manage, the classic place where security teams usually hook in at enterprises, which would be, "Oh, you need to provision new infrastructure. That's where we'll catch you and make sure it's secure," goes away. Because there's no infrastructure to manage with serverless applications, security teams have to come up with a new methodology and new sets of tools for how to plug in.

And then finally, your footprint is increasing. You have a set of microservices, or many, many microservices, because you've decomposed your application. Making sure everything is secure becomes a real challenge for security teams if you're not building serverlessly. With serverless, and we'll get more into the details as we go, these things are secure by default. Those are some of the big differences, both in how serverless applications are set up and in the security challenges around them.

Keith Townsend: We're going to get into those details right away. If you've followed the AWS builder's journey at all over the past couple of decades, you know this term of shared responsibility. There is a part of the role for AWS, and there is a role for the customer. AWS, on its side, takes care of physical security, et cetera. And Spencer, I'll let you get into a little bit more of the shared security model, the serverless bit. But I wanted to follow up on Usman's comment on services. I get that, from a traditional perspective, if I wanted to secure my app, I'd say, "This IP address cannot talk to this physical IP address."

But when we're building modern applications, this doesn't work too well. How do I make sure this Lambda function doesn't have access to this ECS instance or this process running in Fargate? Two-part question: can you break down for me AWS's role in shared responsibility, and then how AWS secures things at the service level under the shared responsibility model?

Spencer Dillard: Yeah, definitely. If you look at it from a layer-cake perspective, the infrastructure security doesn't change regardless of the approach. We're responsible for the data center security, all the things you alluded to, physical security, et cetera. Those are still there regardless of the approach you take. But as you move towards using EC2 instances directly, you take on a fair amount of that responsibility as a customer: maintaining the operating system, the network controls, all of those things. And as you move up the stack to serverless solutions that give you more flexibility around what to delegate to AWS versus what you take on yourself, you have more choices and you can think about things a little bit differently.

Rather than worrying about every IP address, in Lambda, for example, you worry about service identity and controls at a more logical layer, as opposed to the physical IP-address layer. And similarly with Fargate, you worry about what role this task plays in the service mesh: what is it allowed to access and talk to? As you think about these things, it becomes less about worrying at an individual-machine level and more at a service-identity level. Which in many ways frees companies from worrying about quite as many things, because we take those on as our responsibility to enforce. It lets you operate as a customer at a little higher level, without having to worry about some of those lower-level details.
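To make the service-identity idea concrete: the kind of control Spencer describes is typically expressed as an IAM policy attached to a function's execution role rather than as an IP rule. Here is a minimal sketch; the table name and account ID are hypothetical, and this is an illustration rather than a policy from the episode.

```python
import json

# A least-privilege IAM policy expressed at the service-identity level:
# this Lambda execution role may read one DynamoDB table and nothing else.
# Notice that no IP addresses appear anywhere; access is scoped by
# role (identity) and resource ARN instead.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadsFromOneTable",
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Anything not granted here is implicitly denied, which is how "this Lambda function can't talk to that service" is enforced without any per-machine firewall rules.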

Keith Townsend: Usman, let's talk about it from a service-to-service perspective. I get this idea in the abstract that if I have a Lambda function running, I can implement controls from the AWS side that say, "This Lambda function can't talk to this other first-party AWS service." But it's hard for me to shift my old-school configuration management mindset. These things in the enterprise are stood up. I have this database that's always running. It is a physical asset I can guard; I can put my guardrails around it. While I'm saying logically, "This AWS Lambda function can't talk to this AWS Lambda function," these things are ephemeral. How does the mindset change?

Usman Khalid: I think the mindset shift from a security perspective specifically is actually amazing. A lot of people think about serverless and they think about time to market and speed of development. What they don't realize is how much the attack surface shrinks when the resources, especially compute in this case, because we're talking about serverless compute, are ephemeral. There is nothing to run targeted attacks on. There is no single IP address, for example, that you can continuously poke at on different ports. Once that surface area shrinks, the traditional attack modes that a lot of attackers use simply don't work as well against serverless applications.

While it takes a little bit of rethinking, like, "Hey, I know how to secure a database," which is running infrastructure, as Spencer alluded to, we actually have controls at the service level rather than at the IP level. For example, even if you're running your Lambda functions inside of a VPC, or virtual private cloud, attackers simply don't have that surface area. There's nothing to really spend time on, because these resources constantly get cycled. There's no one server to break into or attack.

Keith Townsend: Spencer, I'd like to go back to that previous question around the overall concept of a service versus a server. We're talking about serverless security versus server-based security. With a server, I have to harden the OS. If I want to provide a database service, let's say a MySQL database service, then I've got to make sure I've locked down the ports on the EC2 instance, locked down SSH, and made sure I'm running only the bare services needed to present MySQL. Where does that shared responsibility line fall when it comes to consuming serverless? AWS actually has serverless databases, serverless functions. Where does AWS take care of that layered cake you talk about, and where does customer responsibility pick up?

Spencer Dillard: Yeah, in quite a few ways. For one, we're built on top of things like the AWS Nitro System, with the hardware virtualization and protections built into it. And we use technologies like Firecracker, for example, in Lambda, which provides virtualization and relies on KVM for isolation. So the things we spend our time worrying about, patching, thorough pen testing, all the assessments that are happening all the time, it's really about delegating those types of activities to AWS, because they don't add a lot of value for customers. They're not what's going to differentiate one customer from another, and delegating them lets customers focus on their business.

By holding ourselves to this really hard expectation around our security posture, constantly looking for ways to improve isolation, to detect risks, and to always be doing things like patching with minimal to no interruption, we take on things that are really hard and that, to Usman's earlier point, are not where your value is. So we really take those things on and spend a lot of time constantly improving in those areas. For a lot of customers, it's just not worth their effort to do, and we can do it really well.

Keith Townsend: Usman, this goes back to a previous pointed question that I asked you around one of the biggest challenges with serverless security. And you said, “Keeping up with the speed of innovation.” As I hear the things that Spencer is talking about from a developer perspective, I’m getting excited like, “Oh, I don’t have to choose which hardened OS image to use? I can just select a service, or I can just take my code and run it?” Now I’m thinking through modernizing my CI/CD pipeline to accept this speed. What are some of the things that AWS is doing to help CISOs and the security teams adopt this new mindset?

Usman Khalid: No, that's a fantastic question. It goes back to the shared responsibility model and understanding that there are levels to it. For example, take building on Fargate. In the traditional case where you're using servers, that OS-level patching has to be the responsibility of some team. Usually, for large enterprise customers, there's an SRE team doing that and driving a campaign across the enterprise to actually get it done.

Now with Fargate, you don't have to do that. Across the different environments in your organization and in your CI/CD pipeline, everything stays patched, because Fargate takes on that responsibility. Take Lambda, a layer above that, and it takes on even more: even the actual runtime and the guest operating system where your application code, your special sauce, is running, Lambda takes responsibility for keeping patched as well. We have security teams on top of CVEs, and we have our own practices, because we take security as our number one tenet and number one priority as a company, and we're constantly rolling out these changes far faster than an average enterprise can do it themselves. Going back to how you were getting excited as a developer, this is why: now you don't have the security teams breathing down your neck.

And now, from a security team's perspective, because they know their application teams are running on this specific technology, they can actually back off; they don't need all of those guardrails. As long as they know the architecture is serverless in this specific way, they can decide how and when to engage. As I said, with Lambda there's hardly any engagement, because all of the patching work is taken care of for you. With Fargate, the host OS patching at least is taken care of; the teams now have to make sure the application-level container image is patched. Going back to the stack we're building in this conversation, the security responsibility stack, AWS takes on more and more the higher-level the serverless service that developers adopt.
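As a rough summary of the responsibility stack Usman describes, the list of things the customer still has to patch shrinks as you move up the layers. This is a simplified sketch of the conversation above, not an official AWS shared-responsibility matrix:

```python
# Simplified sketch: what the customer is still responsible for patching
# at each compute layer, per the discussion above. AWS handles everything
# below each of these lists (hardware, hypervisor, and so on upward).
patching_responsibility = {
    "EC2":     ["guest OS", "language runtime", "application code and dependencies"],
    "Fargate": ["container image contents", "application code and dependencies"],
    "Lambda":  ["application code and dependencies"],
}

for layer, duties in patching_responsibility.items():
    print(f"{layer}: customer patches {', '.join(duties)}")
```

The point of the stack is visible in the shrinking lists: with Lambda, even the runtime and guest OS patching move to AWS, leaving only the application code and its dependencies on the customer's side.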

Spencer Dillard: If you don't mind, I'd add one thing to that. The other part of it is that as companies move faster and faster, the world is constantly changing around us, and the sources of different threats, attacks, and concerns are constantly evolving as well: things like supply chain security, new vulnerabilities at different layers of hardware virtualization, and the degree to which open source has become a critical part of pretty much every application today. Being able to see that across a large customer base, from a lot of different perspectives, really puts us in a position to provide those protections, versus every customer having to be aware of every trend and every risk that happens. It just moves too fast.

Keith Townsend: Spencer, you have me feeling like a prosecutor here. You just opened Pandora's box when it comes to questions I can ask, because you've brought up the software bill of materials: where is this software coming from? How do we verify that we can use it? Different, next-level problems. And one of the things I used to push back on with my enterprise architecture brothers and sisters is this concept that I have to adopt Kubernetes in order to do DevOps. And I'm thinking, you know what? People have built scalable cloud-based applications for years, and they've had CI/CD processes that look a lot like what we call a DevOps practice today. Talk to me about AWS's philosophy on how companies integrate DevOps practices with serverless-based computing.

Spencer Dillard: Sure. I’ll start and Usman can jump in of course. And we’ve long used the DevOps model in AWS. It’s been very foundational to how we think about running our services. And the central theme of that for me is about ownership. It’s about empowering developers to be responsible for their software, their dependencies, to make sure that they really are able to fix problems and not have to wait for somebody else. And so, I think when we talk about that in the context of serverless, it’s really about reducing the surface area of the number of things that that developer has to worry about in a DevOps model. It doesn’t make it go away, but it does change the number of things that they have to worry about. As one customer said to me about a year ago, the thing that they love about serverless is there’s less decisions for their developers to make and less mistakes that can be made because they’re making less decisions. I think that really sums up a lot of the value in a really succinct way for me. Usman, anything you wanted to add?

Usman Khalid: I think the main thing I would add, having run DevOps teams for over 10 years at AWS, and looking at how teams across AWS and other parts of Amazon use serverless, is that one of the biggest challenges teams have is focus. When you own the ops as well, there's always something interesting going on in your application, whether it's security related, scaling related, user behavior, or features. Something interesting is happening. So driving that focus for your teams, and, as Spencer said, having the least amount of things for your developers to worry about so they can work as much as possible on the special sauce that is your business, is so key in this model.

Now, yes, there are some focus challenges as well. But what you get out of it is engineering teams that really deeply understand their application, so they can provide the best service to their customers, and the features they build are really well understood. That's why we're known for being one of the most customer-obsessed companies on the planet: even our developers are immersed in the customer experience. Because we operate the services, we understand what the customer pain points would be very personally, down at the individual developer level. Again, driving that focus and having fewer things to worry about, as Spencer put it, is so key in the DevOps model, and that's why serverless fits that model to a T.

Spencer Dillard: One thing I would add: when you look at the range of services we have that support developers in this environment, whether it's CloudWatch and X-Ray for monitoring and debugging, GuardDuty for threat detection, or Inspector for vulnerability management, there are all these tools that let developers minimize the amount of time they spend building automation or researching things, because the data's available. The more we keep coming back to letting developers and companies focus on their core value and minimize how much time they spend on security and tooling, the more we've helped that customer achieve its goals.

Keith Townsend: Yes. Spencer, you beat me to the punch on what I was just about to talk about. We've done a couple of these videos with you folks up to this point, from the high-level value with Dan and Pat down to integrated services at the services level, and I think that's where you're heading. Since serverless has been designed from the ground up around event-driven activities, getting something off the event bus and letting that trigger activity, you can now begin to consume this within itself. Events within the serverless architecture itself can become things that trigger security scans, et cetera.

The really advanced builders can build incredible CI/CD pipelines, and it goes on and on. I'm really looking forward to future recordings and interviews about how people are pushing the envelope. Make sure to stay tuned to this channel as we talk to AWS, not just about the business value of serverless, but about the technology and how to make these things happen. Thanks for joining us.

Author Information

Keith Townsend

Keith Townsend is a technology management consultant with more than 20 years of related experience in designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University. He is the President of the CTO Advisor, part of The Futurum Group.
