In this episode of Infrastructure Matters – Insider Edition, Steven Dickens is joined by AWS’s Ajay Nair, General Manager, AWS Lambda, for a conversation focusing on the topic of serverless computing and its place within AWS’s broader portfolio. Ajay explains that serverless has evolved from its initial definition as a way to run code without managing servers to a broader operational model focused on delivering value to customers without getting bogged down in managing infrastructure. He emphasizes that serverless allows customers to delegate outcomes like security, scale, performance, and availability to AWS experts, enabling them to focus on their unique business needs.
Their discussion covers:
- Definition of Serverless: Serverless is an operational model that enables businesses to run or build applications without the need to manage low-level infrastructure. It allows customers to delegate infrastructure responsibilities to AWS, freeing them to concentrate on delivering value to their customers.
- AWS’s Evolving Role: AWS has evolved to meet the diverse needs of its customers. Some customers require differentiated infrastructure and hardware, while others seek a more hands-off approach. AWS provides a spectrum of choices, from fully managed serverless services like Lambda to more hands-on options like EC2 instances, allowing customers to select what works best for their workloads.
- Benefits of Serverless: Customers adopting serverless benefit from lower total cost of ownership, elasticity, reliability, and speed. Serverless enables them to focus on innovation and faster delivery of applications, as AWS takes care of infrastructure management, performance optimization, and security.
- Serverless Across AWS’s Portfolio: AWS is extending the serverless operational model across its entire portfolio, not just infrastructure. This includes databases (e.g., Redshift Serverless and DynamoDB), IoT services, machine learning platforms (e.g., SageMaker), and industry-specific solutions (e.g., healthcare). AWS aims to provide a range of serverless options to meet the needs of different application classes.
Ajay Nair encourages customers to think “serverless first” for new development projects, emphasizing that serverless computing brings agility and cost efficiency to AWS users, allowing them to innovate faster while doing less manual infrastructure management.
You can watch the video of our conversation below, and be sure to visit our YouTube Channel and subscribe so you don’t miss an episode.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this webcast. The author does not hold any equity positions with any company mentioned in this webcast.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Transcript:
Steven Dickens: Hello, welcome to another episode of the Infrastructure Matters podcast, an Insider Edition. And we’ve got Ajay Nair from AWS with us. Hey, welcome to the show, Ajay.
Ajay Nair: Thank you Steven. Excited to be here. Thanks for having me.
Steven Dickens: Pleasure to have you on the show. So let’s get our listeners and viewers here oriented; position your role for us. I’m really looking forward to this conversation, but maybe let’s start with some introductions.
Ajay Nair: Absolutely. So I am the general manager for AWS Lambda as part of Amazon Web Services. In my role, I’m responsible for the business, product, and engineering operations of the AWS Lambda product line. I’ve been with AWS for coming up on 10 years now, and I’ve been involved in building various parts of the serverless portfolio, including being the launch product manager for AWS Lambda.
Steven Dickens: Fantastic. Well, there’s a lot to unpack there. The podcast’s called Infrastructure Matters. There’s a lot of discussion about serverless and what going serverless means, and you hear conflicting views of what that term actually means. You see the server vendors, and I’ve been a bit guilty of this in the past, saying servers matter, it’s not just about serverless. So maybe let’s start there first and disambiguate the term for the listeners: what it means in your context, the way you look at it, and we’ll maybe factor in AWS’s perspective there.
Ajay Nair: I can imagine the evolution and consternation associated with that word, Steven. I am guilty of being one of the people who helped create the term, so I certainly will take some responsibility for that. The definition of serverless has certainly evolved over the course of the years. When we initially launched AWS Lambda in 2014, the term serverless was really designed to help people get the mental model of running code without managing servers or infrastructure. Over the almost decade now that we’ve been doing this particular piece of work, that definition has evolved. Today when you talk to customers, their mental model of serverless is closer to delivering value for your customers without having to manage complex infrastructure capabilities. And what that translates to on a day-to-day basis is that you’re delegating the outcomes of building on the cloud to people who are experts in those outcomes.
So if you think about what development in the cloud looks like today, you have constructs like developing for distributed services: how do you manage failures at large scale, and how do you maintain availability and performance, which today are table stakes. You’ve got the complexity of managing large fleets of ephemeral compute and storage that come and go in a virtual capacity. You’ve got network connectivity between various resources that has to be managed, with permission constructs and more that come with it. And all of those require a certain degree of expertise. Over the decade or so that we’ve been doing this, that expertise has become the norm.
Everyone requires it to do their job, but that’s not the job. The job is to deliver value to your end customers, and so on and so forth. So what we see more and more is customers saying: if I can leverage AWS’s expertise in delivering those outcomes, what we call well-architected outcomes like security, scale, performance, and availability, and I can focus my efforts on the differentiated work that I need to do for my customers, that’s me adopting a serverless operational model. So today, serverless can be thought of more as that operational model of being able to run or build applications without having to focus on the undifferentiated heavy lifting of managing low-level infrastructure, and ultimately delivering faster value to your customers.
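To make that “code in, endpoint out” model concrete, here is a minimal sketch of a complete AWS Lambda function in Python. The handler name and payload field are illustrative, not from the conversation; the point is the shape: there is no server, fleet, scaling, or patching configuration anywhere in the developer’s code, because those outcomes are delegated to AWS.

```python
# handler.py -- a complete AWS Lambda function (illustrative sketch).
# Note what is absent: no server, fleet, scaling, or patching configuration.
# Lambda invokes handler() once per request and manages everything underneath.

import json

def handler(event, context):
    """Entry point Lambda calls for each invocation.

    `event` carries the request payload (e.g., from API Gateway or EventBridge);
    `context` exposes runtime metadata such as remaining execution time.
    """
    name = event.get("name", "world")  # hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```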
Steven Dickens: If we look back over that last 10 years, and I’ll get you to comment on this, we’ve seen AWS start to develop its own silicon, whether that’s Nitro, Inferentia, Trainium, and a raft of others. So we’re seeing almost divergent paths, and I’d be keen to understand your perspective on how they come together. You’ve got servers matter and infrastructure matters, to the point that AWS felt it wasn’t getting what it needed from chip manufacturers to address the requirements it saw, so it developed its own. And then on a parallel track, you’ve got serverless and a set of abstraction layers away from the underlying infrastructure. So maybe just round those two converging thoughts out for me, and maybe that’ll provide some context for our listeners.
Ajay Nair: No, it’s an excellent question. And this I think is probably one of the benefits of the sheer breadth of customers that AWS is supporting on top of us. What you will see is customers going through an evolution, and that evolution may be in the class of applications they’re building. It could be where they are in their own digital transformation journey and the culture and operational transformations they need to make. It could be the fact that they’re going through business transformations based on the changes in industry today. With the rise in the past of mobile and today of AI, there is always a new trend that they have to react to. And when people are going through transformation, their needs shift. What is differentiated today becomes undifferentiated tomorrow. And so what we’ve seen is, for example, even when Lambda started, we were seeing this emergent class of applications that colloquially you’d refer to as cloud native applications or microservices-based applications that all fit a very similar, for want of a better word, development pattern.
It was a set of machines with load balancers without any state on top of it, relatively undifferentiated in terms of the actual silicon it needed to run on. So we said, hey, we can manage that for you. You give us the code, we’ll run it on your behalf, and we can scale it. On the other end of the spectrum, you continue to see applications that are deeply differentiated on that front. A great example is the class of deep learning-related applications we are seeing today, which are deeply differentiated by the infrastructure they run on because they can squeeze out performance, and there the pattern hasn’t settled yet. We haven’t reached a point of consolidation and transformation.
So I think what you see is AWS reaching out to the breadth of customers that we are engaging and saying, look, there continue to be customers who, because of where they are in their transformation journey, the class of applications they’re building, their own cultural or operational transformation, or what they’re responding to, need differentiated infrastructure and differentiated hardware. And that’s where AWS continues to innovate. We leverage that innovation both at the silicon and hardware level and as an expert in operating that metal, and then serve it up as a service on behalf of customers who are at the other end of the spectrum, so to speak. So where speed matters and infrastructure differentiates you, use the infrastructure capabilities; otherwise, AWS helps you go faster by going serverless.
Steven Dickens: So you’ve mentioned customers a few times there, and I think you can develop whatever fantastic infrastructure you like in these hyperscale cloud platforms, but at the end of the day it comes back to how customers are consuming it and what they’re doing. How are you seeing that value from the infrastructure and from the Lambda service start to come through?
Ajay Nair: It’s a great question. So if you look at why customers are adopting, I’ll start with the serverless side of the equation and work my way through the whole thing. If you look at why customers broadly adopt AWS, there are typically three values that come up: total cost of ownership, elasticity, and reliability. That is: we can grow with your business, you’re able to do so in a cost-efficient way, and you’re able to do it in a way that you can depend on and bet your business on. We like to joke about serverless as being the compression algorithm for AWS’s experience in delivering those things. And so those three values of total cost of ownership, elasticity, and reliability continue to show up on serverless as well. Customers typically see lower cost of ownership because they’re spending less time on things they would otherwise have to do themselves.
So they’re able to use their builders for building applications and innovating on behalf of their customers, spending less time on operational and infrastructure-level capabilities while getting the dial-tone reliability and performance that they need for their customers. Performance and security outcomes that today are rapidly becoming table stakes for competing in any domain come sort of baked into the box, because AWS is able to vertically optimize the stack, from load balancers to placement to the raw compute infrastructure running on their behalf, to squeeze that out. And ultimately it all leads to agility. I like to tell my team, innovation comes from speed and speed comes from doing less. Because they’re able to iterate really quickly, they’re able to ship faster. We have customers like Edmunds who have come to us and said applications that typically took them multiple months to build, they’ve been able to ship within a scope of weeks, close to about an 8X improvement in delivery speed, by adopting capabilities like serverless.
The place where AWS marries its strength in what we are able to deliver on serverless with its infrastructure capabilities is in innovations like Firecracker. Firecracker, for example, is a native virtualization and isolation technology that we have built that allows us to create lightweight microVMs, which give customers per-invocation isolation, which is pretty much unique in the market. Because we control all layers of the stack, we’re able to build that particular innovation and deploy it. What a customer sees is an inbuilt isolation and security construct as the foundation of many of our serverless products like Lambda and Fargate, without any effort on their part. That just comes batteries-included in what they’re able to do. And on the flip side, the selection of infrastructure across the various classes of instances continues to be a place where they see product benefits.
Steven Dickens: It’s interesting, some of the things you mentioned there fit with the rubric that I carry around. I call it fit-for-purpose workload placement. An analogy people have heard me use a bunch of times is that each workload’s like a journey. Some journeys suit planes, some suit cars, some suit a pushbike, some a train. All different forms of transport are applicable; not one is more valuable than the other. Most of us own a car, but we’re all used to taking different forms of transport. It’s interesting now, and I picked up on something I want to go back to. The six or seven that I carry around are performance, availability, scalability, security. You mentioned cost, FinOps. The other one I carry around and throw into that mix is eco. We’re starting to see people place workloads in different places based on ESG. What’s your perspective? I think the takeaway from both of our comments is one size doesn’t fit all. Is there a rubric or workload placement model that you carry around in your mind, or that we should be thinking about from an AWS perspective?
Ajay Nair: I love those seven, and I wish that more decision makers were carrying those seven in their heads as rubrics for every evaluation. I think that’s a great place for us to make comparisons. Your mental model of the right choice or right technology for the right workload absolutely resonates. You will ultimately hear AWS talk about technology choices as a strategy, not a dogma. What you choose to pick is ultimately a decision based on your business context and the needs of your customers, not something that is always a de facto choice, right? In general, what you will see is that in the places where AWS is able to operate capabilities on your behalf, AWS takes on the responsibility of providing and meeting the bar for those entities that you talked about: availability, scale, performance, resilience.
You’re delegating those responsibilities to AWS. So for example, Lambda has a public SLA of three and a half nines, which says AWS stands behind the endpoint that you’re running on top of us; we’ll keep it operating, and if it goes down, we are responsible. We get called, not you, at that particular point. Other aspects like security always continue to be a shared responsibility: infrastructure security remains AWS’s responsibility, application security remains the customer’s responsibility. On the flip side, to your point, there are constraints because there are opinions. If you are running things AWS’s way, we will ask you to run certain things our way. For example, when you’re running within Lambda, scale isn’t part of the equation; we manage that on your behalf. You have to use a selection of languages and container images that we provide on your behalf.
Scaling behaviors are determined by Lambda. You don’t get to arbitrarily see what the machine under the box is. So what we try to do is offer that as a spectrum of choice. Even if you take the serverless operational model, where the goal is to delegate as much infrastructure as possible, we offer varying levels of delegation. You have Lambda at one end, where everything is delegated: code in, endpoint out, well-architected outcome in the box. You have services like ECS with Fargate, where customers are able to delegate the raw compute management; AWS manages the patching, host management, and all that for running your container infrastructure, but the customer has more flexibility and choice in how requests are getting placed and routed, et cetera. And you could even pick something like EC2, compared to the actual physical machines that you own in the data center, as another layer down, where you’re saying, hey, you manage everything above the virtualization layer, but we manage the physical infrastructure on your behalf.
And each of those comes with its own flexibility, which gives you more comfort and more of a dial on what workloads you can run on top of it. But the outcomes also then become more your ownership and responsibility. So it feeds back to that model of asking: which outcomes do you feel you need to be hands on the wheel for, for the good of your business? Which of those are you comfortable delegating down to AWS to manage on your behalf? That gives you the flexibility of running the workloads where you choose to. And we can talk about the specific workloads I see landing in various places, but that’s the broader mental model there.
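As a rough sketch of two points on that delegation spectrum, the snippet below, assuming placeholder names, role ARNs, and AMI IDs, contrasts handing Lambda a zip of code (everything delegated) with launching an EC2 instance you then own and operate. This is an illustration of the spectrum, not a deployment recipe.

```python
# Two points on the delegation spectrum, sketched with boto3.
# All names, ARNs, and AMI IDs below are placeholders, not real resources.

import boto3

# Fully delegated: hand Lambda the code; AWS owns scaling, patching, placement.
lambda_client = boto3.client("lambda")
with open("function.zip", "rb") as f:
    lambda_client.create_function(
        FunctionName="orders-api",                          # hypothetical name
        Runtime="python3.12",
        Role="arn:aws:iam::123456789012:role/lambda-exec",  # placeholder role ARN
        Handler="handler.handler",                          # file.function entry point
        Code={"ZipFile": f.read()},
    )

# Hands-on: pick the instance yourself; you now own the OS, patching, and scaling.
ec2_client = boto3.client("ec2")
ec2_client.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m7g.large",
    MinCount=1,
    MaxCount=1,
)
```

Fargate sits between these two calls on the same dial: the container and routing choices stay with the customer while host management moves to AWS.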
Steven Dickens: So it’s kind of my mental model with a sliding scale, almost, of how much you want to be hands-on and how much you’re prepared to delegate to get those outcomes. Are you prepared, at the furthest extreme, to put the code in and be completely hands-off? Or do you want to be as hands-on as possible, maybe setting some parameters on an EC2 instance, with your hands in the recipe, being involved? It’s a sliding scale between those two.
Ajay Nair: Sliding scale, right. And what’s funny is it’s a model that we’ve seen since the beginning of the cloud. We picked the other extreme with software as a service, where we said you are completely delegating the capability to the service you’re looking for. So in many ways this is very much another notch in that spectrum, where you go and say, I’m giving you application code; how that application code runs as a “service” in the cloud, you manage that particular outcome, and I will be responsible for how I compose those things together. And as with these things, the further you move up the spectrum, the less you’re doing. In theory, that translates to you moving faster, because all you’re doing is putting pieces together and delivering faster. The flip side is it’s a shared partnership: AWS is your trusted partner in delivering those outcomes for you.
Steven Dickens: So we’ve talked about infrastructure there. We’ve talked about some of the infrastructure components. We’ve talked about that sliding scale of partnering with AWS. But if we take a lens beyond infrastructure, and I know a little bit about AWS’s portfolio beyond infrastructure: you’ve got databases, you’ve got AI and ML platforms, there are IoT and edge capabilities going out into industry verticals, and we’ve got things like healthcare. Where do you see the serverless component fitting within that broadest lens?
Ajay Nair: So this goes back to what we were talking about earlier, embracing serverless as an operational model. I think what you’ve seen AWS really do over the last, again, decade that we’ve been doing this is bring that serverless operational model to more aspects of the portfolio that we offer. The general idea behind the operational model is to delegate as much as you can to AWS and own just the parts that differentiate your business. So if you look at innovations that we have done in the data management space with Redshift Serverless or even EMR Serverless, we’ve now offered flavors of those services that can run in a much more hands-off manner for customers, where operational responsibilities like scaling and others are taken care of, and customers can operate their data stores there as they see fit.
We already have, with DynamoDB, one of the original serverless databases, especially when used with its on-demand mode, where customers are able to delegate the scaling behaviors down to DynamoDB and use it as a key-value store or any other form factor that they see fit for NoSQL scenarios. So I think what you’re seeing in general is that spectrum model being applied to all aspects of the portfolio, starting from what I like to think of as infrastructure primitives up to application primitives. Infrastructure primitives tend to be the load balancers and instance types and network constructs and disk types and so on and so forth. And then you have application constructs like databases, queues, functions or services, workflows, and more. And AWS probably has one of the broadest selections, well, not probably, has the broadest selection of services offering those application primitives, between EventBridge, Step Functions, Lambda, DynamoDB, and all our managed services otherwise, Redshift and others, that are offering serverless options.
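As one concrete illustration of that delegation, here is a minimal sketch, with an invented table and key name, of creating a DynamoDB table in on-demand mode: with the billing mode set to pay-per-request, capacity scaling is delegated to the service rather than provisioned by the customer.

```python
# Creating a DynamoDB table in on-demand mode with boto3 (illustrative sketch).
# Table and key names are hypothetical.

import boto3

dynamodb = boto3.client("dynamodb")
dynamodb.create_table(
    TableName="sessions",  # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "session_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "session_id", "KeyType": "HASH"},
    ],
    # PAY_PER_REQUEST delegates capacity scaling to DynamoDB; there are no
    # read/write capacity units for the customer to provision or tune.
    BillingMode="PAY_PER_REQUEST",
)
```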
I think the next frontier of innovation then looks at the classes of applications that you’re calling out here, and there are more fit-for-purpose application classes that we can offer customers. We have done this now with IoT, where there’s a selection of services that allows customers to do all of that interplay with these services. There is a burgeoning collection across our AI/ML solutions, including capabilities like SageMaker, that wrap some other services as well as offer purpose-built capabilities for customers building and developing ML solutions.
One of my favorite examples is Amplify. It’s a smaller service, but what it does is offer customers an easy way to build a front-end experience for web apps. It’s a higher-level abstraction targeted at a specific application class, but underneath the covers it builds on top of Lambda, DynamoDB, and others to give customers that higher-level abstraction while getting the serverless operational model benefits that come with it.
So what you’ll see from AWS, again led by customers, is us offering that spectrum across all aspects of services and all the application primitives that we offer. And you will see that selection continuing to grow.
Steven Dickens: So we’ve had a wide-ranging conversation here. We’ve done serverless, we’ve done applications, we’ve done databases. I think for me, as we look to wrap up, a final question. How would you like your customers to think about Lambda and the serverless portfolio within AWS? If you had the ability to speak through this platform to those end users, what would the summation be? What would you like them to be thinking about as they embark on their AWS journey?
Ajay Nair: I think from AWS’s perspective, customers have actively embraced the serverless operational model and mindset. We have over a million active customers on Lambda. We have over two trillion invocations on EventBridge every month, which is our premier eventing service, and tens of trillions of requests on Lambda itself. So serverless adoption is at a point where it’s popular across large and small customers, across all classes of industries. For most customers, the way to think about it is serverless first. Serverless brings you the agility and cost-efficiency benefits that you expect from building on AWS. So start with the serverless operational model for any of your net-new development. And the thought process I’ll leave you with, again, is that innovation comes from speed, and speed means doing less. So do less, go serverless.
Steven Dickens: You’ve practiced that before. That’s not the first time you’ve said this.
Ajay Nair: That’s become sort of a pet phrase of mine for this one, and it truly reflects what I’ve heard back from customers on this. So it becomes an easy one to repeat.
Steven Dickens: I can’t think of a better way to summarize our discussion. So Ajay, thank you very much for your time.
Ajay Nair: I appreciate it. Thank you for the opportunity.
Steven Dickens: You’ve been listening to Infrastructure Matters, a podcast brought to you by The Futurum Group. This has been another insider edition. We’ll see you next time. Thank you very much for watching.
Author Information
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.
Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.
Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.
Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.