Integration on AWS: Develop a Future-Proof Integration Strategy – The Six Five

On this episode of the Six Five, host Steven Dickens is joined by AWS's Nick Smit, Principal Product Manager, and Emily Shea, Worldwide Lead, Integration Services Go-To-Market, for a conversation on developing a future-proof integration strategy with AWS.

Their discussion covers:

  • The importance and impact of integration strategies in today’s cloud ecosystems.
  • AWS’s approach to facilitating seamless integration across diverse IT environments.
  • Best practices in developing and implementing a robust integration strategy.
  • How AWS enables organizations to innovate and scale with their integration solutions.
  • The future prospects for integration on AWS and the impact on global business operations.

Learn more at AWS. Watch our previous episode from this series here: Exploring the Future of AWS Serverless with Holly Mesrobian – The Six Five.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Steven Dickens: Hello and welcome to another episode of The Six Five. Today, we’re talking serverless technology with Emily Shea and Nick Smit from AWS. Thanks for joining me, guys. So let’s dive straight in. Let’s do some introductions. Emily, I’ll go to you first. Your role, what do you do for AWS?

Emily Shea: Yeah. So my name’s Emily Shea and I lead the application integration go-to-market team here at AWS. So me and my team work with some of AWS’s largest customers, specifically with a lot of our integration services.

Steven Dickens: Fantastic. And you Nick?

Nick Smit: Yeah. I’m a principal product manager for Amazon EventBridge. It’s a service in AWS that helps with integrations, so I get to learn more about that space as I work on EventBridge.

Steven Dickens: Fantastic. So we’re talking about integration today. We’ll dive straight in here. When organizations are moving to the cloud and starting to modernize, integration’s always part of the conversation. It’s a critical component. How do you see organizations thinking about integration in their migration and modernization efforts? You’re obviously chatting to lots of customers, chatting to your team. How are you seeing this play out?

Emily Shea: Yeah, so I work with a ton of customers that are in that process. Maybe they’re starting to do a migration to the cloud, moving from on-prem or doing a data center exit, or they’re in the cloud and they’re starting to do some modernization. And I do think that integration is really top of mind for those customers, and it’s often something that they want to think about getting right first. So some of the customers I talk to are doing a really large data center exit, and it’s seldom a one-step process of, all right, we’re going to move everything into the cloud all at once. Oftentimes it’s a situation where they have a couple of business-critical applications they’re going to move over, but they might have a database or some other system that they want to keep on-prem, maybe for compliance reasons, or it’s not something that they want to prioritize moving right away.

And so they really prioritize getting a strong integration solution set up in the cloud, because they know that solution is going to sit at the heart of everything: as they move applications over one by one, or build new ones on top, those applications are still going to need to integrate with all the different systems at whatever stage of migration and modernization they might be in.

One of the customers that I work with that I think exemplifies this really well is the Driver and Vehicle Licensing Agency (DVLA) here in the UK. I had the pleasure of speaking with them at the AWS London Summit this past week, and they have an amazing story: they’re the kind of customer that has had this incredible technology evolution, starting with paper driver licenses and punch cards on early computers and moving all the way to today, using AI/ML and AWS services and doing some really cool stuff.

And so as part of their transformation and modernization, they’re doing a lot of work to update their driver license application system and make sure that it’s really extensible and can easily scale and extend to new business requirements, but they still have systems running on the mainframe that they need to connect to. So this is the kind of customer that is going to need a really strong integration solution: one that can connect to the really modern AI/ML workloads they’re doing, maybe some image recognition or image adjustments on the photos coming in with driver’s license applications, and also connect back to the mainframe system. And nearly all the customers that I talk to are in this type of scenario, with really heterogeneous applications that they’re connecting.

Steven Dickens: Nick, it’s interesting there. One of the things Emily said that sort of resonated with me is that mission-critical piece. It could be the crown jewels application. But the other thing that struck me was that AI and ML workloads also need to be integrated. Are you seeing that same sort of key theme come through in your conversations?

Nick Smit: Yeah, Steven, I think whenever we’re looking at these sorts of areas of technology, a really important question to ask is: why is this important now? What’s the reason I care about this today? And I think there are a couple of trends happening that are making integrations increasingly important for customers. Emily touched on that movement from your on-premises environment to the cloud. That’s a big component of this. I think we see customers, as they do that, start to actually break up their monolithic applications into these microservices, these purpose-built applications for a specific need.

That’s really great. It allows your teams to be more independent. It allows you to separate out the different parts of your business into these purpose-built applications. But the need that arises out of that is this need to connect them all together. Right. Then we also see a lot of folks embracing SaaS applications, these purpose-built apps for a particular need of their business. Right. It might be Stripe for your payment processing, you might use Salesforce for your sales and marketing, Zendesk for your support ticketing. So these are already fantastic applications that give you best-in-class capabilities for a particular vertical in your business. But again, you now have all of these different applications that you need to integrate.

So I think those trends have existed for a while and are increasing the importance of this particular area. But the one that is really accelerating it is this GenAI side of things. Right. And so I think a lot of folks have become really excited about the potential of GenAI and large language models, how they can incorporate these to kind of add more intelligence to their business. But what we find is that for those language models, for those GenAI agents to be effective, you really have to bring your data to them and enrich that data or that interaction with your data.

So for example, you might have something that’s going to help your customers with support ticketing. There’s no real value in an agent being able to answer questions if they don’t have context on a particular user. They don’t know what order was placed recently or if they’re trying to make a payment, what the errors are. And so I think that’s where again, integrations are critical because they help you bring this data to your AI agent and ensure that they are kind of contextualized with that data.
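
To make that pattern concrete, here is a minimal Python sketch of the kind of enrichment Nick describes: pull context from the systems you have already integrated, then hand it to the model alongside the user’s question. The data sources, field names, and prompt format below are hypothetical stand-ins, not a specific AWS or vendor API.

```python
import json


# Hypothetical stand-ins for real integration calls (an orders API, a
# payments provider such as Stripe); in practice these would call the
# systems you have already connected.
def fetch_recent_orders(user_id: str) -> list[dict]:
    return [{"order_id": "A-1001", "status": "shipped"}]


def fetch_payment_errors(user_id: str) -> list[str]:
    return ["card_declined on 2024-05-01"]


def build_support_prompt(user_id: str, question: str) -> str:
    """Enrich the model prompt with customer context pulled via integrations."""
    context = {
        "recent_orders": fetch_recent_orders(user_id),
        "payment_errors": fetch_payment_errors(user_id),
    }
    return (
        "You are a support assistant. Answer using the customer context below.\n"
        f"Context: {json.dumps(context)}\n"
        f"Question: {question}"
    )


print(build_support_prompt("user-123", "Why did my payment fail?"))
```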

Steven Dickens: So I mean, the way I’m listening to that is we’ve had this challenge of, you mentioned Salesforce, you mentioned Zendesk, some of these applications, they’re not new, they’ve been installed 10, 15 years, and the integrations between those have been a challenge. But I think what I hear you saying is it’s now an increasing challenge on top of that, because you’ve got AI workloads maybe landing in production for the first time. Is that what you’re seeing as some of the challenges these organizations face? It’s do the traditional stuff that’s always been hard, but then layer on top of that a new workload landing for the first time with AI, and how do you integrate for that type of workload? Maybe I’ll come to you first, Emily: is that what you’re hearing?

Emily Shea: Yeah, so like you said, there’s always been a bit of a challenge of getting all these very disparate systems to work together and to get business value out of the data and events that are sitting in all of them. I think what’s different is that now there’s this really compelling force: there’s so much innovation, and it’s an increasingly competitive space to get that really fantastic GenAI use case out there while all the other folks in your industry are looking around trying to figure that out as well. And so it’s really the push and the drive to say, hey, let’s figure this out and get these things talking together, and do it really quickly, because things are moving incredibly fast at the moment with all the innovation that we’re seeing out there.

Nick Smit: I think a good example of this, just thinking of a particular customer is the NFL. So we recently had an engagement with them where they have thousands of players who play across the league and they need to generate photos or get photos of each of these players to show on television, to show on the website. And of course with thousands of players and different teams all around the country, you’re going to have different kind of framing of each player. They’re going to have different zoom levels. So to get a kind of consistent look across all of these thousands of players is actually really tricky.

And so in the past that was a human-based task for them. Someone would literally sit there and kind of do the right cropping, the right framing, but they’ve realized with Adobe’s Photoshop AI APIs, they’re able to actually automate a lot of this. So on the face of it that seems like just a kind of programming challenge. Right. I just need to write some code to call these APIs, transform these images. But it turns out when you get into it, that’s really an integration challenge because you have to call these APIs, they’re asynchronous, they don’t respond immediately, so you have to wait for them to finish and get some callback that says to you, okay, this one’s completed.

You then need to go and call another API to transform the image slightly differently, and you have to upload the results into different S3 buckets. So there are a lot of different parts to that seemingly simple application. I think the interesting thing here is that there are all these opportunities to automate more, to bring more intelligence to what you’re doing, but it does require these integrations to actually make it work.
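
As a rough illustration of why this turns into an integration problem rather than a single API call, here is a minimal Python sketch of that workflow shape: submit an asynchronous job, wait for it to complete, then land the result in S3. The image-API endpoints and response fields are hypothetical placeholders (not the actual Photoshop APIs), and in production a workflow service such as AWS Step Functions would typically handle the waiting, retries, and callbacks.

```python
import time

import boto3
import requests

s3 = boto3.client("s3")

# Hypothetical endpoints standing in for an asynchronous image API; the real
# services have their own request shapes and authentication.
SUBMIT_URL = "https://api.example.com/crop-jobs"
STATUS_URL = "https://api.example.com/crop-jobs/{job_id}"


def crop_and_store(image_url: str, bucket: str, key: str) -> None:
    # 1. Kick off the asynchronous transform; the API returns a job id,
    #    not the finished image.
    job_id = requests.post(SUBMIT_URL, json={"source": image_url}).json()["jobId"]

    # 2. Poll (or, better, handle a callback/webhook) until the job finishes.
    while True:
        job = requests.get(STATUS_URL.format(job_id=job_id)).json()
        if job["status"] == "succeeded":
            break
        time.sleep(5)

    # 3. Fetch the result and land it in the right S3 bucket for downstream use.
    image_bytes = requests.get(job["outputUrl"]).content
    s3.put_object(Bucket=bucket, Key=key, Body=image_bytes)
```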

Steven Dickens: So I think anybody who’s listening to the first 10 minutes of the podcast is going to get that as a problem statement. It’s these traditional systems; once it’s more than one system talking to another to deliver an application, the problem becomes exponentially bigger and harder. You layer AI and the speed of information, Emily, that you talked about on top of that, and I think people are going to get that challenge if they’ve been anywhere near this space. I think maybe the question, we’re 10 minutes in, is where does AWS play in that? People would understand the problem statement, and they would certainly know AWS as a sort of infrastructure provider, if you will. Where do you play in that integration space? I’ll go to you first, Emily.

Emily Shea: Yeah, absolutely. Maybe just to add on to the problem statement first: I definitely think the variety of different applications that you need to connect is a big part of it. The other things that come to mind, that I hear from customers a lot, are around some of those existing integration solutions they might be using today, something a little bit more off the shelf. They need specific skills to interact with them, so there are only particular people that have the skill set to build with them. But the other big one is that I think a lot of the integration solutions that are out there come with some pretty steep licensing costs.

And so that’s one that I hear from customers a fair amount. But then the way that I see customers building their integration solutions on AWS is using a whole combination of the different purpose-built services that AWS has for integration. So that’s across the spectrum of application integration, with events, messages, APIs, and workflows; file integration is a big one; and then also data integration, so maybe streaming, maybe ETL. Across all of these, AWS has a bunch of services with built-in features and native integrations with the rest of the AWS cloud and the applications that you might be building. And the way that I see customers doing a lot of this is really taking the integration solutions that they need and building them into their application development practices.

So maybe they have a central cloud team or a central architecture team that sets out and says, okay, these are the most common integration patterns that we need to use across our organization. And they take those and build them out with all of the guardrails and maybe the security or compliance requirements that the company might need. And then they make them freely available to the developers that are building applications, and those developers can go and choose the integration pattern that they need and know that it’s already set up with the guardrails in place. So I think it’s this combination of being able to move very quickly, give that autonomy to developers to build and choose the integration that they need, and work that into the application development workflows and processes that they have for any application.
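
One way to picture that developer experience: once a central team has provisioned a shared event bus with the right guardrails, the application team’s side of the integration can be as small as publishing a well-formed event onto it. Below is a minimal Python sketch using Amazon EventBridge; the bus name, event source, and event shape are illustrative assumptions.

```python
import json

import boto3

events = boto3.client("events")


def publish_order_placed(order_id: str, total: float) -> None:
    """Publish a business event to a shared, centrally governed event bus.

    The bus name and detail schema here are assumptions for illustration;
    in the pattern described above, a central team would own the bus, its
    access policies, and its schema conventions.
    """
    events.put_events(
        Entries=[
            {
                "EventBusName": "shared-integration-bus",
                "Source": "orders.service",
                "DetailType": "OrderPlaced",
                "Detail": json.dumps({"orderId": order_id, "total": total}),
            }
        ]
    )
```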

Steven Dickens: Shift it left almost. To shift it left.

Emily Shea: Yes, absolutely. But then also make sure that you have that security baked in because obviously you don’t want it to be a complete free for all. You want to be able to make sure that everything that you deploy is very much meeting the requirements that your company has. And so I’ve seen that balance work out very well.

Steven Dickens: And yourself, Nick, are you seeing that same trend? I think that makes sense there, but are you seeing that same trend?

Nick Smit: Yeah, exactly. I think the customers that I talk to are really interested in empowering their developers to do these integrations. And I think there are a couple of reasons for that. The first is just that the integrations between systems are now production-level. You rely on these integrations like you do your database being up, or whatever other part of your system. And so the idea of offloading this to an IT admin or a line-of-business user works if it’s just a convenience-type integration, but we’re now at the level where these are production systems.

We have to ensure that there’s high scale that can go through them, that we have great practices for tracking changes to them, and that we have the uptime that we need for something of this critical importance. And so I think that’s where customers are excited about being able to use AWS for their integrations, because they know that they can rely on us for security, availability, and the different regions that you can be in. All of those capabilities are things that customers already need, and they see that using the same practices their development teams have for building systems and offering functionality to their customers is a key way to build their integrations, so that they achieve the resilience, security, and availability these integrations require.

Steven Dickens: So Emily, coming back to something you mentioned, the cost of some of those traditional legacy integration platforms. Are customers considering AWS as an iPaaS type of vendor, and how is that sort of commercial competitive landscape looking for you guys? Because I think what Nick was saying is that people are looking at AWS as that kind of home for these applications, but maybe they’ve got some legacy investments, maybe they’ve got some existing partners that they’re working with. How are you seeing that competitive landscape play out?

Emily Shea: Yeah, so I definitely see that in a lot of my customer conversations; cost is definitely a factor and something that is very top of mind for everyone thinking about this problem. And what I’m seeing is that for those customers that do have the interest and a bit of that in-house development skill set, whether they’re already familiar with building modern applications on AWS or in the process of building up that skill set, they’re able to build the integration use cases that they need with these AWS services. And a lot of the AWS services are built with that serverless operating model.

So they’re able to pay only when their code is running or when they have events or data to process; they’re able to scale up to handle a sudden peak and then also scale back down to zero. And so with all of that, and the managed features and the operations that they’re offloading, they’re finding that they’re able to achieve a lower total cost of operations, or cost of ownership, than something that they might have found elsewhere.

Steven Dickens: Is that driven by, the comment that sort of sticks for me there is the serverless piece. Is that because you’re sort of abstracting away the noise of the infrastructure piece and it’s the development teams and the application teams can work where they kind of see the value, which is kind of, “I need to plug these two business services together or these two or three business services together to build a greater whole. I really don’t want to care about the server and storage and the networking connectivity below it.” I mean, I see you nodding here, Nick. Is this what you’re seeing as well?

Nick Smit: Yeah, absolutely. I think what we hear from our customers is that serverless is the best way to build integrations, especially when you’re coming to these things new and you don’t already have existing ways of building this. Coming fresh to serverless allows you to really get the best out of not having to manage that underlying infrastructure. Just going back to the point about total cost of operation, I think serverless helps dramatically in terms of reducing the operational load that you have to handle from a team perspective. But the other part of it is just, and Emily touched on this earlier, accessibility to skills. So if you go on to one of the big job sites, Indeed or whatever it is, and you search for Python developers or JavaScript developers, you’re going to find hundreds of thousands of those, right?

But if you’re looking for someone that has more specialized expertise in one of the integration platforms out there, that’s a much smaller market. And so I think another component of considering total cost of operation is just who is actually working on these integrations. Are these folks that you have in your company already that you can move over to help work on these integrations? Or is it someone entirely net new that you have to hire, or a consulting company that you have to outsource to and pay for? And I think that’s another big component of where customers see a dramatic cost reduction: just being able to reuse all of that skill set that they already have on AWS.

Steven Dickens: So as we start to wrap up here, I’m going to come to both of you: crystal ball, you get to look into the future. Where do you see integration going? I’ll go to you first, Emily. Take a step back from what you see over maybe the next six, nine, even 12 months and give us that sort of broader lens. Where do you see some of the big themes?

Emily Shea: Absolutely. Yeah. Well, the one that is super interesting to me is just the fact that there’s so many customers that are still looking at how to get onto the cloud and how to build modern applications on the cloud. And so I think that there’s a tremendous amount of room for new improvements in the way that they can get into the cloud quickly, build that integration solution that is going to enable them to start building new modern applications very quickly while still staying connected to their existing systems. And I think that that area is one that there’s a ton of potential in and a ton of room for people that want to move a lot faster there.

I’m also seeing a lot of exciting things when it comes to what customers are already starting to build today with generative AI and integration; obviously there’s a huge wave of interest and excitement around that, and I’m seeing some really cool stuff that customers are getting into production with integration services. I think the two big patterns that I’ve observed are, first, customers building event-driven architectures. So maybe they have a model endpoint that they want to call, and the model endpoint can’t scale as much as the application that is processing those requests. And so they need to buffer those requests to the model endpoint with asynchronous events or messages. I think that’s a really big one.
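
A minimal Python sketch of that buffering pattern uses Amazon SQS as the queue between the application and the model endpoint. The queue URL and message shape below are assumptions, and in practice the consumer side would often be a Lambda function with the queue configured as its event source.

```python
import json

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL; the queue sits between the application and the
# model endpoint so bursts of requests are buffered rather than
# overwhelming the endpoint.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/inference-requests"


def enqueue_inference_request(prompt: str) -> None:
    """Producer side: the application drops requests onto the queue."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"prompt": prompt}))


def drain_queue(invoke_model) -> None:
    """Consumer side: a worker pulls messages at the rate the endpoint can handle."""
    messages = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10
    ).get("Messages", [])
    for msg in messages:
        invoke_model(json.loads(msg["Body"])["prompt"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```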

The other one that I see a lot of is customers using workflows to have some control and orchestrate the calls they’re making to endpoints. So customers might’ve started out by calling a single model, and now they’re saying, “Hey, I can actually get better results if I make different calls to models with different context or different information I’m sending to them.” A cool example that we heard at the AWS London Summit was TUI.

They’re a big travel company, and they found that they were able to enhance their content generation process by first calling Llama 2 to generate a hotel description, and then taking what was generated and passing it on to another model, Claude 2, to format and polish it up before pushing it out. So orchestrating calls to models in more advanced ways is one of the really cool things that I’m starting to see emerge, and I foresee us seeing a lot more of it in the future.
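
Here is a rough Python sketch of that two-step orchestration, calling one model and feeding its output to a second via Amazon Bedrock. The model IDs and request/response payload shapes are assumptions that vary by model version and region, and in the pattern described above the chaining would often live in a managed workflow (for example AWS Step Functions) rather than inline code.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")


def generate_description(hotel_facts: str) -> str:
    # Step 1: draft a hotel description with a Llama 2 model.
    # Model ID and payload format are illustrative; check the Bedrock docs
    # for the exact shapes supported in your region.
    draft_resp = bedrock.invoke_model(
        modelId="meta.llama2-13b-chat-v1",
        body=json.dumps(
            {"prompt": f"Write a hotel description: {hotel_facts}", "max_gen_len": 400}
        ),
    )
    draft = json.loads(draft_resp["body"].read())["generation"]

    # Step 2: pass the draft to Claude 2 to polish the tone and formatting.
    polish_resp = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps(
            {
                "prompt": f"\n\nHuman: Polish this hotel description:\n{draft}\n\nAssistant:",
                "max_tokens_to_sample": 400,
            }
        ),
    )
    return json.loads(polish_resp["body"].read())["completion"]
```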

Steven Dickens: And you Nick? I think Emily’s probably stolen half of your thunder there, but what are some of those trends and things you’re seeing?

Nick Smit: I would say the easy bet is to talk about GenAI here. So in terms of being right, I suspect that’s a great call, Emily, but the one that I’m really interested in is these integrations moving to a more real-time nature. I think about cases where I phone up a company and they know my phone number, they know context about me. And so instead of having to choose a prompt or speak to someone about a particular problem, I simply get context as I call that says, “Hey, are you calling about this, or about something else?” The same thing applies as I use applications: I find applications are so much more immersive if they have the right context about me in them. We see it with the various social media platforms, whether it’s TikTok or Instagram; as you swipe, in real time, the things that you watch are determining the next video that you consume after that.

And so I think that where I’m really interested in the integration space is around being able to bring this contextual information in real-time to an experience that a user is having. And I think that as we do more of that, as we’re able to bring information from different places as it happens, we’re going to start giving customers a more personalized, more kind of immersive experience in the applications that we build. And so I think GenAI is a fantastic way to kind of bring more smarts to things, but I would just love to see more context, more immersive experiences that draw on all of the information that businesses already have.

Steven Dickens: Well, it’s a real shame that we’re, what is it, 22 minutes in, and I’ve got to start to bring us home. I feel like I could keep talking to you guys for at least another hour. I think the vision that you paint around where we’re going to go with GenAI and some of that contextual information is going to enrich both end users and where the market’s going in general. So I really appreciate you joining the show. Shame we’re only able to talk for 20 minutes or so here, but thanks very much for joining us.

Nick Smit: Thank you.

Emily Shea: Thank you very much.

Steven Dickens: So thanks for joining us for another episode of our Serverless series with AWS. Please check out those other episodes and we’ll see you next time. Thank you very much for watching.

Author Information

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.
