On this episode of the Six Five Webcast Enterprising Insights, host Keith Kirkpatrick shares his insights on Adobe MAX – the company’s annual conference that unveils new product innovations tailored for the creative community.
This discussion covers:
- Highlights from Adobe MAX and key product updates
- The introduction of new AI and generative AI capabilities in Creative Cloud applications, including Photoshop and Premiere Pro
- The critical importance of adhering to responsible AI development principles, especially regarding IP and licensing for commercial applications
- Adobe’s dedicated approach to incorporating customer feedback into its ongoing innovation efforts, ensuring it remains at the forefront of the creative software industry
Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: Six Five Webcast Enterprising Insights is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Keith Kirkpatrick: Hi, everyone. I’m Keith Kirkpatrick, Research Director with The Futurum Group, and I’d like to welcome you to Enterprising Insights. It’s our weekly podcast that explores the latest developments in the enterprise software market and the technologies that underpin these platforms, applications and tools. This week I’d like to talk about Adobe MAX, the annual conference focused on new innovations for the creative community that uses Adobe products. This is an event that happens each year, and this year is the first time they held it in Miami, Florida, after normally being out in Los Angeles. It’s a large event, I think it’s something like 10,000 attendees. And what I’m going to do is talk a little bit about the new announcements that came out of the conference that resonated with me, give you my overall impressions about the event, and then we’ll discuss what we’ll be watching from Adobe in the coming months. Then of course I’ll get into my rant or rave segment, where I pick one item in the market and I will either champion it or criticize it. So, let’s get right into it.
So Adobe MAX, as I mentioned, we were in lovely Miami Beach, which is a nice change from Los Angeles. And Adobe really took this conference as an opportunity to talk about some new features, but also to refocus its messaging on creators, who are the core of their user base. Essentially, when you think about creators, they’re the artists, the designers, the sound editors, the video creators within organizations that use Adobe products. Over the past, oh, I don’t know, probably a couple of years, there’s been a lot of messaging around what Adobe could do for enterprises. What it can do for things like content scalability, utilizing new technology to improve productivity and efficiency, and all of that. And of course, all of those messages are certainly resonating in the market, given this desire and this need to create more content more quickly across a wider range of variations, different regions, that sort of thing. But in some ways, I think the perception was that Adobe did not do a good enough job talking to their creators, talking to the people who actually utilize the product to create new art, to create new campaigns, that sort of thing.
So at this event, they explicitly reached out and basically said to their users, “Hey, we appreciate you. We are certainly working to make sure new products will not put you out of a job, but will help you do a better job, help you do things more efficiently, and help remove the boring, repetitive tasks that nobody wants to do,” and really let these creators focus on what they want to do, which is discovering new ideas and trying out new variations. Basically, focusing on the creation of new content as opposed to doing administrative tasks. So that was a really great message. It was repeated several times throughout the event, and it dovetails nicely with some of the new features and functions they were announcing at the show. They announced a ton of different things, more than I can go through here, so do check out the press room on Adobe’s website for a full rundown. I’m just going to go through a few things that really resonated with me.
AI obviously was the star of the show, as you could imagine. One of the things they announced is new generative AI features being incorporated into core Creative Cloud applications like Photoshop and Premiere Pro. And one of the things that was really interesting in Premiere Pro, which is their video editing application, is the ability to extend video clips seamlessly. Now, what do I mean by that? Let’s say you’re working on a campaign where you have a 30-second spot and you have various images coming in, you’re putting everything together, and then you realize that there is a five-second gap where you don’t have any content. You have music going underneath, you have a story going through, but the clips that you have just aren’t long enough in duration. Well, this new feature uses generative AI to extend that video clip so it looks like a seamless continuation of the action or motion within the clip, which really helps fill those gaps.
It’s an amazing technology. And the most interesting thing about it is, unlike a lot of generative AI tools or features that have been announced, not just from Adobe but in general, this is something that addresses an unmet need. If you think about it, video is really challenging because once you’ve shot it, it’s very difficult to go out and re-shoot original content. A lot of times you wind up having to reuse existing shots, or maybe you’re not necessarily using the best shots if you’re trying to fill gaps. This is a great way to allow a video designer to take a clip they really like and make sure that it fits the spaces that they have. So, really interesting feature there. The other thing of course is that Adobe announced a new Firefly video model, which is the underlying technology that allows video to be generated from a text prompt.
So if you think about the use of generative AI to create images, photos, that sort of thing, it’s very similar. This will allow the creation of video simply from a text prompt. And the most important thing to remember about it is not just the fact that it can be done, but that it’s being done in a safe way. And by safe, I mean that the video model is only trained on licensed content within Adobe Stock or on content that is royalty-free. That’s rare, and it’s really important, because if you think about the use of video in a commercial setting, it is absolutely critical that any content incorporated into a campaign is basically cleared. It is licensed and paid for, so there’s no possibility of infringing upon someone else’s work that was not credited. The other thing that’s particularly important is making sure that you’re not infringing on other IP rights.
Meaning, if you were to say, “I would like to generate an image of two people sitting down at a restaurant enjoying a soda together.” Well, the problem is that if you just train models on images from wherever, images that are not properly labeled, you may wind up generating two people sitting at a table drinking a soda, but because the imagery wasn’t vetted, that soda could actually have a brand on it, and that could be a real problem, particularly if the image is supposed to be used in any kind of commercial campaign. So the idea here is to make sure that the model is only trained on licensed content and that there are proper controls applied, so that when you ask to have a generic image generated, it doesn’t come back with branded content that you shouldn’t be using. So that was a really interesting announcement. There are a couple other things that really struck a chord with me. There are some new collaboration tools that were announced for Frame.io, Adobe’s application that allows folks to collaborate on video projects. So for example, if you were to record a video and you wanted your client to go through and make annotations, saying, “Make this section longer, cut this out, make this cut look more seamless,” Frame.io allows that collaboration to happen within a platform where the video is running. So it’s a real way to streamline that collaboration process with video.
Now, they also announced some new camera integrations that allow for direct upload from Canon, Nikon and Leica cameras. That’s really important because there’s a direct link in the field from the camera to the cloud, so you don’t have to go back to your studio and upload it. But even more importantly, these cameras also support the Content Authenticity Initiative, which incorporates metadata to show the source of the image. It also captures other data, such as whether an image was modified by AI or whether it is a composite. That sort of information is really important for many organizations, because if someone is interested, they can identify what is behind a particular image. It doesn’t necessarily mean an image is true or not, because that’s all based on context, but you can assess that, okay, the image was taken at this date, at this time, at this location.
Now, some of the other credentials in there can be modified. For example, there was a great example of where you might not want to have the photographer’s name attached. If, let’s say, they are working as a photojournalist in a conflict region, they may not want their name, address and phone number attached to that photo, because that could put them in danger. But at any rate, all of these new enhancements are designed to streamline the creation of content and make it easier for folks to work on that content together collaboratively. Now, there were a couple other announcements that were really interesting. Adobe announced GenStudio for Performance Marketing. This is a new application that leverages AI to optimize the entire marketing content creation process, from planning through measurement. What does all that mean? Essentially, if you think about content and using it for various campaigns these days, it’s not about the mass marketing approach that was taken several decades ago. The idea is that you want to reach the right segment of people, and then make sure that you’re appealing to them in a very personalized way, and all of that requires data.
And GenStudio for Performance Marketing is designed to provide one place where a marketer can go in, find the audiences that they want, segment those audiences, and then incorporate data from their own data sources to personalize campaigns, making sure they go out to the right segments with the right elements, all on a single platform. And of course, because it’s part of the Adobe ecosystem, it’s very easy to integrate and pull content from a variety of different places, whether we’re talking about the Adobe Experience Manager asset management system, the Content Hub, or assets that are being worked on in different applications. The goal is to provide a single platform that makes it easy to grab all those assets without having to make 1,000 copies where you’re not sure which generation is the right one. So, that’s an interesting announcement, and I’ll have more on that in subsequent podcasts. But ultimately, the main thing to remember is that Adobe is trying to make it easy for marketers and creators to work together, and then to quickly take all of those assets and activate them in marketing campaigns.
Now, the last thing that I’ll mention here that was really interesting is something called Project Neo. This is something we got a preview of; it’s not available yet. But it is a technology that allows the creation of 3D images in a 2D space. The goal is to make it easier for 2D artists to transition into creating 3D art, and I actually saw a demo of this on stage. It was really interesting because they were taking an image, incorporating different elements, and really allowing that creator to make very realistic 3D models that not only look great, but where each element can be controlled as a separate layer, allowing it to be reused, resized, repositioned, viewed from different angles, all of that kind of stuff. That is very important, particularly as core content assets are increasingly being reused in different media, both in static still imagery and in video. And the creation of 3D art and objects is key to unlocking additional eyeballs, because it is pretty interesting to look at.
Now, I think there are a couple of overarching things about this whole conference that are interesting, and I alluded to them earlier. When we think about Adobe, it is a very, very powerful platform, and it is obviously geared to enterprise use. And I think that’s really important to think about: what are the elements there? Well, obviously it’s robust and it’s scalable, and increasingly, if you think about the types of content being worked on and created, these files are really, really large. We’re talking about massive video files that, honestly, take a lot of time to move around. If you’re going to copy them, they take up a lot of space, and you can wind up having issues in terms of which generation you are working on and who is changing which elements.
So the idea that this is a single platform, where there’s one spot to go for an asset and people can go in and modify it, is a huge benefit to using an Adobe product versus some of the others out in the market. And I don’t mean just in terms of creation; I mean going beyond the creation part and actually getting into the activation part, which is looking at the marketing solutions. And again, does this matter as much for small organizations? Not currently, but I think it will in the future. And the reason is that with all of these tools, particularly generative AI, it’s going to become much more cost-efficient for designers and marketers to create not just a couple of dozen variations, but hundreds or even thousands of variations on a single asset.
So you can have different assets in different formats for things like Instagram, or Facebook, or the web, or text, or what have you, but then you could also have variations based on region and language. Then you can segment even further: maybe one asset is designed to appeal to a specific generation, or a specific gender, or what have you. The possibilities are almost endless in terms of how many different ways you could segment the market. Up until generative AI, it was almost impossible to create variations for all of that because it just took too much time. You’d have to have a designer manually changing all of those elements every time you wanted a variation. Now, with generative AI, you’re actually able to create variations at scale, and that is one of the core messaging points about Adobe and generative AI. It’s not just that it allows people to create things from, let’s say, a text prompt. It’s really about improving scale by allowing designers to create hundreds of variations on a single asset very, very quickly, instead of forcing them to do that manually.
Now, I’d also just mention an interesting point about using generative AI to create initial assets versus using it to create variations. I think we’re at an interesting point in the market right now, where very few creative teams are using generative AI as the genesis of their ideation phase. I think most are still doing it the old-fashioned way, where they have an idea, maybe sketch something out, and create the concept from their own mind. We might just start to see that shift a little bit. But I still think there’s a lot of pride and a lot of value in having ideas come directly from the minds of creators, instead of having them be somewhat derivative by using a generative AI text-to-image or text-to-video tool. We’ll have to see where the market goes, but that’s my sense now. I talked to several creators on site who also tended to agree with that view.
That being said, Adobe has incorporated tools that make it very easy to capture, let’s say, a hand-drawn sketch of a particular image and pull it into Adobe so it can be modified, reused, and changed, taking it from a static analog drawing to a piece of digital art where the different elements can be separated into layers and modified. So I think Adobe realizes that as well. What else? There is another thing I should address here: Adobe really focused on announcing a ton of things, more than I even have time to talk about today. I had attended a couple of Adobe events earlier in the year and spoke with some creators, and there definitely were people who said, “I like Adobe, but I feel that with some of their products, they have fallen behind some of their competitors in terms of the number of innovations and the functionality of their products.” Now, it’s always hard to validate whether that’s accurate, but sometimes perception is reality.
But I would say that Adobe did a great job of coming out and essentially overwhelming everyone with these new innovations. I think it was very important to do that. This is normally the event at which they roll out the dump truck and dump all the features that are particularly interesting or have a lot of wow value to them. And I think they needed to do that, because you do have rivals out there like Figma and Canva, which have continued to innovate, and in some ways are viewed as the more nimble upstarts. Now, that being said, I don’t think anyone is going to argue that certain Adobe products are clearly leaders in the market. Think of something like Adobe’s video offering, Premiere Pro. That’s clearly a very, very popular application. I forget the exact statistic, but it has a massive amount of market share; most professional designers are choosing that platform. And I think all they needed to do there was show that they were continuing to innovate and incorporate new features.
Again, the most important thing for Adobe, I think, is the fact that they have positioned themselves as the safe vendor for companies looking to do commercial work and incorporate images and video that may have some element generated through AI. They have a very strict approach: they will only train their models on content that has been licensed and is sitting in Adobe Stock, or content that is royalty-free, like public domain images. Now, they’ve had some challenges in the past, and they admitted it; the messaging around their terms of use was a little confusing. I think a lot of this is a combination of two things. One, Adobe could have been a little clearer when they revised their terms of service. And the other thing is that, just in general, sometimes the creative community will see something, not fully understand it, and then assume the worst.
And then of course, like anything else these days, they hop on their message boards and things go viral in the wrong ways. I still believe that Adobe is taking the right approach and is probably one of the safest, if not the safest, platforms for folks to use generative AI to create video, text, what have you. Now, that being said, if they wind up having issues again, with generated content violating IP or spewing out toxic material, Adobe is going to have to deal with that, and hopefully they’ve learned from past missteps. I think they have. But most organizations out there realize that Adobe is a leader in the space and has taken a very strong approach by saying, “When we use AI, we’re going to make sure from the very start that we only train on content that we believe is safe to use.”
This stands a little bit in contrast with some rivals. Canva, for example, takes a wider approach to ingesting content into its models, and then once content is in the model, they will go through and make adjustments to make sure that the model does not violate any IP or return inappropriate images, that sort of thing. And they claim that that’s been an effective way to deal with the issue. Again, it’s hard to say which approach is best, but generally speaking, it’s always easier to say, “We’re only going to use licensed imagery, and we’re going to make sure that we train and label our models from the get-go to avoid issues around IP violations, or toxic content, or inappropriate images,” rather than pulling everything into the model and then applying some sort of moderation after the fact.
The other thing that I think is really important here about Adobe is that they are one of those companies very much focused on making sure users are able to utilize their platform, which is very powerful and wide-ranging, in a very, very simple way. What they’ve done is make it so that if you are, let’s say, a designer, you can work in the application you like, whether it’s Illustrator or Lightroom or Photoshop. And once you’re done, you can save your work and it can be picked up in another application, through the DAM or what have you, and you don’t need to do anything differently. This enables creative people to work in the applications where they feel comfortable. It allows marketers to work in the applications where they feel comfortable. And there’s really only one original asset, instead of having versions copied all over the place. That is not just a hallmark of Adobe; it also points to a larger trend, which is meeting the user where they are and letting them do the work they need to do in the application where they feel most comfortable. I think that’s a huge win for Adobe, in terms of making it easy for folks to use the product where they want and how they want, based on the role that they have in the organization.
Now, with all that said, I did get a great demo of their Content Hub, which is very comprehensive, and I was asking about certain features. And yes, they have certain things that are still in progress and that I think are going to need some additional work, largely around incorporating generative AI features to make them smoother, and the ability to track where content is going. Just basic little things like that, nothing major. But the most important thing I got out of this is that they were willing to listen. They were willing to say, “Okay, that’s something we hadn’t thought about. We’re going to incorporate that.” And that’s something that, over the past year or so, I think Adobe’s gotten better at: really trying to listen to their customers as they move forward in the market, because they realize they’re getting more competition. So with that, if you want to learn more, I did publish a research note focusing on Adobe MAX. You can find it at Futurum.com. I’m also going to be revisiting several of these topics over the next several weeks and months.
All right, with that, I would like to move to my rant or rave segment, where I pick one item in the market and either champion it or criticize it. And today I actually have a rant. This is more of a customer experience rant, and it goes to the issue of chatbots. I was trying to inquire about converting a certain number of points into certain dollar values within my airline loyalty program. I tried to do it through the app using the little chatbot, and I tried several times; it still did not work properly. Now, I’m not going to identify the airline, but I can tell you it’s a major US carrier, and I don’t know which vendor they use for the chatbot. I’m just a little surprised, because my request was not terribly difficult. Essentially, I wanted to convert miles into points in the program as a rollover, and there was a way to do it by calling on the phone. But I was surprised I couldn’t get something this common handled through the app. The chatbot didn’t even recognize what I was asking for. It did, however, eventually kick me to a live human agent who was able to address my issue, and eventually it was taken care of.
But I was honestly surprised that, with all of the talk about generative AI and the use of chatbots as a way to deal with customer issues, it couldn’t handle something this basic. I hope this improves in the future, because ultimately, I did spend a fair amount of time on this task, and if you think about a customer like myself, a loyalty member with some status, it’s pretty surprising that they couldn’t address an issue like this. So that is my rant for the week. All right, well, that’s all the time I have this week, so I want to thank you for joining me here on Enterprising Insights. We’ll be back again soon to cover all of the happenings within the enterprise application market. With that, please be sure to review, rate, and subscribe to the podcast on your preferred platform, and we will see you again very soon. Thanks, and have a great day.
Author Information
Keith has over 25 years of experience in research, marketing, and consulting-based fields.
He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.
In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.
He is a member of the Association of Independent Information Professionals (AIIP).
Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.