
Adobe’s Use of Midjourney


The Six Five team discusses Adobe’s use of Midjourney.

If you are interested in watching the full episode you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: So Adobe took some heat this week. I think the story first broke on Bloomberg, but basically, Adobe’s whole pitch has been, “We’re really ethical. We train everything 100% using an ethical framework on our own stock imagery.” It turns out that someone was able to deconstruct the model and find that Midjourney, another model that’s used, not an LLM but an image generation AI tool, some of its imagery was used in Adobe’s Firefly. Pat, okay, so can I be a little bit, I’m going to be a little callous about this.

Patrick Moorhead: Do it.

Daniel Newman: Feels to me like, really? Really, everybody’s loving on some Sora, and here you’ve got OpenAI’s CTO being asked, “What did you train your model on?” “Publicly available data.” “Did you use YouTube?” “Uh…” And by the way, did I do a pretty good impression of the interview? I’m going to get the angle right.

Patrick Moorhead: You did.

Daniel Newman: All right. So anyway, they came back and basically nobody cares. In the end, she didn’t say they trained it on something else, but she didn’t really say they didn’t train it on YouTube data. So here we have a model that’s what, 5% apparently, that was trained on Midjourney, I think lower prioritized data in the overall model framework.

I think if we actually unpacked all the models that have been trained, we would be super disappointed to find out that everybody’s telling us something that’s not exactly correct. This doesn’t give Adobe a free pass. They tried to take the high road: we’re doing it, and we’re better in how we do things. And it’s never good when it comes back that you didn’t do what you said you did. But having said that, I also think there’s just such a huge gamut here. The vast majority, almost 95%, was trained exactly as they prescribed. I do think that this is a little bit of clickbait, a little bit of oh-my-god-ism. And even the headlines made it sound like Adobe used all Midjourney. It was a very small amount. I think we either need full transparency or we don’t. Meaning, if we’re going to do this and roast companies for what they’re doing, then we should look at all the models and how they’re trained.

I don’t think we want to do that, though. I don’t think people want to know how this happened. I don’t think a lot of people want to know how much of their personal and private data has probably been used, anonymized or not, to train these models. I think it’s a little eerie out there. Having said that, I do think Adobe is trying very hard to hold the line, be a bit more above board, be a bit more transparent in what they’re doing. It never works well when you say that, and then it comes out that you didn’t do exactly what you said. But I wouldn’t be surprised, Pat, and that’s why I’m callous about it, if we unpacked the training data sets for almost all of these models to find out that a lot of data from a lot of sources that would surprise us was actually used in the making of these models. So Adobe’s got a little cleanup to do, but I don’t think this is as severe as the headlines suggest.

Patrick Moorhead: Dan, I don’t even know what to take to the bank anymore. This was not something that I expected from Adobe at all, particularly since the company has a page called “Adobe Firefly versus Midjourney.” And the last thing on the page talks about community first, compensating Adobe Stock contributors, and being commercially safe for individuals and enterprise creative teams. Now, I believe that they’re doing all of those, but it was a little bit of a surprise, and Adobe was, in my head, the most pristine of the bunch because, at least the way I interpreted it, it was black and white.

“We’re using Adobe Stock footage that we’re paying contributors for, and you don’t have to worry about getting sued,” or something like this. So are we finding out that pretty much everybody is doing this? I think so. And unlike you, Dan, I would like to see what’s under the hood of all of these models. In fact, one of the things I gave a lot of credit to Salesforce, and IBM, for is that they gave their sources and their methods: what data they used, how they pruned it, and what the method of the output was. And to me that’s really good. I will bet you that Adobe’s getting some indemnification requests at this point. But we’re going to have to see.

And by the way, by comparison, with DALL-E 3 you have no idea what they trained it on. Remarkably, when you try to create a Yoda character, it comes back and it looks like a freaking Yoda character. Did they totally Hoover up the entire Disney workup? I don’t know. But if we’re looking at it from that angle, Adobe looks pretty darn clean. You cannot get Firefly to do anything that comes back looking like it came from licensed content, Disney content, or something like that.

Daniel Newman: That’s a great point, Pat. That’s why I said: these are trillion-parameter models, billions at least. This isn’t being QA’d by some fact-checker. It’s too big. There’s literally no… The only thing you can build is more AI to actually check the AI. I don’t know. There are going to be mistakes made, but if this is what we’re holding the world accountable for, which we should, I just hope we have some higher standards up the chain in other areas.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-5 globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC, Bloomberg, the Wall Street Journal, and hundreds of other sites around the world.

A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

