
Incorporating Generated Images into Adobe’s Firefly Model

Generative AI has enabled a wide range of interesting new capabilities, particularly in the creative industries. One of the most hyped is text-to-image generation, which lets a user type a prompt describing an image and have a generative AI model create an image based on the elements described in the prompt.

A number of companies have entered this market, but among the most high profile is Adobe, which released its Firefly image-generation software to much fanfare last year. The announcement was notable from a workflow perspective, as generative AI enabled creators and productivity workers to quickly generate images from a text prompt, and the capability has since been rolled out across Adobe’s platform.

The challenge, of course, is that it takes a massive amount of data and images to train these generative AI-based models. Other image-generation companies, including Midjourney, OpenAI, and Stability AI, built their media-generating models on datasets that pull images from around the internet, without paying royalties to the creators of those images. This practice opens these firms – as well as commercial organizations that use these generated images – to lawsuits from artists who say they are not being compensated properly for their work.

Adobe stressed that the technology was “commercially safe,” meaning that all of the images used to train the model either came from Adobe Stock, its database of hundreds of millions of licensed images, or were in the public domain. This became a key point of differentiation for Adobe, which promised to indemnify its users against any copyright infringement claims from artists.

However, recent news reports have revealed that Adobe also incorporated images from other sources and models to train Firefly, with generated images accounting for about 5% of the total number of images ingested. This happened because creators were allowed to submit images made with other companies’ generative AI technology to Adobe’s stock marketplace.

Toxicity and Bias, Accuracy, and Rightful Compensation

There are really a few different issues at stake when considering generative AI image generation. The first is toxicity and bias, which refers to ensuring that when a user types in a prompt to generate an image, the model does not draw on or reinforce age-old stereotypes or biases to create that image. A common example of toxicity would be a model that returned only images of African American men in response to a prompt for a criminal. Similarly, a model that incorporates pre-existing biases may generate an image of a mother, father, and two children when a user asks for a picture of a family, even though a wide variety of family structures are commonplace today.

However, the desire to keep older stereotypes and biases out of a model’s outputs has sometimes led to guardrails that go too far and overcorrect, resulting in historically inaccurate depictions of real people, places, or things. This can also occur when AI-generated images are incorporated into the training set, which may skew the model away from properly representing the world as it is when prompted to generate historical or real-world subjects.

That brings us to the Adobe story. The mere presence of AI-generated images does not automatically mean the model is flawed. However, it does raise two other issues: rightful compensation, and the need to continually maintain guardrails that keep generated content free of toxicity and bias while still reflecting the world as it is.

Adobe has said that its Enterprise plan indemnifies users in the event they are sued over content produced with Firefly generation tools. I don’t think this news will change that policy; Adobe continues to use indemnification as a key point of differentiation against its competitors, and I expect it will continue to have a dialogue with its creators to address fair compensation issues. I also suspect it will continue to review the inputs and outputs of its model to tune and refine its AI guardrails.

Buyers also seem to be confident that Adobe is taking the right approach. According to Futurum Intelligence’s survey of 222 AI decision makers, 14.4% indicated they were adding Adobe’s products in either late 2023 or 2024, trailing only vendors such as Google, Microsoft, IBM, and Cisco.

More Images = Better Models?

The incorporation of generated images within a training data set will remain a discussion item over the next several months and perhaps years. One view is that a broader and more diverse set of images can actually lead to better and more accurate models, particularly if the generated images are properly labeled. Others may take the view that Adobe should have been more upfront about the composition of its training data and more clearly explained how these images would add to the capability of its own models.
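
One way to make “properly labeled” concrete is to carry an explicit provenance flag on every image through the training pipeline and to cap the share of generated content in the final training mix. The Python sketch below is a minimal illustration of that idea, not a description of Adobe’s actual pipeline; the field names (such as is_ai_generated) and the 5% cap are assumptions chosen to mirror the figure reported above.

```python
# Hypothetical sketch: label AI-generated images in a dataset manifest and
# cap their share of the training mix. Field names and the cap are assumptions.
import random
from dataclasses import dataclass

@dataclass
class StockImage:
    path: str
    license_ok: bool       # image is licensed or in the public domain
    is_ai_generated: bool  # contributor flagged the image as AI-generated

def build_training_mix(images, max_generated_share=0.05, seed=42):
    """Return (path, is_ai_generated) pairs in which AI-generated images
    never exceed max_generated_share of the total."""
    licensed = [img for img in images if img.license_ok]
    generated = [img for img in licensed if img.is_ai_generated]
    original = [img for img in licensed if not img.is_ai_generated]

    # Cap the number of generated images relative to the originals.
    cap = int(len(original) * max_generated_share / (1 - max_generated_share))
    random.Random(seed).shuffle(generated)
    mix = original + generated[:cap]

    # Keep the provenance label attached so downstream training or auditing
    # can weight, filter, or report on generated content separately.
    return [(img.path, img.is_ai_generated) for img in mix]

if __name__ == "__main__":
    sample = [
        StockImage("photo_001.jpg", True, False),
        StockImage("photo_002.jpg", True, False),
        StockImage("gen_001.png", True, True),
        StockImage("gen_002.png", False, True),  # unlicensed: excluded entirely
    ]
    for path, generated in build_training_mix(sample):
        print(path, "generated" if generated else "original")
```

The same provenance label could also be used downstream to weight generated images differently during training, or to audit how much synthetic content a model was exposed to.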

Ultimately, we don’t believe that the use of generated images within its training data set makes Adobe’s claims of a more ethical approach completely inaccurate. It does, however, portend larger conversations around the value of using open model approaches, appropriate data sets and disclosure, licensing and rights, and the need for continuous improvement and training.

As a society, we continuously demand ongoing innovation and improvement, which often sits in direct conflict with the desire to ensure that appropriate controls and guardrails are in place to limit functionality in the name of safety and responsibility. Adobe has shown in the past that it is willing to tackle this issue head-on; failing to do so in the future will impact its ability to retain large enterprise customers, as well as land new logos.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Enterprising Insights, Episode 19: Adobe Summit Wrap-Up with Robert Kramer, Moor Insights & Strategy

Adobe Focuses on Productivity and Efficiency in Summit Announcements

Adobe Announces GenStudio, Its GenAI-First App for Marketing Teams

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances and by CNBC, Bloomberg, the Wall Street Journal, and hundreds of other outlets around the world.

Daniel is a 7x best-selling author whose most recent book is “Human/Machine.” He is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.

