
What Does Commercially Safe Generative AI Even Mean? Here’s Adobe’s Answer

by admin

By now, we’ve all heard the gospel of AI and how it will change everything as we know it.

And yet, for every CEO who has found efficiency gains, there are two others who have been burned by a misaligned deployment or an AI vendor failing to deliver on their promises.

AI may be transformative, but like every paradigm-shifting technology, it faces its share of challenges when it comes to commercialization.

This is particularly true for Generative AI (GenAI), which comes with a bright future and a dark past thanks to how many of its systems are trained.

Leveraging the endless bounty of online content to train GenAI definitely makes it powerful, but it also creates a minefield of legal and ethical concerns that can stand in the way of commercialization.

From copyright violations to reputational risks and AI agents that offer unauthorized discounts your company is on the hook for, the road to deploying GenAI in a commercially safe manner is nothing if not bumpy.

And if you’re looking for the government to help, you had better settle in for a long wait. Legislators around the world are only just catching up with today’s internet realities, and they are years, if not decades, away from adjusting to a GenAI future where everything that leaves a trace online could be fodder for a neural network.

Yet, the demand for GenAI tools will only grow stronger, making the question of safe commercialization an increasingly urgent one to solve.

There will be many candidate answers to choose from, and Adobe is among the first to chart a path toward one with Adobe Firefly, a family of generative AI models through which the company balances progress with guardrails and responsible innovation.

Here’s how Adobe took a strategic swing at solving one of GenAI’s biggest challenges.

Solving for Trust Is the First Step Toward Commercially Safe Generative AI

After more than a year of relentless hype about AI, one may rightfully wonder why its impact on our daily lives remains relatively limited.

Beyond ChatGPT in the browser and the handy “summarize this” or “draft me that” buttons sprinkled across our day-to-day platforms, AI’s presence has yet to match its revolutionary promise.

The reason is something far simpler than the technology undergirding the AI industry: trust.

As Sen Ramani, Accenture’s global lead for Data & AI, puts it, “The only limiting factor between us and the AI future we can already peer into is trust.” This was also the theme of Accenture’s 25th annual Technology Vision research recently released at CES.

Foundational models can dazzle with their technical prowess all they like, but there’s little developers can do as long as businesses hesitate to pull the trigger on adoption, something most won’t even consider without safeguards for privacy, intellectual property compliance, and ethical transparency.

Without these guarantees, even the most advanced AI innovations risk being relegated to pilot projects or abandoned altogether. This aligns with Accenture’s findings, which show that only 36% of companies have scaled GenAI solutions and just 13% have achieved enterprise-level impact. This presents a cultural and organizational challenge many GenAI developers have yet to fully address.

Sen expands on this: “AI needs to be safe, observable, and accountable to its users. You can’t just deliver technical capabilities and expect them to succeed purely on their merits. Trust isn’t a bonus here; it’s the foundation of adoption.”

This is why organizations must offer transparency into what their AI systems are doing, how they’re doing it, and why, something AI companies like Articul8 are addressing by embedding observability into the core of their offerings.
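To make “observability” concrete, here is a minimal sketch in Python of what embedding it around a generative call can look like. Every name in it (AuditRecord, observed_generate, the stand-in model function) is a hypothetical illustration, not Articul8’s or any vendor’s actual API; the point is simply that every call leaves a record of what was asked, how the model was configured, and what came back.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    """One observable trace of a generative call: what was asked, how, and with what result."""
    request_id: str
    timestamp: float
    model: str
    prompt: str
    parameters: dict
    output_preview: str

def observed_generate(model_fn, model_name, prompt, **params):
    """Wrap any text-generation callable so every call leaves an auditable trace."""
    output = model_fn(prompt, **params)  # the underlying model call, whatever it is
    record = AuditRecord(
        request_id=str(uuid.uuid4()),
        timestamp=time.time(),
        model=model_name,
        prompt=prompt,
        parameters=params,
        output_preview=output[:200],  # preview only; full output can be stored elsewhere
    )
    print(json.dumps(asdict(record)))  # in production: ship to a log/trace pipeline, not stdout
    return output

# Usage with a stand-in model function:
fake_model = lambda prompt, **kw: f"Generated reply to: {prompt}"
observed_generate(fake_model, "demo-model-v1", "Summarize this contract.", temperature=0.2)
```

The design choice worth noting is that the trace is produced unconditionally, at the same layer as the call itself, so transparency does not depend on each application remembering to log.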

Principles such as transparency, observability, and accountability are critical because, as Sen notes, “We’re still in the early stages of agentic workflows, and the gap between what AI can do and what businesses are ready to deploy is wide. Bridging that gap starts with going back to first principles such as these. Without them, even the most promising AI models will struggle to make the leap from labs to boardrooms.”

The importance of trust becomes even clearer when we look at historical parallels.

Afif Khoury, CEO of SOCi, draws on the lessons of the big data boom.

“We’ve seen this before,” he reflects. “It was never about just having the data; success came from building actionable systems that meet client needs while addressing a number of other concerns about safety, security and privacy.”

The same dynamic applies to AI today: simply having powerful models isn’t enough, and the companies that take more than just the CliffsNotes from that era will succeed.

Afif observes this dynamic most keenly in the ability of smaller players to upset giants asleep at the wheel: “If the economy were a race with a handful of people ahead, GenAI has hit a reset button. A brand-new player could come up right now without the burden of hundreds of millions sunk into the past – now antiquated – way of doing things, and as long as they solve for adoption by building trust, the game is theirs.”

This reset levels the playing field but also raises the stakes for the industry’s incumbents. The companies that solve for trust will gain a critical edge, while those that ignore it risk being overtaken by leaner, more agile competitors.

Perhaps this is why we see industry-defining giants like Adobe forge ahead with such pace when it comes to safe commercialization.

Rising to the Challenge of Commercially Safe GenAI: Adobe Firefly

For decades, Adobe has defined the creative tools industry, landing a spot on the coveted list of companies that have had their products or brand names become verbs.

Whether it’s Photoshop, Premiere Pro or Acrobat, Adobe’s products have essentially been the gold standard for decades, but the competitive landscape is shifting quickly.

Disruptors like Canva and Figma, which Adobe failed to acquire, have carved out territory by focusing on accessibility, collaboration, and speed.

This growing competition poses a unique challenge for Scott Belsky, Adobe’s Chief Strategy Officer, who took the role with a series of number-one hits already under his belt, including Behance, several books, and more.

With a career defined by navigating industry disruption in the creative fields, Scott understands the stakes better than most.

“The creative tools market is evolving faster than ever,” he explains. “Becoming complacent would be a death sentence no matter how far ahead you might be, and if anything, GenAI has spurred our team to run faster than it ever has.”

One of the latest products Scott has helped usher in at Adobe is Firefly, the company’s homegrown family of creative generative AI models that powers features like text-to-image and video generation across Adobe’s products.

The team approached the task of developing Firefly with a clear vision: to build a GenAI tool their clients could feel safe in adopting.

Scott recounts the direct feedback Adobe received from a customer in a meeting exploring their appetite for GenAI: “Our client told us outright, ‘We’d never use generative AI that’s trained on content from competitors or unlicensed sources. The risks are too high.’”

This concern became the cornerstone of Firefly’s development and serves as a stark reminder of the importance of trust to all those who are building GenAI tools regardless of the industry.

“When creating Firefly, our north star was trust. We wanted to create a system that empowered creators without introducing the kinds of risks that could undermine their work or their confidence in the tools they use,” Scott explains, adding that Firefly is trained exclusively on content Adobe has permission to use: licensed material such as Adobe Stock and public-domain content whose copyright has expired, and never on customer data.

Scott highlights the importance of this decision: “The creative community is our foundation, and using unlicensed or unauthorized material would have gone against everything Adobe stands for. We had to ensure Firefly wasn’t just innovative for innovation’s sake; it had to be a product that is safe, ethical, and commercially viable as well as a game-changer.”

This commitment to safety didn’t come without challenges.

Developing a generative AI model under these constraints required rethinking traditional approaches to AI training, which have mostly relied on a ‘break things first, apologize later’ mindset.

Adobe relied heavily on Adobe Stock, its vast library of licensed content, as the foundation for Firefly’s training data. Scott notes, “We knew this approach would limit the model in some ways, but that’s a trade-off we were willing to make. In fact, the limitations we imposed are the defining features of Firefly, which will never generate an image of a competitor’s product or someone else’s work of art.”
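To illustrate what that trade-off can look like in practice, here is a minimal sketch in Python of the kind of provenance filter that admits only licensed or public-domain assets into a training corpus. Adobe has not published Firefly’s actual data pipeline, so every name and field below (Asset, APPROVED_LICENSES, admit_for_training) is a hypothetical illustration of the principle, not Adobe’s implementation.

```python
from dataclasses import dataclass

# Licenses this hypothetical pipeline accepts; everything else is excluded by default.
APPROVED_LICENSES = {"stock-licensed", "public-domain"}

@dataclass
class Asset:
    """A candidate training asset together with its provenance metadata."""
    uri: str
    license: str           # e.g. "stock-licensed", "public-domain", "unknown"
    is_customer_data: bool

def admit_for_training(asset: Asset) -> bool:
    """Admit an asset only if its provenance is known and approved.

    Customer data is always excluded, and anything with unknown or
    unapproved licensing is rejected rather than waved through.
    """
    if asset.is_customer_data:
        return False
    return asset.license in APPROVED_LICENSES

# Example: only the first asset makes it into the corpus.
corpus = [
    Asset("stock://12345.jpg", "stock-licensed", is_customer_data=False),
    Asset("web://scrape/67890.jpg", "unknown", is_customer_data=False),
    Asset("cloud://user/upload.png", "stock-licensed", is_customer_data=True),
]
training_set = [a for a in corpus if admit_for_training(a)]
```

The notable design choice is the default-deny posture: anything with unknown provenance stays out, which is precisely the property that makes the resulting model safer to ship commercially.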

The approach Scott and the team landed on hints at a deeper truth many forget in the GenAI gold rush: guardrails are much more than compliance checkboxes or self-imposed limitations. They are what ultimately sells the product.

With GenAI, constraints like these might be the only path to achieving product-market fit, even if it means growing at a slower pace or building a model that intentionally does less than it technically could.

Scott reflects on what imposing strict limitations when building Firefly meant to the team: “In a market as fast-moving as this one, it’s tempting to prioritize speed over safety. But for a company like Adobe, the trust of our clients simply isn’t negotiable. Firefly is proof that you can innovate responsibly and still create something transformative.”

This focus on trust is exactly where Adobe offers a blueprint for responsible AI commercialization, a path that others in the industry would do well to follow.

Firefly exemplifies how innovation and responsibility can coexist, even in a fiercely competitive and rapidly evolving market. At the same time, it highlights the unique constraints faced by legacy players.

Unlike startups that can afford to “move fast and break things,” Adobe’s actions carry the weight of its global reputation.

“Our decisions impact millions of creators and businesses,” Scott emphasizes. “We represent the creative community, and that responsibility shapes everything we do.”

Carrying this responsibility means Adobe may not always venture where less ethically encumbered competitors might. Yet it positions the company as a trusted leader in the GenAI space, a critical distinction in an industry where trust is often the deciding factor for adoption.
