New ‘nutrition labels’ to combat AI deepfakes


Amid mounting concerns about deepfakes and AI-generated misinformation, tech giant Adobe is launching new tools that will produce ‘nutrition labels’ for images and videos to show their provenance, including whether the content was generated using AI.

As generative AI tools like Midjourney, DALL-E, Adobe Firefly, Leonardo.AI and Canva Magic Design have exploded in popularity over the past year, so too have concerns that the tools are being used to infringe copyright and spread misinformation.

An AI-generated image from Adobe Firefly using the prompt: ‘A world created by AI’. Credit: Adobe Stock

Such concerns are reaching fever pitch amid the US election and next year’s federal election in Australia, with some experts warning that manipulated content could sway voters and undermine democracy.

Adobe’s own survey of 2000 content creators found that while they believe AI tools can help them save time and money, they also worry about their content being used, without their consent, to train generative AI models.

They are also concerned about a loss of control over their work, including the fear of others stealing or taking credit for it. Nearly half the respondents say they have encountered work online that is similar to their own and that they believe was created with generative AI.

“I had a recent experience where a company created an AI model that made illustrations in my style,” one creator said, according to Adobe’s report.

“My art gets resold online all the time. I literally don’t have the time for all the takedown requests any more,” said another.

Adobe’s new tools, due for release early next year, will allow creators to easily attach secure ‘content credentials’ to their images, videos, audio and other digital works. These credentials include metadata that can convey the creator’s identity, website and social media links, and their preferences around whether the content can be used to train AI models.
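
In spirit, a content credential is a small bundle of metadata cryptographically bound to the file it describes. The Python sketch below is a minimal illustration of that idea only; the field names and the stand-in ‘signature’ are assumptions made for explanation, not Adobe’s actual Content Credentials (C2PA) format.

```python
import hashlib
import json

# Illustrative sketch only: the fields and the hash-based "signature"
# below are assumptions, not the real C2PA / Content Credentials schema.

def make_credential(image_bytes: bytes, creator: str, website: str,
                    allow_ai_training: bool) -> dict:
    """Bundle provenance metadata with a hash that ties it to the content."""
    manifest = {
        "creator": creator,                        # who made the work
        "website": website,                        # where to find them
        "ai_training_allowed": allow_ai_training,  # creator's stated preference
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "edit_history": [],                        # tools/edits appended here
    }
    # A real system would sign the manifest with the issuer's private key;
    # a plain hash stands in for that signature in this sketch.
    manifest["signature"] = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()
    return manifest

credential = make_credential(b"...image bytes...", "Jane Artist",
                             "https://example.com", allow_ai_training=False)
print(json.dumps(credential, indent=2))
```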

“This is about empowering creators and giving them the tools to protect their work and maintain attribution as their content moves across the internet,” Ely Greenfield, the CTO of Adobe’s digital media business, said in an interview.

“With the rise of AI, it’s become crucial for creators to have a way to assert their rights and preferences over how their content is used.”

The tools will take the form of a free web app launching to the public in early 2025, allowing content creators to establish ownership over digital content even if it hasn’t been created with Adobe’s products. An extension for Google Chrome will also allow users to inspect any content credentials associated with content, including its edit history.

The metadata on a piece of content might show “this content was edited using an AI tool”, for example, along with which AI tool was used.
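
Continuing the illustrative sketch above (and again assuming made-up field names rather than the real C2PA schema), a verifier such as a browser extension could check that the content still matches its credential and render the metadata as a label:

```python
import hashlib

def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """True if the content hash recorded in the credential still matches
    the bytes being inspected; edits made after issuance break the match."""
    return hashlib.sha256(image_bytes).hexdigest() == credential["content_sha256"]

def nutrition_label(credential: dict) -> str:
    """Render the metadata as a human-readable 'nutrition label'."""
    lines = [f"Creator: {credential['creator']}"]
    for step in credential.get("edit_history", []):
        lines.append(f"Edited with: {step}")
    lines.append("AI training: " + ("permitted" if credential["ai_training_allowed"]
                                    else "not permitted"))
    return "\n".join(lines)

# Example using the same illustrative fields as the earlier sketch:
image = b"...image bytes..."
credential = {
    "creator": "Jane Artist",
    "content_sha256": hashlib.sha256(image).hexdigest(),
    "edit_history": ["Adobe Firefly (generative AI)"],
    "ai_training_allowed": False,
}
print(verify_credential(image, credential))  # True
print(nutrition_label(credential))
```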

Crucially, Adobe has won buy-in for its ‘Content Authenticity Initiative’ from over 3700 member organisations including ChatGPT-maker OpenAI, Meta, Google, Microsoft, TikTok and Nvidia.

Greenfield said Adobe’s goal is for content credentials to become as ubiquitous as the lock icon in web browsers, signalling transparency and trustworthiness.

“Just as HTTPS has become the standard for secure web connections, we want content credentials to be the standard for digital content authenticity,” he said. “Our hope and expectation is that it gets attached to every piece of content you see.”

Amid concerns around deepfakes and election integrity, the art world is also paying close attention. George Hartley is the co-founder of Australia’s largest online art gallery, Bluethumb.

George Hartley, co-founder and chief marketing and chief product officer of Bluethumb.

Hartley said Australians prefer original, handmade art over AI-generated pieces, and overwhelmingly demand transparency in the use of AI in art-making. Nearly 90 per cent of Bluethumb’s art collectors said they are not inclined to purchase AI-generated art – even if it comes at a lower price – and believe handmade art holds intrinsic value that AI cannot replicate.

A vast majority (91 per cent) of artists surveyed by Bluethumb believe it is crucial to disclose when AI tools are used in creating art, and 70 per cent say they never use AI, though many acknowledge it might become unavoidable in the future.

“Australians are aware of the potential for AI to reshape the art world in a way that adversely affects artists who are making art by hand with traditional mediums,” he said. “In the future, the key will be striking a balance between innovation and authenticity to preserve the integrity of human artistic expression and protect our visual arts community.”

Melbourne-based art collector Freddy Grant said there’s something soulless about AI art.

“What’s important to me is connecting with artists and supporting them. I’d prefer to know that I’ve bought from an artist who has created their work themselves rather than using AI.”

The Australian government has proposed introducing EU-style mandatory guardrails for AI in high-risk settings. The government last month issued a proposals paper and is seeking further feedback.

“This is probably one of the most complex policy challenges facing governments the world over,” Industry and Science Minister Ed Husic told a press conference in September.

Minister for Industry and Science Ed Husic. Credit: Alex Ellinghausen

“The Australian government is determined that we put in place the measures that provide for the safe and responsible use of AI in this country.”

Husic said Australia’s current regulatory system is not fit for purpose to respond to the risks posed by AI. Generative AI, which can automatically generate images, video, audio, and computer code, is more likely to be ‘high-risk’, according to Husic.

An Adobe spokeswoman said the company supported the government’s proposed approach to focus on high-risk AI use cases.

“The government’s proposals paper rightly considers the impact of AI on democracy and elections, and we agree with their finding that transparency is a key pillar of democracy,” the spokeswoman said.

Adobe faced a wave of user complaints earlier this year when it changed its terms of service in a way that suggested it was giving itself access to users’ work, even work protected by non-disclosure or confidentiality agreements, to train its generative AI models.

According to Greenfield, that was a misunderstanding. He said Adobe does not train its Firefly models on customer content and never has.

“That was really unfortunate,” he said.

“What happened was we made what was a very minor and innocuous change to our terms of use. It was a two-word change that triggered an automatic requirement that our users accept them. And because of the heightened state of how companies use content, a bunch of people then said, ‘Oh, Adobe’s changing the terms of use. Let me go read them much more closely than I have ever read them before’.”
