AI has made it hard to tell what’s real and what’s not, especially in photos and videos. The result: deepfakes, misleading AI robocalls, accusations of fake AI crowds, and misinformation, all of which is ramping up ahead of the November presidential election.
However, Andy Parsons, senior director of the Content Authenticity Initiative at Adobe, says the problem could be solved with another technology: content credentials.
These “nutrition labels” for digital content are embedded in a file’s metadata and act like an invisible watermark. They answer questions about how a piece of content was made, including whether it was created or edited with AI.
Creators can add content credentials to anything they make in programs like Photoshop and Microsoft Designer.
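For a sense of what software sees when it checks one of these labels, here is a minimal sketch in Python. It assumes the credential, a C2PA manifest, has already been extracted to JSON (for example with the CAI’s open-source c2patool), and the field names and the manifest.json filename are illustrative assumptions rather than a definitive schema.

```python
import json

# IPTC digital source type used to mark AI-generated media.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def was_made_with_ai(manifest_store: dict) -> bool:
    """Report whether the active manifest declares an AI creation step.

    The structure walked here (active_manifest, manifests, assertions)
    follows the C2PA manifest layout but is an assumption for
    illustration, not a definitive schema.
    """
    active_id = manifest_store.get("active_manifest")
    manifest = manifest_store.get("manifests", {}).get(active_id, {})
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue  # only action assertions record how content was made
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                return True
    return False

if __name__ == "__main__":
    # Hypothetical manifest extracted beforehand with a tool like c2patool.
    with open("manifest.json") as f:
        print(was_made_with_ai(json.load(f)))
```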
“This level of transparency can help dispel doubt, particularly during breaking news and election cycles,” Parsons told Entrepreneur.
How Big Tech Is Using Credentials
OpenAI already embeds content credentials in images created by its DALL·E 3 image generator as part of its approach to the 2024 elections. Google is exploring how credentials can trace where content came from across its products, including YouTube. And in October 2023, Leica released the world’s first camera that automatically adds content credentials to the pictures it takes.
Meanwhile, in May, TikTok became the first social media platform to use content credentials to detect and automatically label content “AI-generated” where appropriate.
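Building on that, a platform-side labeling rule like the one TikTok describes could be as simple as the sketch below. It reuses the hypothetical was_made_with_ai helper from the earlier example and illustrates the idea, not TikTok’s actual pipeline.

```python
def display_label(manifest_store: dict) -> str | None:
    # Reuses the was_made_with_ai() sketch above: attach a badge only
    # when the credential explicitly declares an AI source type.
    return "AI-generated" if was_made_with_ai(manifest_store) else None
```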
Parsons says that the “widespread adoption” of credentials by social media platforms would “establish a chain of trust.”
“This will empower consumers to verify details themselves and allow the good actors to be trusted,” he said.
Adobe founded and leads the Content Authenticity Initiative, which now counts Google, Microsoft, Sony, Intel, and more than 2,000 other companies among its members.