Google says it plans to roll out changes to Google Search to make it clearer which images in results were AI-generated or edited with AI tools.
This new feature will soon be rolled out across Google Search, Google Lens, and Android’s Circle to Search. It will rely on something called “C2PA metadata,” a standard developed by the Coalition for Content Provenance and Authenticity. This coalition, backed by industry giants like Amazon, Adobe, Microsoft, OpenAI, and Google, aims to track the origins of images and indicate if they’ve been created or altered by AI.
However, C2PA adoption has been slow, with only a handful of cameras and tools from brands like Leica and Sony embracing it. Metadata also has a more fundamental weakness: it can easily be removed or scrubbed, as the sketch below shows. And images from some of the more popular generative AI tools, such as Flux, which xAI’s Grok chatbot uses for image generation, don’t carry C2PA metadata at all, in part because their creators haven’t agreed to back the standard.
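To illustrate how fragile metadata-based labelling is, here is a minimal sketch in Python (assuming the Pillow library is installed; the filenames are hypothetical). Rebuilding an image from its raw pixel data alone produces a copy that carries none of the original file’s metadata segments, including any embedded C2PA manifest.

```python
from PIL import Image

# Open a hypothetical image that carries provenance metadata
# (EXIF, XMP, or the segments a C2PA manifest is stored in).
with Image.open("labelled.jpg") as img:
    # Reconstruct the image from raw pixel data only; nothing from the
    # original file's metadata segments is carried across.
    pixels_only = Image.frombytes(img.mode, img.size, img.tobytes())

# The saved copy looks identical but has no provenance information left
# for Google (or anyone else) to read.
pixels_only.save("scrubbed.jpg", quality=95)
```

And it doesn’t take a deliberate step like this: everyday actions such as taking a screenshot, or uploading to platforms that re-compress images and strip metadata, can remove provenance data just as effectively.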
The timing of Google’s rollout is significant, as the rise of AI-generated scams and deepfakes continues. Between 2023 and 2024, AI-related scams increased by a staggering 245%. Deloitte projects that losses from deepfakes could jump from $12.3 billion in 2023 to $40 billion by 2027. With AI-generated content becoming more sophisticated, the public is growing increasingly concerned about being misled by hyper-realistic fake images and manipulated content. Not that it was ever a good idea, but with every improvement in AI, typing Dua Lipa deepfake into a search engine becomes an even worse one.
But here’s where things get murky: What exactly counts as an “AI-edited” image? The line between an AI-enhanced photo and one edited by a skilled Photoshop expert is becoming blurrier every day. AI tools can perform tasks like background removal or facial retouching—things human editors have been doing for years. So, when does an image cross the threshold into being labeled as “AI-edited”? Without clear guidelines, this distinction remains ambiguous and raises questions about how effective Google’s labeling will be.
Take, for example, the image above. While it is an actual photograph of a Hyundai i30 in front of the lovely Boyanup Tavern, the original unedited version at the bottom tells a different story. Generative AI was used heavily to remove distracting elements from the road and the building, but arguably the most visually significant change, the colour of the car, was done manually in Photoshop with a rushed mask job and the Replace Colour tool. Interestingly, that manual change wouldn’t trigger an AI-generated label, leaving viewers to assume the image is “real” by omission. This illustrates the challenge: AI labels alone can’t provide the full context.
So what does this mean for the average person? Can we realistically expect users to differentiate between real and AI-generated images when even experts can struggle? Media literacy is crucial, but it can’t be the only defence against the increasingly convincing nature of digital deception.
In the end, while Google’s new AI flagging feature might create some positive buzz, it’s unlikely to make a significant impact without broader industry adoption and more resilient tools. We’ve seen similar measures rolled out before, and for those of us paying attention, it’s hard not to approach this with a healthy dose of scepticism.
For businesses navigating this evolving digital landscape, it’s more important than ever to remain aware of these shifts and consider how AI-generated content might influence your brand. If you’d like to discuss how to stay ahead of the curve with your digital strategy, feel free to reach out. We’re always here to help at Three Waters Digital.
