Google to initiate efforts to detect origin of AI-manipulated images
Search engine giant Google is stepping up its efforts to accurately label content created by artificial intelligence (AI), updating its "About This Image" tool to include a universal standard for identifying the source of an AI-edited image.
The global Coalition for Content Provenance and Authenticity (C2PA) and Google collaborated to develop the new label.
Members of the C2PA have committed to a unified method for certifying and detecting AI content, facilitated by "Content Credentials," a verification tool.
Among major companies, Google is leading the way by incorporating C2PA's new 2.1 standard into products such as Google Search and, eventually, Google Ads. The "About This Image" prompt can be accessed by clicking the three vertical dots above a photo in search results.
The standard provides an approved "Trust List" of tools and devices that can be used to verify the origin of a picture or video through its metadata.
“For example, if the data shows an image was taken by a specific camera model, the trust list helps validate that this piece of information is accurate,” Laurie Richardson, Google vice president of trust and safety, told The Verge. “Our goal is to ramp this up over time and use C2PA signals to inform how we enforce key policies.”