In an effort to combat the spread of false information, Google is testing a digital watermark that can identify images created by artificial intelligence (AI).
SynthID, developed by Google’s DeepMind artificial intelligence division, is designed to spot machine-made images. It works by embedding subtle changes in individual pixels, making the watermark invisible to the naked eye but detectable by computers.
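DeepMind has not published how SynthID actually embeds its signal (it uses a trained neural network), but the general idea of hiding an imperceptible, key-dependent pattern in pixel values can be sketched with a toy example. Everything here — the function names, the ±1 nudge, the correlation detector — is an illustrative assumption, not DeepMind's method:

```python
import random

def embed_watermark(pixels, key):
    # Toy sketch only: a secret key derives a +/-1 pattern and each
    # pixel value is nudged by one level -- far too small to see.
    rng = random.Random(key)
    pattern = [rng.choice([-1, 1]) for _ in pixels]
    return [min(255, max(0, p + s)) for p, s in zip(pixels, pattern)]

def watermark_score(pixels, key):
    # Correlate the image with the key's pattern: a marked image scores
    # noticeably higher than the same image without the mark.
    rng = random.Random(key)
    pattern = [rng.choice([-1, 1]) for _ in pixels]
    return sum(p * s for p, s in zip(pixels, pattern)) / len(pixels)

image = [random.randrange(256) for _ in range(10_000)]  # fake grayscale image
marked = embed_watermark(image, key="secret")
print(watermark_score(marked, "secret") > watermark_score(image, "secret"))  # True
```

A scheme like this survives small edits because the detector averages evidence over every pixel, which is the property the article attributes to SynthID; the real system is far more sophisticated.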
However, DeepMind has said it is not “foolproof against extreme image manipulation.”
As the technology advances, it is becoming ever harder to tell genuine photographs from artificially generated ones, as BBC Bitesize’s AI or Real quiz demonstrates.
AI-powered image generators have gone mainstream, with the popular tool Midjourney claiming more than 14.5 million users. They let anybody create images in seconds from simple text prompts, raising questions about copyright and ownership around the world.
Google has its own image generator, called Imagen, and its system for creating and checking watermarks will apply only to images produced with that tool.
Hidden from view
A watermark is typically a logo or piece of text overlaid on an image to indicate ownership, and partly to make the picture harder to copy and use without the owner’s permission. Images on the BBC News website, for example, carry a copyright watermark in the bottom-left corner.
However, because watermarks of this kind can easily be cropped out or edited away, they are of little use for identifying AI-generated images.
Hashing is a technique technology companies use to create digital “fingerprints” of known videos of abuse, allowing them to spot copies and quickly remove them if they begin to spread online. But these fingerprints can break if the video is cropped or edited in any way.
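The fragility described above is easy to demonstrate with an ordinary cryptographic hash. (Real matching systems typically use more tolerant perceptual hashes, but the failure mode under editing is the same; the byte string below is a stand-in for real video data.)

```python
import hashlib

# A hash acts as a digital "fingerprint": identical bytes always
# produce the same digest, so known files can be matched exactly.
video = b"stand-in for the raw bytes of a known video"
fingerprint = hashlib.sha256(video).hexdigest()
print(fingerprint == hashlib.sha256(video).hexdigest())  # True: exact copy matches

# Cropping or editing changes the bytes, so the fingerprint no
# longer matches and the altered copy slips past the filter.
cropped = video[4:]
print(fingerprint == hashlib.sha256(cropped).hexdigest())  # False
```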
Google’s watermark is virtually imperceptible to the naked eye, but it will let people use the company’s software to instantly check whether an image is AI-generated.
DeepMind’s head of research, Pushmeet Kohli, told the BBC that the system modifies images so subtly that “to you and me, to a human, it does not change.”
Unlike hashing, he said, the company’s software can still detect the watermark even after the image has subsequently been cropped or edited.
“You can change the color, you can change the contrast, you can even resize it,” he said, “[and DeepMind] will still be able to see that it is AI-generated.”
However, he stressed that this is an “experimental launch” of the system, and that people need to use it for the company to learn how robust it is.
Consistent standards
In July, Google was one of seven leading AI companies to sign a voluntary agreement in the US to ensure the safe development and use of AI. Its provisions included ensuring that people can tell when images are computer-made, through measures such as watermarking.
Mr. Kohli said the launch reflected those commitments, but Claire Leibowicz, from the campaign group Partnership on AI, said there needs to be more coordination across firms.
“I believe standardization would be beneficial for the industry,” she said.
“We need to monitor the effects of the different approaches being taken; how can we improve the reporting on which approaches are working, and to what end?” she added.
Many institutions are exploring different methods, she said, which adds complexity, since “our information ecosystem relies on different methods for interpreting and disclaiming that content is AI-generated.”
Alongside Google, other big technology companies including Microsoft and Amazon have pledged to watermark some AI-generated content.
Beyond photos, Meta has built a video generator called Make-A-Video, which it has not yet released. A research paper on the tool says generated videos will carry watermarks to meet similar demands for transparency over AI-made works.
At the start of this year, China banned AI-generated images that lack watermarks, and firms such as Alibaba now apply them to works created with Tongyi Wanxiang, the text-to-image tool of its cloud arm.