AI Fakes to be Labeled

AI fakes are becoming widespread – so it’s time to counter them.
“My name is (who?) My name is (chka-chka, Slim Shady) Hi, my name is (huh?) My name is (what?)”

This is the verse that, arguably, comes to mind when you see the picture of the Pope below, who looks a little like a white rapper in the vein of Eminem.
[Image: the AI-generated picture of the Pope]
You may have seen it before, but there’s one detail about it you should know: it’s AI-generated, and not everyone seems to have realized that.
“AI-powered image generation is booming and for good reason: It’s fun, entertaining, and easy to use. While these models enable new creative possibilities, they may raise concerns about potential misuse from bad actors who may intentionally generate images to deceive people. Even images created in good fun could still go viral and potentially mislead people,” Meta writes.

The Fundamental AI Research (FAIR) team at Meta, together with Inria, has set out to make people aware of such fakes by introducing Stable Signature, an invisible watermarking technique for telling when an image was created by an open source generative AI model.

Here’s how it works.

Invisible watermarking embeds information directly into digital content. The Stable Signature method makes the watermark hard to remove by rooting it in the model itself, so any image the model produces can be traced back to where it was created.
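To make the idea of embedding information invisibly in an image concrete, here is a toy least-significant-bit sketch in Python. This is emphatically not Meta’s method (Stable Signature roots the watermark in the model’s weights rather than post-processing pixels), and LSB payloads are fragile; it only illustrates what “incorporating information into digital content” means.

```python
import numpy as np

def embed_bits(image: np.ndarray, bits) -> np.ndarray:
    """Toy invisible watermark: write one payload bit into the least
    significant bit of the red channel, one pixel per bit."""
    out = image.copy()
    w = out.shape[1]
    for i, b in enumerate(bits):
        r, c = divmod(i, w)
        out[r, c, 0] = (out[r, c, 0] & 0xFE) | b  # clear the LSB, set payload bit
    return out

def extract_bits(image: np.ndarray, n: int):
    """Read the payload back from the same pixel positions."""
    w = image.shape[1]
    bits = []
    for i in range(n):
        r, c = divmod(i, w)
        bits.append(int(image[r, c, 0] & 1))
    return bits

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
key = rng.integers(0, 2, size=48).tolist()                    # 48-bit payload

marked = embed_bits(img, key)
assert extract_bits(marked, len(key)) == key  # survives a lossless round-trip
```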

The company then uses the example of Alice and Bob to explain further.

Alice is the one who trains the generative model. Before she starts distributing it, she fine-tunes a small part of the model, known as the decoder, to root a given watermark for Bob. That watermark makes it possible to identify the model version, a company, a user, and so on.
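A rough PyTorch sketch of what “rooting a watermark in the decoder” could look like: the decoder is fine-tuned so that a frozen watermark extractor recovers Bob’s bit string from everything the decoder produces, while a regularizer keeps the weights close to the original to preserve image quality. The tiny modules, the 48-bit key, and the loss weights below are illustrative stand-ins under simple assumptions, not Meta’s actual architecture or training recipe.

```python
import torch
import torch.nn as nn

# Toy stand-ins: in Stable Signature the decoder is the latent diffusion
# model's image decoder and the extractor is a pre-trained watermark
# network; here both are tiny modules so the loop actually runs.
decoder = nn.Sequential(nn.ConvTranspose2d(4, 3, 8, stride=8), nn.Sigmoid())
extractor = nn.Sequential(nn.Conv2d(3, 8, 8, stride=8), nn.Flatten(),
                          nn.Linear(8 * 8 * 8, 48))
for p in extractor.parameters():       # the extractor stays frozen;
    p.requires_grad_(False)            # only the decoder is fine-tuned

bob_key = torch.randint(0, 2, (48,)).float()   # Bob's 48-bit signature
opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

with torch.no_grad():
    frozen = [p.clone() for p in decoder.parameters()]  # quality anchor

for step in range(100):
    latents = torch.randn(4, 4, 8, 8)          # fake latent batch
    imgs = decoder(latents)                    # 4 x 3 x 64 x 64 images
    logits = extractor(imgs)                   # predicted watermark bits
    msg_loss = bce(logits, bob_key.expand_as(logits))
    # keep the fine-tuned decoder close to the original so image quality
    # is preserved (a crude stand-in for a perceptual loss)
    reg = sum(((p - q) ** 2).sum() for p, q in zip(decoder.parameters(), frozen))
    loss = msg_loss + 1e-4 * reg
    opt.zero_grad()
    loss.backward()
    opt.step()
```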

After this, Bob receives his version of the model and generates images with it. Those images carry Bob’s watermark and can be analyzed by Alice or third parties to check whether Bob indeed generated them.
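Verification can then be framed as a simple statistical test: extract the bits from a suspect image, count how many match Bob’s key, and attribute the image to Bob only if that level of agreement is very unlikely by chance. A minimal sketch, assuming a 48-bit key and modeling each bit of an unrelated image as a coin flip:

```python
import math

def bit_accuracy(extracted, key):
    """Fraction of extracted watermark bits that match a user's key."""
    return sum(a == b for a, b in zip(extracted, key)) / len(key)

def chance_p_value(matches: int, k: int) -> float:
    """Probability that an unrelated image matches at least `matches`
    of k key bits purely by chance (each bit modeled as a coin flip)."""
    return sum(math.comb(k, m) for m in range(matches, k + 1)) / 2 ** k

# Attribute the image to Bob only when chance agreement is implausible.
suspect_bits = [1, 0, 1, 1, 0, 1] * 8     # 48 bits read from a suspect image
bob_key      = [1, 0, 1, 1, 0, 1] * 8     # Bob's registered signature
m = sum(a == b for a, b in zip(suspect_bits, bob_key))
print(bit_accuracy(suspect_bits, bob_key))   # 1.0
print(chance_p_value(m, len(bob_key)))       # ~3.6e-15: attribute to Bob
```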

Even if Bob shares an image with any number of people and all of them alter it, the Stable Signature is designed to remain intact.

“No matter how a person transforms an image, the original watermark will likely remain in the digital data and can be traced back to the generative model where it was created,” the company writes.
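A watermark earns the word “robust” by surviving exactly these kinds of transformations. Below is a small, hypothetical test harness that applies common benign edits (JPEG recompression, resizing, cropping, rotation) and reports how many key bits a given extractor still recovers; the toy LSB scheme shown earlier would fail most of these, which is precisely why Stable Signature ties the watermark to the model rather than to fragile pixel values. `extract_bits` here is a placeholder for whatever extractor is in use.

```python
from io import BytesIO
from PIL import Image

def jpeg(img: Image.Image, quality: int) -> Image.Image:
    """Round-trip an image through JPEG compression in memory."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def edited_copies(img: Image.Image):
    """Benign edits an image typically goes through when shared online."""
    w, h = img.size
    yield "jpeg-50", jpeg(img, 50)
    yield "resize-50%", img.resize((w // 2, h // 2))
    yield "center-crop", img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))
    yield "rotate-5deg", img.rotate(5)

def robustness_report(img, key, extract_bits):
    """`extract_bits` is a placeholder extractor; a robust watermarking
    scheme keeps bit accuracy high across all of these edits."""
    for name, copy in edited_copies(img):
        bits = extract_bits(copy)
        acc = sum(a == b for a, b in zip(bits, key)) / len(key)
        print(f"{name:12s} bit accuracy = {acc:.2f}")
```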

Meta also states that the watermarking method allows it to trace images generated by different versions of the same model, a capability that passive detection techniques, for example, can’t provide. The company says the technology will soon be made available to the AI research community in the “hope of driving continued collaboration and iteration.”

“The research we’re sharing today focuses on images, but in the future we hope to explore the potential of integrating our Stable Signature method across more generative AI modalities,” FAIR writes.

Previously, GNcrypto reported that French AI companies are frustrated with regulation.