As the world grapples with deepfakes, AI companies agree to a set of principles (2024)

Could this be at least a partial solution to the scourge of artificial intelligence (AI) generated deepfakes and child sexual abuse material? As AI tools continue to improve at a rapid pace, bad actors are using them to create alarmingly realistic manipulated content. Thorn and All Tech Is Human, both non-profits, have brought AI companies to the table in an attempt to create new AI safety standards, particularly for the safety of children.

The principles include a fresh look at the training data used for AI models, as well as the need to watermark AI generations and to develop detection solutions that prevent generative tools from creating AI-generated child sexual abuse material, or AIG-CSAM. At this time, 11 tech companies have signed up, including Meta, Google, Anthropic, Microsoft, OpenAI, Stability AI and Mistral AI.

“We find ourselves in a rare moment, a window of opportunity, to still go down the right path with generative AI and ensure children are protected as the technology is built,” says Thorn, in a statement about the “Safety by Design for Generative AI” principles. The intent is to widen the scope. “Over the coming weeks, we will be adding additional companies to the list of key industry players committing to the Safety by Design generative AI principles,” says David Polgar, Founder & President at All Tech Is Human.

The window of opportunity that Thorn refers to may be closing fast. A recent illustration came in March, when a review of Meta’s ad library for Facebook and Instagram, prompted by ads on the two platforms that displayed a blurred, fake nude image of a young celebrity, pointed to an app called Perky AI.

“It’s not science fiction coming at some point in the future, possibly or hypothetically,” summarised Sen. Richard Blumenthal (D-Conn.), chairman of the US Senate’s subcommittee on privacy, technology and the law, at a hearing earlier this month where the Perky AI ads were discussed.

Recently, Microsoft Bing and Google Search were found to be showing deepfake images in search results for specific search phrases. Both companies later confirmed the material had been identified and removed.

A critical principle that AI companies have signed up for is a serious relook at the datasets used to train AI models. Core to this is the early detection of CSAM and child sexual exploitation material (CSEM) in that data. Meta, Microsoft and others say they are targeting risks to children “alongside adult sexual content in our video, images and audio generation training datasets.”

“This commitment marks a significant step forward in preventing the misuse of AI technologies to create or spread child sexual abuse material and other forms of sexual harm against children,” says Courtney Gregoire, Microsoft’s chief digital safety officer.

In December, researchers from Stanford University’s Internet Observatory said they had found more than 1,000 images of child exploitation in a popular open-source image database called LAION-5B, which is used to train generative AI tools such as the popular and incredibly realistic text-to-image generator Stable Diffusion 1.5. Though Stability AI did not create or manage that database, the dataset was immediately removed from the training process.

AI companies understand it is important to distinguish generated content from real content, and to trace the source of a generation when bad actors create offensive content with intent to harm. Meta, Microsoft, Google and others insist they are working to deploy solutions that embed signals in content as part of the process of generating an image, audio clip or video.
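
None of the signatories has published the exact mechanism for these embedded signals, and production systems lean on robust, invisible watermarks rather than plain file metadata. Purely as a minimal sketch of the underlying idea, and assuming Python with the Pillow library, the snippet below tags a freshly generated image with an illustrative provenance label; the field names are invented for this example.

```python
from PIL import Image, PngImagePlugin

# Stand-in for an image produced by a generative model.
generated = Image.new("RGB", (512, 512), "gray")

# Attach illustrative provenance fields as PNG text chunks.
# (These field names are hypothetical, not an industry standard.)
provenance = PngImagePlugin.PngInfo()
provenance.add_text("ai_generated", "true")
provenance.add_text("generator", "example-model-v1")

generated.save("output.png", pnginfo=provenance)

# The label travels with the file and can be read back later.
print(Image.open("output.png").info.get("ai_generated"))  # -> "true"
```

Real deployments go further, embedding signals in the pixels or audio samples themselves so the marker survives cropping, re-encoding and screenshots, which simple metadata does not.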

Stability AI, in its commitment, says the intention is to “disallow the use of generative AI to deceive others for the purpose of sexually harming children, and explicitly ban AIG-CSAM from our platforms.”

The other solution is watermarking AI-generated content, to identify the creator and the source. There has been progress on that front. Meta and OpenAI, for instance, confirmed earlier this year that any generations on their platforms will now include watermarks or labels. “This action aids in distinguishing between human and synthetic content, crucial for safeguarding user privacy and combating the proliferation of deepfakes,” Nilesh Tribhuvann, Founder and Managing Director, White & Brief Advocates & Solicitors, told HT at the time.

There is also the Adobe-led Coalition for Content Provenance and Authenticity (C2PA), whose members include Google, Microsoft, Intel, Leica, Nikon, Sony and Amazon Web Services, and which is pushing for “Content Credentials” to accompany every piece of generated content.
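
Content Credentials bind cryptographically signed provenance manifests to a file and are verified with dedicated C2PA tooling. The toy check below only completes the simpler, hypothetical metadata label from the earlier sketch and is not how Content Credentials are actually validated.

```python
from PIL import Image

def looks_ai_generated(path: str) -> bool:
    """Toy check for the illustrative 'ai_generated' label written in the
    earlier sketch; real Content Credentials are verified cryptographically."""
    info = Image.open(path).info
    return str(info.get("ai_generated", "")).lower() == "true"

print(looks_ai_generated("output.png"))  # True for the image tagged above
```

Plain metadata like this is trivially stripped, which is why the coalition favours signed manifests and durable watermarks that can be checked even after a file has been edited or re-saved.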
