
OpenAI Is Working On New AI Image Detection Tools In Busy Election Year Worldwide To Tell If Photos Were Made By DALL-E

Author: Kaustubh Bagalkote | May 09, 2024 10:01am

OpenAI has unveiled new AI tools that can detect if an image was created using its DALL-E AI image generator. The company has also introduced advanced watermarking techniques to better identify the content it generates.

What Happened: Microsoft Corp.-backed (NASDAQ:MSFT) OpenAI is developing advanced methods to trace and authenticate AI-generated content. This involves a cutting-edge image detection classifier to identify AI-generated photos and a watermarking system for tagging audio content discreetly, the company said in a blog post.

OpenAI also introduced Model Spec, a framework outlining expected behaviors for AI tools such as GPT-4, to guide their future responses.

The classifier is capable of determining whether an image was generated by DALL-E 3. OpenAI claims that the classifier remains accurate even if the image undergoes cropping, compression, or alterations in saturation.

However, its ability to identify content from other AI models is limited, as it only flags around 5 to 10% of images from other image generators, such as Midjourney.

OpenAI has previously added content credentials from the Coalition for Content Provenance and Authenticity (C2PA) to image metadata. This month, OpenAI also joined C2PA's steering committee. The AI startup has also started adding watermarks to clips from Voice Engine, its text-to-speech platform currently in limited preview.
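For readers curious about what such provenance data looks like in practice, the rough Python sketch below scans a JPEG's APP11 segments for JUMBF/C2PA markers, where C2PA content credentials are typically embedded. This is only a heuristic illustration of the metadata format, an assumption-laden sketch rather than OpenAI's detection classifier or a full C2PA validator.

```python
# Heuristic check for C2PA content credentials in a JPEG file.
# C2PA manifests are carried in JUMBF boxes inside APP11 (0xFFEB) segments;
# this sketch only looks for that signature and does not validate the manifest.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):          # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                       # lost sync with marker structure
            break
        marker = data[i + 1]
        if marker == 0xDA:                        # SOS: compressed image data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and (b"c2pa" in segment or b"jumb" in segment):
            return True                           # APP11 segment carrying JUMBF/C2PA data
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

A production check would use a proper C2PA library to parse and cryptographically verify the manifest; the scan above only indicates that provenance metadata appears to be present.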

Both the image classifier and the audio watermarking signal are still being refined. Researchers and nonprofit journalism groups can test the image detection classifier by applying for access through OpenAI's research access platform.

This comes at a time when a record number of countries worldwide are holding national elections or will do so later in 2024. Countries like the U.S., India, and the United Kingdom are set to hold elections within the next six months.

See Also: ChatGPT Is Not A ‘Long-Term’ Engagement Model, OpenAI’s Top Executive Says: ‘Today’s Systems Are Laughably Bad’

Why It Matters: OpenAI’s new AI-generated image detection tools come at a time when concerns over misinformation spread via AI-generated content are on the rise.

In March, it was reported that AI image creation tools from OpenAI and Microsoft were being used to fabricate images that could contribute to election-related disinformation. This raised concerns about the potential for AI tools to be misused for malicious purposes.

AI and misinformation have been a hot topic leading up to the 2024 election, with more than half of Americans expressing concerns about the potential for AI to spread misinformation.

Read Next: OpenAI CEO Sam Altman Once Called GPT-2 ‘Very Bad’ But Now Confesses He Has A ‘Soft Spot’ For The Version — Here’s ChatGPT’s Evolution Story

Image Via Shutterstock

This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.

Posted In: MSFT
