SAN FRANCISCO: ChatGPT developer OpenAI has announced it will release tools to tackle disinformation ahead of the dozens of elections this year in countries that are home to half the world’s population.
The explosive success of ChatGPT sparked a global artificial intelligence revolution, along with warnings that such tools could flood the internet with disinformation and mislead voters.
OpenAI announced on Monday that it will not permit the use of its technology, which includes ChatGPT and the image generator DALL-E 3, for political campaigns. Several nations, including the United States, India, and Britain, have scheduled elections for this year.
In a blog post, OpenAI declared its intention to ensure that its technology is not used in a manner that could undermine the democratic process. It said, “We’re still investigating the potential effectiveness of our tools for personalised persuasion.”
“We don’t allow people to build applications for political campaigning and lobbying until we learn more.”
The World Economic Forum cautioned in a report last week that AI-driven misinformation and disinformation are the biggest short-term global risk and could undermine newly elected governments in major countries.
Election disinformation has long been a source of concern, but experts claim that the threat has increased since powerful AI text and image generators are now widely available.
That is especially true when consumers struggle to discern whether the content they see is doctored or fake.
On Monday, OpenAI announced it is developing tools to let users determine whether an image was created using DALL-E 3 and to reliably attribute text produced by ChatGPT.
The company stated, “We will implement the digital credentials of the Coalition for Content Provenance and Authenticity early this year. This approach uses cryptography to encode details about the content’s provenance.”
The group, also known as C2PA, aims to improve methods for identifying and tracing digital content. Its members include Microsoft, Sony, Adobe, and the Japanese camera makers Nikon and Canon.
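The core idea behind cryptographic provenance credentials can be illustrated with a short sketch: hash the content, record who generated it, and sign that record so any later tampering is detectable. This is a simplified illustration only, not the actual C2PA specification; the key, function names, and manifest fields are hypothetical, and a keyed HMAC stands in for the certificate-based digital signatures C2PA really uses.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration. Real C2PA credentials use
# X.509 certificates and asymmetric signatures, not a shared secret.
SIGNING_KEY = b"demo-provenance-key"


def attach_provenance(content: bytes, generator: str) -> dict:
    """Bind provenance metadata to content via a keyed signature."""
    manifest = {
        "generator": generator,  # e.g. the tool that made the image
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(
        SIGNING_KEY, payload, hashlib.sha256
    ).hexdigest()
    return manifest


def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content is unmodified."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["sha256"] == hashlib.sha256(content).hexdigest()
    )


image = b"\x89PNG...fake image bytes"
manifest = attach_provenance(image, "DALL-E 3")
print(verify_provenance(image, manifest))               # True: untouched
print(verify_provenance(image + b"edit", manifest))     # False: tampered
```

The point of the design is that the signature covers both the content hash and the metadata, so neither the image nor the "who made this" claim can be altered without the verification step failing.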
“Redlines”
According to OpenAI, ChatGPT will refer visitors to reliable sources when they ask procedural questions about US elections, such as where to cast their ballot.
“Lessons from this work will inform our approach in other countries and regions,” the company added.
Additionally, it stated that DALL-E 3 included “guardrails” that prohibit users from creating pictures of actual individuals, such as candidates.
The announcement from OpenAI comes after US Internet behemoths Google and Facebook parent Meta last year disclosed measures to restrict electoral meddling, particularly with regard to artificial intelligence.
AFP has previously debunked doctored videos, including deepfakes, purporting to show US President Joe Biden announcing a military draft and former Secretary of State Hillary Clinton endorsing Florida Governor Ron DeSantis for president.
AFP Fact Check found that politicians shared manipulated audio and video on social media ahead of last month’s presidential election in Taiwan.
Experts say such disinformation is fuelling a crisis of trust in political institutions, even though much of the AI-generated content is of poor quality and not immediately convincing.