OpenAI Sets Rules to Fight Election Misinformation

Ivana Shteriova, January 30, 2024
OpenAI introduced a new set of rules to prevent the mass production of election misinformation. The ChatGPT creator declared that politicians and those involved in running their campaigns are not allowed to use its AI technology for the 2024 elections.

The AI startup said it is actively working to restrict the use of its AI tools for creating “misleading ‘deepfakes,’ scaled influence operations, or chatbots impersonating candidates.” For example, OpenAI has trained its text-to-image model DALL·E to decline requests to generate images of real people, including political candidates running in the 2024 elections.

OpenAI also won’t allow users to create chatbots that impersonate real people, such as political candidates, or institutions, such as government agencies. Nor will it allow users to generate content that discourages people from voting or misleads them into thinking they aren’t eligible to vote.

The AI startup has partnered with the National Association of Secretaries of State (NASS), America’s oldest nonpartisan professional organization for public officials, to ensure that ChatGPT directs procedural election queries to CanIVote.org.

OpenAI is also increasing transparency around the sources of AI-generated information, promising to provide users with attribution and links to real-time news reports and other sources. Furthermore, it plans to implement the Coalition for Content Provenance and Authenticity’s (C2PA) cryptography-based digital credentials for DALL·E-generated images. The company also announced that it’s working on a provenance classifier for detecting DALL·E images.
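As a rough illustration of what such credentials mean at the file level, here is a minimal Python sketch. It is an assumption-laden heuristic, not OpenAI’s tooling or a real C2PA verifier: it only scans for the ASCII label “c2pa” that C2PA-compliant tools embed in a file’s metadata, and the file path is a placeholder.

```python
import sys

# Images carrying C2PA Content Credentials embed a signed manifest in the
# file's metadata (in JPEGs, via JUMBF boxes labeled "c2pa" inside APP11
# segments). This heuristic only tests whether that label appears anywhere
# in the file; it does NOT validate the cryptographic signature, which
# requires a full C2PA SDK.

def has_c2pa_marker(path: str) -> bool:
    """Heuristically report whether a file appears to contain C2PA metadata."""
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

if __name__ == "__main__":
    # "image.png" is a hypothetical placeholder path, used for illustration.
    image_path = sys.argv[1] if len(sys.argv) > 1 else "image.png"
    if has_c2pa_marker(image_path):
        print(f"{image_path}: C2PA metadata marker found")
    else:
        print(f"{image_path}: no C2PA marker (credentials absent or stripped)")
```

Note that such markers survive only as long as the metadata does; re-encoding or screenshotting an image strips them, which is one reason a separate provenance classifier is also being pursued.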

OpenAI’s election measures follow the lead of several tech companies that have updated their election policies to mitigate the risks of rapidly evolving AI technologies.

In late 2023, Google announced restrictions on answers its AI tools generate in response to election questions. It also announced that it will require political campaigns advertising on Google to disclose AI use, similar to Meta’s updated rules. YouTube made a similar announcement, requiring content creators to disclose when they used AI in their videos.

Despite these efforts, tech companies are struggling to find the right strategies to protect election integrity and prevent AI-fueled misinformation. An August 2023 report by The Washington Post, for example, showed that OpenAI had failed to enforce its policies regarding political campaigns.

The lack of federal regulation means companies like OpenAI can get away with measures that don’t work. The Federal Election Commission is currently evaluating whether its rule against “fraudulently misrepresenting other candidates or political parties” applies to AI-generated content, but no uniform standard yet governs how political actors can use AI.
