
Guidelines Set by OpenAI to Tackle Misinformation in Elections

January 16, 2024

Amid growing concern about the potential impact of artificial intelligence on the 2024 elections, OpenAI announced on Monday that politicians and their campaigns are prohibited from using its AI tools.

OpenAI’s blog post also prohibits impersonation: users may not build chatbots that pose as political candidates, government agencies, or officials, including the US secretaries of state who oversee elections.

The announcement reflects OpenAI’s effort to get ahead of concerns about the potential harms of artificial intelligence. The technology has already been used this election season to spread fabricated images, raising fears that computer-generated misinformation could compromise the integrity of the democratic process.

OpenAI’s policies are similar to those adopted by other major technology platforms. However, even social media companies far larger than OpenAI, with extensive teams dedicated to election integrity and content moderation, have struggled to enforce their own rules. OpenAI will likely face similar challenges, and in the absence of government regulation, the public has little choice but to take the companies at their word.

Big Tech platforms are gradually assembling a patchwork of policies on “deepfakes,” the deceptive material produced with generative artificial intelligence.

Last year, Meta said it would bar political campaigns from using generative AI tools in their advertisements and would require politicians to disclose any use of AI in their ads. In a similar move, YouTube announced that all content creators must disclose when their videos contain “realistic” but manipulated content, which could include the use of AI.

These rules, which differ by platform and by type of content creator, underscore the lack of a consistent standard governing the use of artificial intelligence in politics.

The Federal Election Commission is still weighing whether existing US rules against fraudulently misrepresenting opposing candidates or political parties apply to AI-generated material; a final decision is pending.

Some members of Congress have proposed a nationwide ban on the deceptive use of AI in political campaigns, but no such legislation has advanced. In a separate effort to regulate AI, Senate Majority Leader Chuck Schumer has called addressing AI in elections a pressing matter, though he spent much of last year conducting private briefings to educate senators on the technology in anticipation of drafting legislation.

The unsettled regulatory landscape around AI deepfakes has worried campaign officials. In response, President Joe Biden’s reelection campaign is developing a legal strategy to address manipulated media.

According to the campaign’s general counsel, the team plans to keep a range of resources ready for different scenarios, including templates and draft pleadings that could be filed in US courts, or with regulators in other countries to counter foreign disinformation efforts. The aim is to be prepared for any situation that may arise.

The Biden campaign’s preparations reflect a lack of trust in tech platforms’ ability to manage AI’s impact on elections, despite the companies’ claims of readiness.