Robocalls that sound just like the president but aren't. Videos of political figures doing or saying things they've never done. Social media posts created by ChatGPT, not humans. Artificial intelligence is playing a growing role in how political campaigns, and misinformation campaigns, are run, and its influence could affect the outcomes of 2024 elections around the world.
Generative AI is being used to create misrepresentative political photos, videos, audio, and text: deepfakes that seem more and more real as the technology advances. Now, even people with limited tech knowledge can use AI generators to create content that promotes a false narrative.
“Deepfakes can come from a wide variety of sources, as the technology has become easy for individuals to obtain and use,” said Shomir Wilson, associate professor. “Some deepfakes are likely to come from large, sophisticated organizations and others may come from people acting alone.”
Regardless of who’s creating them, deepfakes can be used to sway public opinion around political issues, deceive voters, and undermine trust in the electoral process. And in a concerning twist, public figures who do have something to hide can receive a “liar’s dividend” by falsely claiming that real scandals or other damaging truths were generated by AI.
Perhaps the most difficult deepfakes to identify are in the form of text. Dongwon Lee, professor, is researching the integrity of AI-generated text on the internet. As generative AI tools become increasingly powerful, they're creating outputs that are nearly indistinguishable from human-made content.
“The rise of fake news and disinformation in recent years makes it important to know where the content we see on the web is coming from, particularly if we are making decisions based upon that information and whether such AI-generated content is truthful and fact-grounded or not,” he said.
Only about a third of U.S. states have passed laws regulating AI in politics, while Congress continues to evaluate how to best balance AI investment and innovation with guardrails to ensure responsible development of the technology. Until interventions like new legislation and voter education efforts are in place to preserve information integrity, the responsibility to recognize digital fakes lies with the people.
“Sometimes there are telltale signs that something was made by generative AI — like distorted speech in audio — but the technology to create deepfakes keeps improving,” Wilson said. “To evaluate content, it’s important to use critical thinking. Consider whether a news article or a video seems intended to inform or to provoke a reaction, and whether it's consistent with other things you know about a topic.”
This story was originally published in the Summer 2024 issue of iConnect.