AI Expert Reveals Ways to Combat AI-Generated Fake News

AI & Tech Weekly News Roundup

Rapid advances in AI have made it possible for automated systems to generate convincing text, images, voices, and videos. This raises the question of whether we can still trust our own senses and judgment when confronted with such sophisticated AI-generated content.

Stefan Feuerriegel, an AI expert, points out that ordinary viewers are often unable to distinguish real from artificially generated content, as the viral picture of Pope Francis demonstrated. The episode highlights how easily AI can produce realistic images, even for users without special skills. AI output still has telltale flaws, such as inconsistent backgrounds and poorly rendered hands, but engineers are continuously improving the technology, and these flaws are becoming less apparent.

Feuerriegel emphasizes the danger of AI-generated fake news, particularly because it can be personalized and tailored to specific groups based on their religion, gender, or political beliefs. AI can power bots that hold personalized conversations on social media, and it can even generate fake phone calls that mimic the voice of a family member.

Spotting the signs of fake news grows ever harder as AI-generated content becomes indistinguishable from the real thing. The low resolution of many images and videos circulating online makes their authenticity still more difficult to assess. This is a serious problem in situations such as armed conflicts, where AI-generated content can have a profound political impact.

Feuerriegel argues that we are already living in an era of fake news, and the problem is only expected to worsen. The real concern is actors who exploit these tools for large-scale disinformation campaigns, such as state actors with political agendas. The Russian invasion of Ukraine, for example, has been accompanied by extensive pro-Russian propaganda.

The challenge for fact-checkers is the speed at which fake news spreads: human fact-checkers often need up to 24 hours to verify a story, by which time the misinformation has already gone viral. Platforms like Facebook and Twitter are therefore employing AI to flag fake news automatically, and discussions are underway about using watermarks to recognize and filter out AI-generated content. For this approach to be effective, however, the cooperation of the platforms is crucial.
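The watermarking idea can be made concrete with a toy sketch. In the "green list" statistical watermark described in recent research, a text generator secretly biases its word choices toward a keyed pseudo-random subset of the vocabulary; a detector then counts how many of those favored words a text contains and tests whether the count exceeds chance. The Python sketch below is a minimal, hypothetical illustration of the detector side only; the key, function names, and threshold are invented for this example, and it does not depict any platform's actual method.

```python
import hashlib
import math

# Purely illustrative: the scheme, key, and names below are hypothetical.

def is_green(prev_token: str, token: str,
             key: str = "shared-secret", fraction: float = 0.5) -> bool:
    """Pseudo-randomly assign `token` to a keyed 'green list' seeded by the
    preceding token -- a toy stand-in for the vocabulary partition used in
    statistical watermarking schemes."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] < fraction * 256

def watermark_z_score(tokens: list[str], fraction: float = 0.5) -> float:
    """z-score of the observed green-token count against chance. A watermarked
    generator favors green tokens, so a large positive z suggests AI text."""
    n = len(tokens) - 1  # number of (previous token, token) pairs
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - fraction * n) / math.sqrt(n * fraction * (1 - fraction))

if __name__ == "__main__":
    sample = "a suspicious post whose origin we would like to check".split()
    print(f"z = {watermark_z_score(sample):.2f}")  # e.g., flag if z > 4.0
```

In a deployed scheme, the partition would be computed over a model's token vocabulary and the threshold calibrated against false positives, which is why the cooperation of the platforms that run the generators is essential: without the shared key, a detector has nothing to test.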

To counter the flood of AI-generated misinformation, Feuerriegel suggests that individuals consume content more cautiously, especially on social networks. Platforms, for their part, must fulfill their responsibility by identifying the sources of information and reminding users to critically evaluate the content they encounter. In addition, media and digital literacy training is needed that covers AI-driven disinformation and is continuously updated.

On regulation, Feuerriegel acknowledges the complexity of the issue, since such measures may infringe on freedom of speech. He nonetheless believes that workable solutions can be found quickly through collaboration among politicians, researchers, and experts from a range of disciplines.

While much research remains to be done, many institutions, including Ludwig-Maximilians-Universität München, are actively studying AI-generated disinformation. Researchers from linguistics, sociology, political science, and behavioral science are jointly exploring this complex subject to better understand how people react to disinformation and to develop practical countermeasures.

As AI-generated fake news grows increasingly sophisticated, it is essential to stay informed, critically evaluate information, and foster collaboration among platforms, individuals, and policymakers to counter the spread of misinformation.