In a world where digital content can be easily manipulated, the 2024 US presidential election faces an unprecedented challenge: AI-generated deepfakes. According to the Talkdesk AI and the Election Survey, 55% of Americans are worried about the impact of election-related deepfakes on democracy. These concerns underscore the growing unease about the role of AI in shaping public perception and influencing electoral outcomes.
The survey reveals that 55% of respondents believe AI content recommendations exacerbate political polarization. Furthermore, 51% would lose trust in the political system if AI deepfakes were to influence the election, with 1 in 10 saying they would never vote again, a figure that rises to 18% among Gen Z voters.
“AI has been a hot topic across industries for several years. Many businesses have implemented AI-powered tools to achieve greater efficiency and boost the customer experience to new heights. However, AI remains a foreign concept to most consumers, and the rapid rise of generative AI has only muddied the waters further with tools that are easy to access and use but hard to regulate. Bad actors use AI to lure unsuspecting individuals and businesses into scams, and the impact on American democracy in the 2024 presidential election could be significant if we don’t put targeted, proactive mitigation measures in place today,” said William Welch, President and Chief Operating Officer at Talkdesk.
Going beyond elections
The research indicates that the ramifications of AI deepfakes extend beyond politics, affecting everyday life and brand-consumer relationships. More than a third of Americans (35%) now question all online content due to election-related deepfakes, and 68% are more cautious about how brands use AI. This skepticism highlights the broader implications of AI misuse in eroding public trust.
A looming threat
The potential for AI to create a contentious environment before and after the election is a major concern among Americans. According to the research, 63% of Americans worry that AI will foster a more divisive political climate leading up to the election. In addition, 58% fear that AI-generated content could be used to discredit election results once they are announced. As many as 62% of respondents are also concerned that foreign governments might use AI deepfakes to influence the outcome of the election in their favor.
The need for regulation
Voters believe that both candidates and lawmakers should take proactive measures to combat harmful AI content. The survey revealed that 55% of Americans think the government should increase regulation to curb the spread of deepfakes. Moreover, 51% believe it should be illegal to create deepfakes. Half of the respondents indicated that they would evaluate political candidates based on their efforts to address AI-generated misinformation.
Consumer trust
The survey also highlights the impact of AI deepfakes on consumer behavior and trust. A notable 21% of voters expect that their vote could be influenced by a deepfake, potentially without their knowledge. Additionally, 31% of voters find it difficult to reliably distinguish between real and AI-generated election content. This prevalence of deepfakes has led 35% of Americans to become more skeptical of all online content.
Generation gaps
Generational differences reveal varying levels of trust and verification practices among voters. For instance, 15% of Gen Z voters would be most deterred by false claims about cultural topics, such as a candidate’s stance on banning popular music. Meanwhile, 11% of voters admitted that they don’t always verify the authenticity of news before sharing it, and Gen Z is the generation most likely to share unverified content, at 52%.
Another cautionary example of AI misuse is attorneys’ reliance on ChatGPT for legal research, which has produced filings citing fabricated cases and led a Texas judge to require attorneys to certify that their filings either are not AI-generated or have been reviewed by a human.