A study has found that AI chatbots designed to sway voters’ choices can still influence participants a month after their conversation.
(Washington, AFP) Research shows that partisan artificial intelligence (AI) chatbots can shift voters' political views: even brief conversations proved persuasive, whether the bots' arguments rested on true or false claims.
In experiments conducted ahead of the 2024 U.S. presidential election, generative AI models such as OpenAI's GPT-4o and China's DeepSeek shifted supporters of Republican candidate Donald Trump toward Democratic candidate Kamala Harris by nearly four points on a 100-point scale.
In 2025 opinion polls conducted in Canada and Poland, opposition supporters changed their views by as much as 10 points after chatting with a bot designed to persuade voters.
The findings were published in the journals Science and Nature. David Rand, a lead author of the research, noted that these shifts are large enough to affect the voting decisions of a significant share of voters. Participants were informed beforehand that they were interacting with an AI.
Rand said that when respondents were asked how they would vote if an election were held that day, about 10% of those in Canada and Poland changed their voting intentions, compared with roughly 1 in 25 in the U.S. He cautioned, however, that "voting intentions do not always align with actual voting behavior."
Follow-up surveys showed that a month later, the chatbots' persuasive effect was still measurable in about half of respondents in the UK and about one-third in the U.S. "In social science, any effect that lasts a month is relatively rare," Rand noted.
He also observed that although most of the fact-checked statements used by the chatbots were accurate, "AI supporting right-wing candidates made more inaccurate claims," likely owing to biases in training data; numerous studies have found that "right-wing content online tends to be less accurate."