AI Political Chatbots Can Sway Voters, New Research Finds

Decrypt

New research from Cornell University and the UK AI Security Institute has found that widely used AI systems could shift voter preferences in controlled election settings by up to 15%.


Published in Science and Nature, the findings arrive as governments and researchers examine how AI might influence upcoming election cycles, and as developers seek to purge bias from their consumer-facing models.


“There is great public concern about the potential use of generative artificial intelligence for political persuasion and the resulting impacts on elections and democracy,” the researchers wrote. “We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes.”


The study in Nature tested nearly 6,000 participants in the U.S., Canada, and Poland. Participants rated a political candidate, spoke with a chatbot that supported that candidate, and rated the candidate again.
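
As a rough illustration of that pre/post design, the sketch below computes the average attitude shift from a handful of made-up ratings. The 0-100 rating scale, participant IDs, and values are assumptions for illustration only, not the study's data or code.

```python
# Hypothetical sketch of the pre/post persuasion measurement described above.
# Each participant rates a candidate (assumed 0-100 scale) before and after
# chatting with a candidate-aligned bot; the persuasion effect is the mean
# post-minus-pre shift.
from statistics import mean

# (participant_id, pre_rating, post_rating) -- illustrative values only
ratings = [
    ("p1", 40, 52),
    ("p2", 70, 74),
    ("p3", 25, 41),
]

shifts = [post - pre for _, pre, post in ratings]
print(f"mean attitude shift: {mean(shifts):+.1f} points")
```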


In the U.S. portion of the study, which involved 2,300 people ahead of the 2024 presidential election, the chatbot had a reinforcing effect when it aligned with a participant’s stated preference. The largest shifts occurred when the chatbot supported a candidate the participant had initially opposed. Researchers reported similar results in Canada and Poland.


The study also found that policy-focused messages produced stronger persuasion effects than personality-based messages.


Accuracy varied across conversations, and chatbots supporting right-leaning candidates delivered more inaccurate statements than those backing left-leaning candidates.


“These findings carry the uncomfortable implication that political persuasion by AI can exploit imbalances in what the models know, spreading uneven inaccuracies even under explicit instructions to remain truthful,” the researchers said.


A separate study in Science examined why persuasion occurred. That work tested 19 language models with 76,977 adults in the United Kingdom across more than 700 political issues.


“There are widespread fears that conversational artificial intelligence could soon exert unprecedented influence over human beliefs,” the researchers wrote.


They found that prompting techniques had a greater effect on persuasion than model size. Prompts encouraging models to introduce new information increased persuasion but reduced accuracy.


“The prompt encouraging LLMs to provide new information was the most successful at persuading people,” the researchers wrote.
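
To make that manipulation concrete, here is a minimal sketch of how such prompt conditions might be assembled for a chat model. The condition names, prompt wording, and the build_messages helper are hypothetical stand-ins, not the study's actual materials.

```python
# Two hypothetical prompt conditions: a baseline, and one that pushes the
# model to introduce new information -- the style of prompt the Science
# study found most persuasive (and least accurate).
PROMPT_CONDITIONS = {
    "baseline": "Discuss the political issue with the user.",
    "new_information": (
        "Discuss the political issue with the user. Introduce specific "
        "facts and figures the user has not mentioned."
    ),
}

def build_messages(condition: str, user_question: str) -> list[dict]:
    """Assemble one chat transcript for a given experimental condition."""
    return [
        {"role": "system", "content": PROMPT_CONDITIONS[condition]},
        {"role": "user", "content": user_question},
    ]

# Example: the same question under each condition.
for name in PROMPT_CONDITIONS:
    print(name, build_messages(name, "Should the UK raise fuel taxes?"))
```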


Both studies were published as analysts and policy think tanks evaluate how voters view the idea of AI in government roles.


A recent survey by the Heartland Institute and Rasmussen Reports found that younger conservatives showed more willingness than liberals to give AI authority over major government decisions. Respondents aged 18 to 39 were asked whether an AI system should help guide public policy, interpret constitutional rights, or command major militaries. Conservatives expressed the highest levels of support.


Donald Kendal, director of the Glenn C. Haskins Emerging Issues Center at the Heartland Institute, said voters often misjudge the neutrality of large language models.


“One of the things I try to drive home is dispelling this illusion that artificial intelligence is unbiased. It is very clearly biased, and some of that is passive,” Kendal told Decrypt, adding that trust in these systems could be misplaced when corporate training decisions shaped their behavior.


“These are big Silicon Valley corporations building these models, and we have seen from tech censorship controversies in recent years that some companies were not shy about pressing their thumbs on the scale in terms of what content is distributed across their platforms,” he said. “If that same concept is happening in large language models, then we are getting a biased model.”

