AI Political Chatbots Can Sway Voters, New Research Finds

Decrypt
3 months ago

New research from Cornell University and the UK AI Security Institute has found that widely used AI systems could shift voter preferences in controlled election settings by up to 15%.


Published in Science and Nature, the findings emerge as governments and researchers examine how AI might influence upcoming election cycles, while developers seek to purge bias from their consumer-facing models.


“There is great public concern about the potential use of generative artificial intelligence for political persuasion and the resulting impacts on elections and democracy,” the researchers wrote. “We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes.”


The study in Nature tested nearly 6,000 participants in the U.S., Canada, and Poland. Participants rated a political candidate, spoke with a chatbot that supported that candidate, and rated the candidate again.


In the U.S. portion of the study, which involved 2,300 people ahead of the 2024 presidential election, the chatbot had a reinforcing effect when it aligned with a participant’s stated preference. Larger shifts occurred when the chatbot supported a candidate the participant had opposed. Researchers reported similar results in Canada and Poland.


The study also found that policy-focused messages produced stronger persuasion effects than personality-based messages.


Accuracy varied across conversations, and chatbots supporting right-leaning candidates delivered more inaccurate statements than those backing left-leaning candidates.


“These findings carry the uncomfortable implication that political persuasion by AI can exploit imbalances in what the models know, spreading uneven inaccuracies even under explicit instructions to remain truthful,” the researchers said.


A separate study in Science examined why persuasion occurred. That work tested 19 language models with 76,977 adults in the United Kingdom across more than 700 political issues.


“There are widespread fears that conversational artificial intelligence could soon exert unprecedented influence over human beliefs,” the researchers wrote.


They found that prompting techniques had a greater effect on persuasion than model size. Prompts encouraging models to introduce new information increased persuasion but reduced accuracy.


“The prompt encouraging LLMs to provide new information was the most successful at persuading people,” the researchers wrote.


Both studies were published as analysts and policy think tanks evaluate how voters view the idea of AI in government roles.


A recent survey by the Heartland Institute and Rasmussen Reports found that younger conservatives showed more willingness than liberals to give AI authority over major government decisions. Respondents aged 18 to 39 were asked whether an AI system should help guide public policy, interpret constitutional rights, or command major militaries. Conservatives expressed the highest levels of support.


Donald Kendal, director of the Glenn C. Haskins Emerging Issues Center at the Heartland Institute, said that voters often misjudged the neutrality of large language models.


“One of the things I try to drive home is dispelling this illusion that artificial intelligence is unbiased. It is very clearly biased, and some of that is passive,” Kendal told Decrypt, adding that trust in these systems could be misplaced when corporate training decisions shaped their behavior.


“These are big Silicon Valley corporations building these models, and we have seen from tech censorship controversies in recent years that some companies were not shy about pressing their thumbs on the scale in terms of what content is distributed across their platforms,” he said. “If that same concept is happening in large language models, then we are getting a biased model.”

