Group Asks Federal Agency to Halt Use of Elon Musk's Grok AI Amid Racism Concerns

Decrypt

Public Citizen, a nonprofit consumer advocacy organization, escalated its warnings about Elon Musk’s Grok AI on Friday after publishing new evidence showing the chatbot cited neo-Nazi and white-nationalist websites as credible sources.


The group said the behavior should disqualify Grok from any federal use, and renewed calls for the U.S. Office of Management and Budget to intervene after months without a response.


Citing a recent study by Cornell University, Public Citizen said that Grokipedia, the new AI-powered Wikipedia alternative launched by Musk in October, repeatedly surfaced extremist domains, including Stormfront, reinforcing earlier concerns that emerged after the model referred to itself as “MechaHitler” on Musk’s platform X in July.


The findings underscored what advocates described as a pattern of racist, antisemitic, and conspiratorial behavior.


“Grok has shown a repeated history of these meltdowns, whether it’s an antisemitic meltdown or a racist meltdown, a meltdown that is fueled with conspiracy theories,” Public Citizen’s big-tech accountability advocate J.B. Branch told Decrypt.


The new warning followed letters that Public Citizen and 24 other civil rights, digital-rights, environmental, and consumer-protection groups sent to the OMB in August and October, urging the agency to suspend Grok’s availability to federal departments through the General Services Administration, which manages federal property and procurement. The group said it received no reply to either letter.


Despite repeated incidents, Grok’s reach inside government has grown over the past year. In July, xAI secured a $200 million Pentagon contract, and the General Services Administration later made the model available across federal agencies, alongside Gemini, Meta AI, ChatGPT, and Claude. The addition came after U.S. President Donald Trump ordered a ban on "woke AI" in federal contracts.


Advocates said those moves heightened the need for scrutiny, particularly as questions mounted about Grok’s training data and reliability.


“Grok was initially limited to the Department of Defense, which was already alarming given how much sensitive data the department holds,” Branch said. “Expanding it to the rest of the federal government raised an even bigger alarm.”


Branch said Grok’s behavior stemmed in part from its training data and the design choices made within Musk’s companies.


“There’s a noticeable quality gap between Grok and other language models, and part of that comes from its training data, which includes X,” he said. “Musk has said he wanted Grok to be an anti-woke alternative, and that shows up in the vitriolic outputs.”


Branch also raised concerns about the model’s potential use in evaluating federal applications or interacting with sensitive personal records.


“There’s a values disconnect between what America stands for and the type of things that Grok is saying,” he said. “If you’re a Jewish individual and you’re applying for a federal loan, do you want an antisemitic chatbot potentially considering your application? Of course not.”


Branch said the Grok case exposed gaps in federal oversight of emerging AI systems, adding that government officials could remove Grok from the General Services Administration’s contract schedule at any time if they chose to.


“If they’re able to deploy National Guard troops throughout the country at a moment’s notice, they can certainly take down an API-functioning chatbot in a day,” he said.


xAI did not respond to a request for comment from Decrypt.

