
AI read "1984" and decided to ban it.

深潮TechFlow
3 hours ago

Author: Curry, Shenchao TechFlow

Last week, a secondary school in Manchester, UK, used AI to review its library.

AI produced a list of 193 books to be removed, each with a reason. George Orwell's "1984" was prominently included, with the reason being "contains themes of torture, violence, and sexual coercion."

"1984" describes a world where the government monitors everything, rewrites history, and decides what citizens may and may not see. Now an AI has done the same thing to a school, and it may not even understand what it is saying.

The school's librarian thought the list was unfair and refused to fully implement the AI's recommendations.


The school then initiated an internal investigation against her, citing "child safety," accusing her of bringing inappropriate books into the library, and reported her to the local government. She took sick leave due to stress and eventually resigned.

Ironically, the local government's investigation concluded that she did indeed violate child safety procedures, upholding the complaint.

Caroline Roche, chair of the UK School Library Association, stated that this conclusion means she can never work in any school again.

Those who resist AI judgment lose their jobs, while those who sign off on AI judgment face no consequences.

Afterward, the school acknowledged in internal documents that all classifications and reasons were generated by AI, stating: "Although the classifications were generated by AI, we believe this classification is generally accurate."

A school handed the judgment of "what books are suitable for students" over to AI; the AI returned an answer it may not itself understand, and a human administrator then approved it without a second glance.

This issue, exposed by the UK-based free speech organization Index on Censorship, raised questions far beyond the shelves of one school:

When AI begins to decide what content is appropriate and what is dangerous, who judges whether AI's judgments are correct?

Wikipedia Closes Its Doors to AI

In the same week, another institution answered this question through action.

The school allowed AI to decide what people can read. The world's largest online encyclopedia, Wikipedia, made the opposite choice: it will not let AI decide what goes into the encyclopedia.

English Wikipedia officially passed a new policy prohibiting the use of large language models to generate or rewrite entry content. The vote was 44 in favor, 2 against.

The direct trigger was an AI account called TomWikiAssist. In early March of this year, the account autonomously created and edited multiple Wikipedia entries; the community dealt with it promptly once it was discovered.

An AI can write an article in just a few seconds, but it takes volunteers several hours to verify the facts, sources, and wording of an AI-generated entry for accuracy.


Wikipedia's editing community consists of a limited number of people. If AI can produce content endlessly, human editors will not be able to keep up.

This is not even the most troublesome part. Wikipedia is one of the most important sources of training data for AI models globally. AI learns from Wikipedia and then uses what it has learned to write new Wikipedia entries, which are then fed back into the next generation of AI models for further training.

Once erroneous information generated by AI seeps in, it will continuously amplify in this cycle, becoming a nesting doll of AI poisoning:

AI contaminates training data, and training data further contaminates AI.

However, Wikipedia's policy also leaves two loopholes for AI: editors can use AI to polish their own writings or assist with translations. But the policy explicitly warns that AI may "exceed your requests, change the meaning of the text, and misalign it with the cited sources."

Human writers make mistakes, and Wikipedia has relied on community collaboration to correct them for over twenty years. AI makes mistakes differently; what it fabricates may look more real than reality and can be produced in bulk.

A school trusted AI's judgment and ended up losing a librarian. Wikipedia chose not to trust AI, shutting the door directly.

But what if even the creators of AI start to lose faith in it?

The Creators of AI Are Now Afraid

While external institutions are closing their doors to AI, AI companies are also pulling back.

In the same week, OpenAI indefinitely shelved ChatGPT's "adult mode." This feature was originally planned to launch last December, allowing age-verified adult users to engage in erotic conversations with ChatGPT.

CEO Sam Altman personally previewed this in October last year, stating that it would treat adult users "like adults."

However, after being delayed three times, it was completely scrapped.

According to the Financial Times, OpenAI's internal health advisory board unanimously opposed this feature. The advisors' concerns were very specific: users could develop unhealthy emotional dependencies on AI, and minors would inevitably find ways to bypass age verification.

One advisor's statement was even more direct: without significant improvements, this could become a "sexy suicide coach."

The error rate of the age verification system exceeds 10%. Against ChatGPT's 800 million weekly active users, that 10% means tens of millions of people could be misclassified.

The adult mode was not the only product axed this month. AI video tool Sora and ChatGPT's built-in instant checkout feature were also taken offline simultaneously. Altman stated the company needed to focus on core business and cut "side tasks."

Yet OpenAI is simultaneously preparing for an IPO.

A company racing to go public while rapidly cutting potentially controversial features: that move is perhaps more accurately described as avoiding risk than as focusing.

Just five months ago, Altman was still saying he wanted to treat users like adults; five months later, his company realized it had not yet figured out what content users should be allowed to access.

If even the creators of AI do not have answers, then who should draw that line?

The Unbridgeable Speed Gap

If you consider these three events together, it’s easy to reach a core conclusion:

The speed at which AI produces content and the speed at which humans review content are no longer on the same scale.

The decision made by the school in Manchester becomes quite understandable in this context. How long would it take the librarian to read all 193 books before making a judgment? Running it through AI takes just minutes.

The principal chose the few-minute option. Do you really think he trusts AI's judgment? More likely, he simply didn't want to spend the time.

This is an economic issue. The cost of generation approaches zero, while the cost of review is borne entirely by humans.

So every institution affected by AI is forced to respond in the bluntest way possible: Wikipedia bans outright, OpenAI cuts entire product lines. None of these solutions came from careful deliberation; they came from having no time to deliberate, block off now and address later.

"Block off and address later" is becoming the norm.

AI's capabilities iterate every few months, while discussion of what content AI should handle lacks even a rudimentary international framework. Each institution draws the line only in its own yard; the lines contradict one another, and no one coordinates.

The speed of AI is still accelerating. The number of reviewers won’t increase. This gap will only widen until one day something far more severe than banning "1984" happens.

By then, it may be too late to draw that line.

