
OpenAI exposes the "Polaris" project, and the "Mass Unemployment in 2028" may indeed be coming.

深潮TechFlow
1 hour ago
An AI intern capable of conducting independent research will be created in September; this time it may not just be an empty promise.

Not long ago, an article predicting the year 2028 went viral online. The article pointed out that due to advances in AI, there will be a significant wave of unemployment in 2028, with many people's jobs being replaced by AI.

Upon its release, and coupled with the situation in the Middle East, the article hit the US stock market hard that day. The episode was surreal: the piece appeared to be written by AI, yet it struck a nerve about "AI causing mass unemployment," and that was enough to move markets.

Recently, a piece of news disclosed by OpenAI made people realize that the "mass unemployment in 2028" might not be entirely baseless.

In an exclusive interview with MIT Technology Review, OpenAI's chief scientist Jakub Pachocki said something chilling — their "North Star" is to build a fully automated multi-agent research system before 2028.

In September this year, the first phase goal will be achieved:

A "self-sufficient AI research intern" capable of independently addressing specific research questions.

This is not a placeholder in the product roadmap, nor is it just a casual boast from Altman on X. This is OpenAI betting its entire company's resources in one direction.

The Significance of the "North Star"

When tech companies refer to their "North Star," it usually means two things: first, that other matters must yield; and second, that there is internal consensus within the company.

Judging from OpenAI's actions over the past two weeks, this judgment largely holds.

On March 19, OpenAI announced the acquisition of the developer tool company Astral, with the team merging into the Codex department; at the same time, the company announced the integration of ChatGPT, Codex, and the browser into a unified desktop "super application," led by application head Fidji Simo, with Greg Brockman assisting in advancing organizational reform.

The era of fragmented products is coming to an end, and OpenAI is pushing all chips in one direction.

And that direction aims at "letting AI do research on its own."

Pachocki's logic is actually quite clear: reasoning models, agents, and explainability — three technical routes that once advanced separately inside OpenAI — are now being unified under one goal: an AI researcher capable of operating autonomously in data centers for long stretches. Once this is accomplished, he said, "this is the thing we truly rely on."

Former OpenAI researcher Andrej Karpathy's perspective is even more direct — "All cutting-edge laboratories for large language models will do this; this is the ultimate boss battle." He added a sentence worth pondering: "Scaling will certainly be more complex, but doing this is merely an engineering issue; it will succeed."

Notice his wording: it's not 'can we,' it's 'when will we.'

Anthropic in Action

On the same day OpenAI announced its "North Star," Anthropic quietly launched Claude Code Channels — a feature that allows developers to interact directly with ongoing Claude Code sessions via Telegram and Discord.

Taken in isolation this seems minor, but placed in the broader trend it becomes significant.

Anthropic's logic is: rather than telling developers what AI might do in the future, it is better to embed it into the developers' real workflows now. Telegram and Discord are not academic papers; they are places where programmers work daily. Allowing Claude Code to thrive here means transforming it from a "tool" to a "colleague."

The community's reactions confirm this judgment.

Some users directly stated: "Claude has effectively killed OpenClaw with this update; you no longer need to buy a Mac Mini." The implication behind this statement is that Anthropic's infrastructure improvements have made open-source alternatives lose their cost advantages.

Furthermore, looking at a more macro timeline, the iteration speed of Anthropic on Claude Code is indeed astonishing. In just a few weeks, it has integrated text processing, thousands of MCP skill sets, and autonomous bug-fixing capabilities. While OpenAI is strengthening Codex through its acquisition of Astral, Anthropic has already pushed Claude Code directly into the developers' chat window.

Both companies are racing towards the same destination but through entirely different routes — OpenAI is working on "fully automated researchers for 2028," while Anthropic is developing "agent tools available for use today."

The Real Challenge

However, there is a detail that cannot be overlooked.

Pachocki did something quite rare during the interview — he proactively addressed the challenges of safety and control, and spoke quite candidly.

He mentioned that their idea is to use other large language models to "monitor the notes of the AI researcher," capturing poor behavior before it becomes problematic. But he immediately admitted: "Our understanding of large language models is not sufficient for us to fully control them; to genuinely say 'this problem has been solved,' it will take a long time."

When a company's chief scientist says "we do not have complete control" while announcing plans to deliver a fully automated AI research system by 2028, that deserves serious reflection from everyone.

This is not about pessimism but rather about understanding the true difficulties of this matter. The fact that Pachocki can articulate this statement indicates that OpenAI internally has a clear awareness of the challenges on this path.

At the technical level, there is a "Karpathy cycle" summarized by researchers that is worth referencing — a successful automated AI research framework requires three elements: an agent with the authority to modify individual files, a single objective metric that can be objectively tested, and fixed experimental time limits.
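The three elements above can be sketched as a minimal loop. This is a hypothetical illustration only, not OpenAI's or Karpathy's actual code; `propose_edit` and `evaluate` are stand-ins for the agent and the objective metric:

```python
import time

# Hypothetical sketch of the "Karpathy cycle" described above:
# 1) an agent with authority to modify a single file,
# 2) a single objectively testable metric,
# 3) a fixed experimental time limit.

def propose_edit(source: str) -> str:
    """Stand-in for the agent: returns a modified version of the file."""
    return source + "\n# tweak"

def evaluate(source: str) -> float:
    """Stand-in for the single objective metric (higher is better)."""
    return float(len(source))  # placeholder scoring

def research_loop(initial: str, time_budget_s: float) -> tuple[str, float]:
    """Run edit/evaluate cycles until the fixed time budget expires."""
    best, best_score = initial, evaluate(initial)
    deadline = time.monotonic() + time_budget_s  # fixed time limit
    while time.monotonic() < deadline:
        candidate = propose_edit(best)
        score = evaluate(candidate)
        if score > best_score:  # keep only objectively better edits
            best, best_score = candidate, score
    return best, best_score
```

The point of the framework is that each element constrains the agent: a narrow editing scope, an unambiguous score, and a hard stopping condition make an overnight autonomous run auditable.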

This framework has begun to produce results in real environments. Shopify CEO Tobias Lütke publicly shared a case: he let an autonomous research agent run overnight, and by the next morning it had conducted 37 experiments and improved model performance by 19%.

From concept to execution, this path is shorter than imagined.

The Future with a $20,000 Subscription Fee

The "North Star" project represents not only a technological advantage but also a decisive factor in business.

A set of figures from Paul Roetzer is worth a second look: he cited OpenAI's internal projections that by 2029, the agent business alone could generate $29 billion in annual revenue, including a $2,000 monthly fee for "knowledge agents" and a $20,000 monthly fee for "research agents."

These figures indicate that "AI researchers" have never been just a technical goal; they represent a revenue roadmap.

A $20,000 monthly fee for a "research agent" works out to $240,000 a year — a fraction of what a senior researcher costs — yet the agent can work 24/7 and run dozens of experiments in parallel. This does not replace a specific individual; it redefines what "research productivity" means.

This reminds me of Karpathy's statement — "this is the ultimate boss battle." The boss he refers to is not a competitor but the ceiling of AI capabilities itself.

Once AI can independently advance scientific research, the speed of AI progress will no longer be limited by the number of human researchers and working hours.

Pachocki expressed the same sentiment, just more measured — "once the system can operate autonomously in data centers for extended periods, this is what we truly rely on."

The AI research intern by September 2026 is not the end, but an important starting point.

Disclaimer: This article represents the author's personal views only and does not reflect the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes rights, please send proof of rights and identity to support@aicoin.com, and platform staff will investigate.
