121 new AI safety regulations, who will they hit?

智者解密
3 hours ago

On March 26, 2026, at 8:00 AM UTC+8, the Technology Department of the Ministry of Industry and Information Technology opened a 30-day public consultation on its seventh batch of industry standard projects, soliciting opinions on 121 AI safety-related standards, including the "Artificial Intelligence Security Governance Model Context Protocol Application Security Requirements." The drafting bodies come squarely from the official governance system, and terms such as "Model Context Protocol Application Security Requirements" appear in a draft industry standard for the first time, a sign that regulatory attention is reaching down into the AI protocol layer and the details of model invocation. With AI rapidly permeating DeFi and Web3, the tension between accelerating regulation and fast-moving frontier applications is growing: standard texts are taking shape on one side, while on the other, on-chain AI applications and capital allocations must begin rehearsing for compliance pressure. This article follows the consultation list from the policy texts out to the front lines of on-chain AI applications and the funding game.

121 Safety Standards Announced: Regulatory Framework Begins to Anchor at the Protocol Level

This release from the Ministry of Industry and Information Technology is its seventh batch of industry standard project announcements, running from March 26 to April 24, 2026, for a consultation period of exactly 30 days, with feedback accepted only by email. The batch contains 121 standards in total, which itself reflects AI safety governance moving from scattered clauses toward systematic engineering, covering dimensions such as safety governance, application requirements, and industry norms. Although the briefing did not disclose a detailed list of every item, the naming alone shows that AI is no longer abstracted merely as a "system" or "service," but is broken down into finer technical levels such as protocols, contexts, and invocation paths.

Among them, the "Artificial Intelligence Security Governance Model Context Protocol Application Security Requirements" stands out. Including "Model Context Protocol" directly in the title means regulators are beginning to recognize and benchmark this class of foundational protocol: it sits between the model and the outside world, responsible for interfacing inputs, managing context, and standardizing invocation flows, essentially acting as the "traffic hub" of the AI protocol layer. TechFlow reads this as China's first systematic proposal of AI protocol-layer security standards, a shift from "managing platforms and applications" to "managing protocols and interfaces," which sets the direction for all DeFi and Web3 AI infrastructure subsequently built on it.

Procedurally, the combination of a 30-day consultation period and email-only feedback marks the standards' move from internal planning to the threshold of public negotiation. On one hand, this is still only a draft for comment, so specific clauses remain open to adjustment; on the other, locking the feedback window to one month signals the regulators' attitude toward pace clearly enough: there is not much time, and whoever misses this round will face compliance questions directly once the standards are finalized.

Model Context Protocol Named: On-Chain AI Data Gateways Are Tightening

To understand this standard's potential impact on the on-chain world, first unpack the term "Model Context Protocol" in plain language. Simply put, it governs three key links in the practical use of a model: who may feed data to the model, under what rules that data is retained and reinforced as context, and when, through which interface, and with what permissions the model may be invoked. To ordinary users it functions like an invisible intermediary; for AI developers and on-chain projects, it is the layer where security responsibility is most concentrated.
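The three links above can be sketched as a minimal gateway that sits between data sources, callers, and the model. This is purely illustrative: the class, field names, and behavior are assumptions for exposition, not drawn from the draft standard.

```python
from dataclasses import dataclass, field

@dataclass
class ContextGateway:
    """Hypothetical gate governing the three links: data ingestion,
    context retention, and permissioned model invocation."""
    approved_sources: set = field(default_factory=set)  # who can feed data
    approved_callers: set = field(default_factory=set)  # who can invoke
    context: list = field(default_factory=list)         # retained context

    def ingest(self, source: str, record: str) -> bool:
        """Accept a record only from an approved data source."""
        if source not in self.approved_sources:
            return False
        self.context.append((source, record))
        return True

    def invoke(self, caller: str, prompt: str) -> str:
        """Gate a model call on caller permission; a real gateway would
        forward the prompt plus context to the model here."""
        if caller not in self.approved_callers:
            raise PermissionError(f"caller {caller!r} not authorized")
        return f"answered with {len(self.context)} context records"
```

A project would instantiate one gateway per model, e.g. `ContextGateway(approved_sources={"cex_feed"}, approved_callers={"advisor"})`, so that every ingestion and invocation passes a single choke point where policy can be enforced.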

Foresight points out that once the "Model Context Protocol Application Security Requirements" are refined, the data compliance threshold for training and inference of on-chain AI models is likely to rise across the board. In the training phase, historical transaction data, on-chain interaction logs, and social media sentiment flows must strike a new balance between legality of sourcing and traceability of processing; in the inference phase, real-time calls to on-chain data or CEX market data will face scrutiny over how they are labeled, filtered, and logged. For AI DeFi protocols that depend on continuous learning and dynamic strategy adjustment, this is rebuild-level pressure on their engineering foundations.

In specific scenarios, DeFi smart advisory, on-chain risk control robots, and AI oracles may all be affected. Smart advisors need to explain whether the historical data used for their strategy training has legitimate sourcing and usage authorization; on-chain risk control robots, when monitoring clearing risks and tracking abnormal behaviors, must ensure that the data paths they retrieve and store comply with the principle of minimum necessity; AI oracles must answer whether the prices, news, and macro indicators captured off-chain form a verifiable “data history” during labeling, cleansing, and aggregation.

It must be acknowledged, however, that the standard text currently exists only at the level of titles and principles, without concrete technical parameters or determination metrics. External judgments about its impact can therefore only be a broad "range prediction" of tightening compliance boundaries, not a detailed call on whether a particular invocation frequency or data format will be outright prohibited. What on-chain AI projects can do now is map the "surface" regulators are trying to cover, rather than trying to pin down every "point" in advance.

Decisions for DeFi and Web3 Teams: Continue to Sprint Ahead or Start Catching Up

Over the past two years, domestic teams and overseas projects alike have adopted a "sprint first, catch up later" model around AI+DeFi, AI agents, and on-chain automated position management: first assemble a usable strategy engine from open-source large models or self-trained small models, then wire its operations to the chain via smart contracts; once users and TVL arrive, circle back to data compliance, access control, and auditing. The relative emptiness of regulatory texts has given project teams a gray space in which to "test the waters."

Once AI safety standards are detailed and implemented, data traceability, access control, and audit trails will become unavoidable hard indicators. Data traceability requires projects not only to know what their models have “consumed,” but also to be able to restore the sources, authorizations, and processing procedures of these data when questioned by regulators; access control means determining who can invoke models and under what conditions new contexts can be written, necessitating an architectural upgrade from “default open” to “tiered authorization”; audit trails force all key invocations and strategy adjustments to form verifiable records on-chain or off-chain, which directly challenges many currently opaque quantitative and advisory products.
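An audit trail in the sense used above can be sketched as an append-only log in which every key invocation is hash-chained to the previous record, so the history is verifiable and tampering is detectable. The record fields and class design are assumptions for illustration, not taken from any published standard.

```python
import hashlib
import json

class AuditTrail:
    """Hash-chained log of model invocations and strategy adjustments."""
    GENESIS = "0" * 64

    def __init__(self):
        self.records = []
        self._prev = self.GENESIS

    def log(self, caller: str, action: str, detail: str) -> str:
        """Append a record whose hash covers its content and the
        previous record's hash."""
        body = {"caller": caller, "action": action,
                "detail": detail, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        body["hash"] = digest
        self.records.append(body)
        self._prev = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash; any edited record breaks the chain."""
        prev = self.GENESIS
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = digest
        return True
```

The design choice is the same one blockchains make: each record commits to its predecessor, so a regulator or auditor can check the whole history by recomputing hashes rather than trusting the operator.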

How responsibility is divided will be one of the most contentious questions ahead: among authors of open-source models, community-operated nodes, and companies providing hosting and API services, who is most likely to be required to bear compliance responsibility and absorb the residual risk? In public chain ecosystems, the ideal that "everyone participates, everyone is responsible" sits awkwardly with practical regulatory logic, which tends to lock onto clear, accountable entities, naturally concentrating pressure on leading development teams and organizations providing commercial services.

More complicated still, no implementation timeline has been announced, so teams do not know when or how these standards will actually take effect. That forces a repeated weighing of two objectives: keep resources focused on product iteration and user growth and catch up on compliance later, or preemptively reserve engineering bandwidth and budget to restructure data flows, model invocation paths, and permission systems. There will be no single right answer, but whichever path is chosen, the era of ignoring the standards and treating them as "distant noise" is over.

Regulators and Exchanges Move in Sync: Funding Begins to Price the "Safety Narrative"

Interestingly, at the same time as the public announcement of this AI safety standards list, the market side is also accelerating new launches. The briefing noted that Gate listed PRL for trading and Tori Finance launched delta-neutral yield products; these moves by centralized exchanges and DeFi protocols form a "same-day, same-frame" contrast with the regulatory announcement: on one side, standard language taking shape; on the other, product innovation around AI and quantitative narratives.

New listings on exchanges and iterations of DeFi yield products inherently attract incremental funding with labels like “AI-driven,” “quantitative strategies,” and “smart hedging.” However, as expectations for AI safety standards heat up, the compliance uncertainties of these narratives are also accumulating: how models are trained behind strategies, where the data comes from, and whether the invocation process can be audited may very likely shift from just background noise in marketing language to a checklist that is scrutinized during future audits. Those who only pay lip service to “AI empowerment” in white papers and tweets, but cannot reconcile their data governance pathways in technical documents, face not regulatory benefits but risks of being prioritized for accountability.

In such an environment, funding behavior will also show subtle shifts: more capital will favor projects that reserve compliance space and have higher technical transparency, rather than those that rely purely on hype. Protocols capable of proactively providing model architecture explanations, data sourcing frameworks, and invocation logging solutions, even if their short-term yields are not the most eye-catching, will more easily attract stable funding.

A foreseeable pathway includes exchanges and leading DeFi protocols actively linking up with the emerging standard language system, designing “safety labels” and disclosure frameworks for AI-related assets. For instance, prior to listing new coins or accessing new strategies, introducing foundational information disclosure checklists: whether open-source models are used, whether data is authorized and anonymized, and whether audit capabilities for invocation exist, transforming vocabulary that originally existed only in regulatory documents into safety labels that users can perceive. This is both a defense to reduce their own compliance risks and a proactive layout for future “standardization premiums.”
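The disclosure checklist suggested above could be as simple as a structured record per listed asset. The field names and the label tiers below are assumptions for illustration; the three questions mirror the ones in the text.

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """Hypothetical pre-listing disclosure record for an AI-related asset."""
    open_source_model: bool      # whether an open-source model is used
    data_authorized: bool        # data sources authorized and anonymized
    invocation_auditable: bool   # invocation records can be audited

    def safety_label(self) -> str:
        """Collapse the checklist into a user-facing safety label.
        The model-source flag is informational only: open source is
        neither inherently safer nor riskier, so it does not affect
        the label here."""
        checks = [self.data_authorized, self.invocation_auditable]
        if all(checks):
            return "disclosed"
        if any(checks):
            return "partially disclosed"
        return "undisclosed"
```

An exchange could compute the label at listing time and surface it next to the ticker, turning vocabulary from regulatory documents into something users can actually perceive.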

Who Might Run Ahead of Regulation: From Standards to New Generation Infrastructure

From the perspective of investors and entrepreneurs, the "Model Context Protocol Application Security Requirements" is not just a constraint but the start of a new round of infrastructure opportunities. Around expectations for this standard, a batch of services can already be envisioned: data access gateways that insert a layer of compliance filtering and authorization checks between the model and its data sources; invocation auditing systems that generate a traceable record for each inference; and permission management tools that give projects multi-role, multi-dimensional invocation control. Each has the potential to become independent middleware or a SaaS service.

For AI projects deployed across multiple chains, compliance is no longer “a local issue for a certain country,” but a global structural issue requiring partitioned deployment and data classification. The same model can utilize stricter datasets and invocation rules in areas with stricter compliance requirements while retaining higher flexibility in regions with relatively loose regulations, achieving a clear balance between performance and compliance through regional and layer partitioning. This will force teams to reserve differentiated switches for different jurisdictions from the design phase, rather than patching later.
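Jurisdiction-partitioned deployment can be reduced to a policy table keyed by region, consulted before each invocation. The region names and policy fields below are illustrative assumptions; the one deliberate design choice is failing closed, i.e. unknown regions inherit the strict policy.

```python
# Hypothetical per-jurisdiction policies: stricter regions bind the same
# model to narrower datasets and tighter invocation rules.
REGION_POLICIES = {
    "strict": {
        "datasets": ["licensed_only"],
        "max_calls_per_min": 10,
        "log_every_call": True,
    },
    "permissive": {
        "datasets": ["licensed_only", "public_web"],
        "max_calls_per_min": 100,
        "log_every_call": False,
    },
}

def policy_for(region: str) -> dict:
    """Look up the policy for a region; unknown regions fall back to
    the strict policy so a deployment gap never loosens compliance."""
    return REGION_POLICIES.get(region, REGION_POLICIES["strict"])
```

Reserving these switches at design time is exactly the "differentiated switches for different jurisdictions" the paragraph describes, as opposed to patching a single global configuration later.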

Along this trend, it is likely that a specialized role will emerge—“AI security as a service”: they will not directly cater to end users but will provide standard linkage consulting, technical architecture assessment, and audit reports aimed at regulators or partners for exchanges, DeFi protocols, and Web3 projects, translating complex regulatory language into executable engineering specifications. This serves as both an extension of compliance consulting and a new branch of the security auditing industry.

It is important to note that standards should not simply be viewed as absolute negatives or positives. The real beneficiaries of this wave of AI safety normalization will be the few players who are the first to internalize regulatory language into product capabilities: they can occupy the mindset of “leading in safety and compliance” in market narratives, negotiate for higher pricing power, and gain stronger resilience during periods of frequent risk events. For most followers, standards merely establish an additional baseline; for a few pioneers, they serve as materials for building a moat.

From Consultation to Implementation: Time Is Running Out for Developers and Investors

Overall, this round of public consultation, which includes 121 AI safety standards, has released a clear signal: whether in model training, protocol invocation, or on-chain integration, the regulatory coordinates of AI and the crypto world are rapidly taking shape. Standards are no longer limited to macro slogans but are beginning to anchor down to details like model context protocols, indicating that future disputes will increasingly occur within engineering decisions such as “how to invoke” and “what data to invoke.”

It must also be emphasized that everything is still at the consultation stage. From March 26 to April 24, 2026, this 30-day period is a limited window for Web3 and DeFi teams to have a voice and reserve compliance space: they can participate in text refinement via email feedback and can match against their own potential weak points internally. Once the standards are set, adjusting the architecture often means performing surgery on existing businesses.

For developers, current actionable steps include: systematically mapping data flows and model invocation paths, marking the authorization status and processing methods of each data source; assessing how many changes are needed to introduce access control and invocation auditing under the current architecture; paying close attention to every step of the upcoming standard drafts to avoid being “caught off guard” on key clauses. For investors, beyond price and TVL, it's time to incorporate the projects' data governance, model transparency, and compliance reserves into the due diligence checklist, rather than only listening to the attractive stories of “AI” and “quantitative.”
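The first developer step above, mapping data flows and marking each source's authorization status, can start as nothing more than a structured inventory plus a weak-point scan. The entries and field names here are illustrative assumptions.

```python
# Hypothetical inventory of every data source feeding a model, recording
# authorization status and processing method so the flow can be
# reconstructed if regulators ask.
DATA_FLOW_MAP = [
    {"source": "onchain_tx_logs",  "authorized": True,  "processing": "aggregated"},
    {"source": "cex_market_feed",  "authorized": True,  "processing": "labeled"},
    {"source": "social_sentiment", "authorized": False, "processing": "raw"},
]

def weak_points(flow_map: list) -> list:
    """Flag sources that lack authorization or are ingested raw,
    i.e. the likely weak points against a refined standard."""
    return [e["source"] for e in flow_map
            if not e["authorized"] or e["processing"] == "raw"]
```

Running the scan against the inventory surfaces exactly the sources a team should remediate or document before the consultation window closes.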

Looking ahead to the next one or two years, the mainline narrative of AI and crypto is likely to shift from simply “telling stories” to a new cycle of “talking standards” and “talking compliance engineering.” Those who manage to survive this narrative transition and embed standards as part of their capabilities will be more qualified to participate in the next round of profit distribution from AI and crypto integration.
