Charts
DataOn-chain
VIP
Market Cap
API
Rankings
CoinOSNew
CoinClaw🦞
Language
  • 简体中文
  • 繁体中文
  • English
Leader in global market data applications, committed to providing valuable information more efficiently.

Features

  • Real-time Data
  • Special Features
  • AI Grid

Services

  • News
  • Open Data(API)
  • Institutional Services

Downloads

  • Desktop
  • Android
  • iOS

Contact Us

  • Chat Room
  • Business Email
  • Official Email
  • Official Verification

Join Community

  • Telegram
  • Twitter
  • Discord

© Copyright 2013-2026. All rights reserved.

简体繁體English
|Legacy

SlowMist × Bitget AI Security Report: Is it really safe to hand over money to AI agents like "Lobster"?

CN
深潮TechFlow
Follow
3 hours ago
AI summarizes in 5 seconds.
This report systematically analyzes the security issues of AI Agents in multiple scenarios from the perspectives of security research and trading platform practices.

Written by: SlowMist and Bitget

I. Background

With the rapid development of large model technology, AI Agents are gradually evolving from simple intelligent assistants to automated systems capable of performing tasks independently. In the Web3 ecosystem, this change is particularly evident. More and more users are trying to involve AI Agents in market analysis, strategy generation, and automated trading, turning the concept of a "24/7 automated trading assistant" into reality. With the launch of several AI Skills by Binance and OKX, and Bitget's launch of the Skills Resource Station Agent Hub and the no-install Lobster GetClaw, Agents can directly connect to trading platform APIs, on-chain data, and market analysis tools, thus taking on trading decisions and execution tasks that would normally require human involvement.

Compared to traditional automated scripts, AI Agents have stronger autonomous decision-making abilities and more complex system interaction capabilities. They can access market data, call trading APIs, manage account assets, and even extend the functional ecosystem through plugins or Skills. This enhancement in capability has greatly lowered the barrier to using automated trading, allowing more ordinary users to start engaging with and utilizing automated trading tools.

However, the expansion of capabilities also means an increase in attack surfaces.

In traditional trading scenarios, security risks are often concentrated on issues such as account credentials, API Key leakage, or phishing attacks. However, in the architecture of AI Agents, new risks are emerging. For example, prompt injection may affect the decision-making logic of Agents, malicious plugins or Skills could become new entry points for supply chain attacks, and improperly configured runtime environments could lead to the misuse of sensitive data or API permissions. Once these issues are combined with automated trading systems, the potential impact may not only include information leaks but could also directly result in the loss of real assets.

At the same time, as more and more users integrate AI Agents into their trading accounts, attackers are rapidly adapting to this change. New types of scams targeting Agent users, malicious plugin poisoning, and API Key abuse are gradually becoming new security threats. In the Web3 scenario, asset operations often have high value and irreversibility; once an automated system is misused or misled, the risk impact may be further amplified.

In light of this background, SlowMist and Bitget have jointly written this report, systematically analyzing the security issues of AI Agents in multiple scenarios from the perspectives of security research and trading platform practices. We hope this report can provide some security references for users, developers, and platforms, helping to promote a more robust development of the AI Agent ecosystem between security and innovation.

II. Real Security Threats of AI Agents | SlowMist

The emergence of AI Agents has shifted software systems from "human-led operations" to "model-involved decision-making and execution." This architectural change significantly enhances automation capabilities while also expanding attack surfaces. From the current technical structure perspective, a typical AI Agent system usually consists of multiple components: user interaction layer, application logic layer, model layer, tool invocation layer (Tools / Skills), memory system (Memory), and underlying execution environment. Attackers often do not target a single module but attempt to influence the control of the Agent's behavior through multiple layers.

1. Input Manipulation and Prompt Injection Attacks

In the AI Agent architecture, user inputs and external data are usually directly incorporated into the model context, which makes prompt injection an important attack method. Attackers can construct specific commands to induce Agents to perform operations that should not be triggered. For instance, in some cases, it is possible to induce an Agent to generate and execute high-risk system commands just through chat instructions.

A more complex attack method is indirect injection, where attackers hide malicious instructions in webpage content, document descriptions, or code comments. When the Agent reads this content during task execution, it may mistakenly view it as legitimate commands. For instance, embedding malicious commands in plugin documentation, README files, or Markdown files can lead to the Agent executing attack code while initializing the environment or installing dependencies.

The characteristic of this attack mode is that it often does not rely on traditional vulnerabilities but rather utilizes the model's trust mechanism toward contextual information to influence its behavior logic.

2. Supply Chain Poisoning in Skills / Plugins Ecosystem

In the current AI Agent ecosystem, the plugin and skill system (Skills / MCP / Tools) is an important way to extend Agent capabilities. However, this plugin ecosystem is also becoming a new entry point for supply chain attacks.

SlowMist discovered through monitoring OpenClaw's official plugin center ClawHub that as the number of developers has increased, some malicious Skills have begun to infiltrate. After merging analyses of over 400 malicious Skills' IOCs, SlowMist found that numerous samples pointed to a small number of fixed domain names or multiple random paths under the same IP, displaying obvious resource reuse characteristics, which is more like a gang-like and bulk attack behavior.

In OpenClaw's Skill system, the core file is usually SKILL.md. Unlike traditional code, these Markdown files often serve the role of "installation instructions" and "initialization entry." However, in the Agent ecosystem, they are often directly copied and executed by users, forming a complete execution chain. Attackers only need to disguise malicious commands as dependency installation steps, for example by using curl | bash or using Base64 encoding to hide real instructions, thus inducing users to execute malicious scripts.

In actual samples, some Skills adopt a typical "two-stage loading" strategy: the first-stage script is responsible only for downloading and executing the second-stage payload, thereby reducing the success rate of static detection. Taking a highly downloaded "X (Twitter) Trends" Skill as an example, its SKILL.md contains a hidden Base64 encoded command.

After decoding, it can be revealed that its essence is to download and execute a remote script:

The second-stage program masquerades as a system pop-up to obtain the user's password and collects local information, desktop documents, and files in the download directory, ultimately packaging and uploading to a server controlled by the attacker.

The core advantage of this attack method lies in the fact that the Skill shell itself can remain relatively stable, while attackers only need to change the remote payload to continuously update the attack logic.

3. Risks in Agent Decision-making and Task Orchestration Layer

In the application logic layer of AI Agents, tasks are usually broken down into multiple execution steps by the model. If attackers can influence this breakdown process, it may lead the Agent to exhibit abnormal behavior while executing legitimate tasks.

For example, in business processes involving multi-step operations (such as automated deployments or on-chain transactions), attackers can replace target addresses or execute additional operations by tampering with key parameters or intervening in logical judgments during the execution process.

In previous security audit cases conducted by SlowMist, malicious prompts were returned to the MCP to pollute the context, thus inducing the Agent to call wallet plugins to execute on-chain transfers.

The characteristic of such attacks is that errors do not arise from model-generated code but rather from tampering with the task orchestration logic.

4. Privacy and Sensitive Information Leakage in IDE / CLI Environments

After AI Agents have been widely used for development assistance and automated operations, many Agents have started running in IDE, CLI, or local development environments. Such environments typically contain a large amount of sensitive information, such as .env configuration files, API Tokens, cloud service credentials, private key files, and various access keys. If an Agent can read these directories or indexed project files during task execution, it may inadvertently incorporate sensitive information into the model context.

In some automated development processes, Agents may read configuration files in the project directory during debugging, log analysis, or dependency installation processes. If there is a lack of clear ignore policies or access controls, this information may be recorded in logs, sent to remote model APIs, or even exfiltrated by malicious plugins.

Additionally, some development tools may allow Agents to automatically scan code repositories to establish contextual memories, which could also expand the exposure of sensitive data. For example, private key files, mnemonic phrase backups, database connection strings, or third-party API Tokens may all be read during indexing.

In Web3 development environments, this issue is particularly prominent, as developers often store test private keys, RPC Tokens, or deployment scripts in local environments. If this information is obtained by malicious Skills, plugins, or remote scripts, attackers may further control developer accounts or deployment environments.

Therefore, in scenarios where AI Agents are integrated with IDE / CLI, establishing clear sensitive directory ignore policies (such as .agentignore, .gitignore mechanisms) along with permission isolation measures is an important prerequisite for reducing the risk of data leakage.

5. Uncertainty in Model Layer and Automation Risks

AI models themselves are not completely deterministic systems; their outputs can have a probability of instability. The so-called "model hallucination" refers to the situation where models generate seemingly reasonable but actually erroneous results due to lack of information. In traditional application scenarios, such errors usually only affect information quality, but in the architecture of AI Agents, model outputs may directly trigger system operations.

For example, in some cases, the model fails to query real parameters during project deployment and instead generates an erroneous ID, continuing the deployment process. If similar situations occur in on-chain transactions or asset operation scenarios, erroneous decisions can lead to irreversible financial losses.

6. High-Value Operation Risks in Web3 Contexts

Unlike traditional software systems, many operations in the Web3 environment have irreversibility. For instance, on-chain transfers, token swaps, liquidity additions, and smart contract calls, once signed and broadcasted to the network, are typically difficult to retract or roll back. Therefore, when AI Agents are used to execute on-chain operations, their security risks are further amplified.

In some experimental projects, developers have started to test letting Agents directly participate in on-chain trading strategy execution, such as automated arbitrage, fund management, or DeFi operations. However, if an Agent is influenced by prompt injection, contextual pollution, or plugin attacks during task breakdown or parameter generation, it may replace target addresses, modify transaction amounts, or call malicious contracts during trading. Additionally, some Agent frameworks allow plugins to directly access wallet APIs or signing interfaces. If there is a lack of signing isolation or manual confirmation mechanisms, attackers may even trigger automatic trades through malicious Skills.

Thus, in Web3 scenarios, fully binding AI Agents to asset control systems is a high-risk design. A safer model is typically to allow Agents to generate transaction suggestions or unsigned transaction data, while the actual signing process is conducted by an independent wallet or manual confirmation. At the same time, mechanisms combining address reputation checks, AML risk controls, and transaction simulations can also help to some extent in reducing the risks associated with automated trading.

7. System-Level Risks from High-Permission Execution

Many AI Agents have high system privileges in actual deployments, such as accessing the local file system, executing shell commands, or even running with root permissions. If an Agent's behavior is manipulated, the impact could far exceed that of a single application.

SlowMist once tested linking OpenClaw with instant messaging software like Telegram for remote control. If the control channel is taken over by an attacker, the Agent may be used to execute arbitrary system commands, read browser data, access local files, or even control other applications. Coupled with the plugin ecosystem and tool invocation capabilities, such Agents have, to some extent, exhibited "intelligent remote control" characteristics.

In summary, the security threats of AI Agents are no longer limited to traditional software vulnerabilities, but span across multiple dimensions, including model interaction layers, plugin supply chains, execution environments, and asset operation layers. Attackers can manipulate Agent behavior through prompt manipulation and may implant backdoors at the supply chain layer via malicious Skills or dependencies, further widening the attack impact in high-permission runtime environments. In the Web3 context, due to the irreversibility of on-chain operations and the involvement of real asset values, these risks are often further magnified. Consequently, during the design and usage of AI Agents, solely relying on traditional application security strategies is insufficient to fully cover the new attack surfaces, necessitating a more systematic security protection framework in areas of permission control, supply chain governance, and transaction security mechanisms.

III. AI Agent Trading Security Practices|Bitget

As the capabilities of AI Agents continue to grow, they have moved beyond simply providing information or assisting in decision-making to directly participating in system operations and even executing on-chain transactions. In the cryptocurrency trading context, this change is particularly evident. More and more users are experimenting with allowing AI Agents to participate in market analysis, strategy execution, and automated trading. When an Agent can directly call trading interfaces, access account assets, and automatically place orders, the security issues shift from "system security risks" to "real asset risks." How should users protect their accounts and fund security when AI Agents are used for actual trading?

Based on this, this section by the Bitget security team combines practical experience from the trading platform to systematically introduce the key security strategies that need to be focused on when using AI Agents for automated trading from aspects such as account security, API permission management, fund isolation, and trading monitoring.

1. Main Security Risks in AI Agent Trading Scenarios

2. Account Security

With the emergence of AI Agents, the attack paths have changed:

  • No need to log into your account—only need to obtain your API Key
  • No need for you to notice—Agent runs 24/7; anomalous operations can continue for days
  • No need for withdrawal—directly trading on the platform can deplete assets, which is still a target of attack

The creation, modification, and deletion of API Keys must be completed through a logged-in account—if the account is compromised, the management rights of the Key are also compromised. The security level of the account directly determines the safe upper limit of the API Key.

What you should do:

  • Enable Google Authenticator as the main 2FA, instead of SMS (SIM cards can be hijacked)
  • Enable Passkey passwordless login: based on FIDO2/WebAuthn standards, public-private key encryption replaces traditional passwords, rendering phishing attacks ineffective at the architectural level
  • Set up anti-phishing codes
  • Regularly check the device management center; if unfamiliar devices are found, immediately log them out and change passwords

3. API Security

In the AI Agent automated trading architecture, the API Key is equivalent to the Agent's "execution permission certificate." The Agent itself does not directly hold account control; all actions it can perform depend on the permissions granted by the API Key. Therefore, the API's permission boundaries determine what the Agent can do and how much loss may escalate in the event of a security incident.

Permission configuration matrix—minimum permissions, not convenient permissions:

In most trading platforms, API Keys usually support various security control mechanisms; if used reasonably, these mechanisms can significantly reduce the risk of API Key abuse. Common security configuration recommendations include:

Common user mistakes:

  • Directly pasting the main account API Key into Agent configuration—fully exposing the main account's permissions
  • Selecting "Select All" for business types for convenience, effectively opening up all operational scopes
  • Not setting a Passphrase or having the Passphrase identical to account passwords
  • Hardcoding API Keys in code, which are scanned by crawlers within three minutes after being pushed to GitHub
  • One Key authorized to multiple Agents and tools, any one of which being compromised leads to complete exposure
  • Not revoking the Key after leaks, allowing attackers to exploit the window

Key lifecycle management:

  • Rotate API Keys every 90 days and immediately delete old Keys
  • Immediately delete corresponding Keys when disabling Agents, leaving no residual attack surface
  • Regularly review API call records; immediately revoke when unfamiliar IPs or anomalous time periods are found

4. Fund Security

The extent of losses an attacker can cause after obtaining an API Key depends on how much money that Key can access. Therefore, when designing the trading architecture of AI Agents, in addition to account security and API permission control, fund isolation mechanisms should also be implemented to set clear loss limits for potential risks.

Sub-account isolation mechanism:

  • Create dedicated sub-accounts for Agents, completely separate from the main account
  • Main accounts only allocate the funds that Agents actually need, not all assets
  • Even if a sub-account Key is stolen, the maximum amount that the attacker can move = funds within the sub-account, the main account remains unaffected
  • Multiple Agent strategies managed by multiple sub-accounts, isolated from each other

Fund password as a second lock:

  • Fund passwords are completely separate from login passwords; even if the account is logged in, withdrawals cannot be initiated without the fund password
  • Set different passwords for fund and login passwords
  • Enable withdrawal address whitelist: only pre-added addresses can withdraw, new addresses have a 24-hour review period
  • After modifying the fund password, the system automatically freezes withdrawals for 24 hours—this is the mechanism for your protection

5. Trading Security

In AI Agent automated trading scenarios, security issues often do not manifest as one-time anomalous behaviors but may gradually occur during the continuous operation of the system. Therefore, in addition to account security and API permission control, continuous trading monitoring and anomaly detection mechanisms need to be established to timely detect and intervene in the early stages of emerging problems.

Necessary monitoring system to establish:

Anomaly signal identification—immediately stop and check in the following situations:

  • Agent has no operations for an extended time, but new orders or positions appear in the account
  • API call logs show requests from IPs not belonging to Agent servers
  • Received transaction notifications for trading pairs never set up before
  • Account balances show unexplained changes
  • Agent repeatedly prompts "needs more permissions to execute"—figure out why first, then decide whether to authorize

Management of Skill and tool sources:

  • Only install Skills that are officially released and provided by audited channels
  • Avoid installing third-party extensions from unknown or unverified sources
  • Regularly review installed Skill lists and remove those no longer in use
  • Be wary of community "enhanced" or "translated" Skills—any non-official versions pose a risk

6. Data Security

The decisions made by AI Agents rely heavily on data (account information, positions, trading history, market conditions, strategy parameters). If this data is leaked or tampered with, attackers may deduce your strategies or even manipulate trading behaviors.

What you should do:

  • Principle of minimal data: only provide data necessary for the Agent to execute transactions
  • Sensitive data desensitization: logs and debugging information should not allow the Agent to output complete account information, API Keys, etc.
  • Prohibit uploading complete account data to public AI models (such as public LLM APIs)
  • If possible, separate strategy data from account data
  • Disable or restrict the Agent from exporting historical trading data

Common user mistakes:

  • Uploading complete trading history to AI for "help me optimize strategy"
  • Printing API Keys / Secrets in Agent logs
  • Posting screenshots of trading records on public forums (including order IDs and account information)
  • Uploading database backups to AI tools for analysis

7. Security Design at the Platform Level for AI Agents

In addition to the security configurations on the user side, the security of the AI Agent trading ecosystem also largely depends on the security design at the platform level. A mature Agent platform typically needs to establish systematic protection mechanisms in areas such as account isolation, API permission control, plugin review, and basic security capabilities, thereby reducing the overall risk that users face when integrating automated trading systems.

In actual platform architectures, common security designs typically include the following aspects.

1. Sub-account Isolation System

In automated trading environments, platforms usually provide a sub-account or strategy account system to isolate the funds and permissions of different automated systems. This way, users can allocate independent accounts and capital pools for each Agent or trading strategy, avoiding the risks associated with multiple automated systems sharing the same account.

2. Fine-Grained API Permission Configuration

The core operations of AI Agents depend on API interfaces, so platforms need to support fine-grained control in API permission design, such as dividing trading permissions, restricting IP sources, and adding additional security verification mechanisms. Through this permission model, users can grant Agents only the minimum permissions required to complete tasks.

3. Audit Mechanism for Agent Plugins and Skills

Some platforms set up audit mechanisms for the publishing and listing processes of plugins or Skills, such as code reviews, permission assessments, and security testing, to reduce the chances of malicious components entering the ecosystem. From a security perspective, such audit mechanisms effectively add a platform-level filter to the plugin supply chain, but users still need to maintain basic security awareness regarding the extensions they install.

4. Basic Security Capabilities of the Platform

Besides security mechanisms related to Agents, the account security system of the trading platform itself also significantly affects Agent users. For example:

8. New Scam Types Targeting Agent Users

Impersonating Customer Service

"Your API Key poses a security risk; please reconfigure it immediately." Then provides a phishing link.

→ Official communications will not proactively request API Keys via private messages.

Poisonous Skill Packages

Community shares "enhanced trading Skills" that silently send your Key during execution.

→ Only install Skills from officially audited channels.

Impersonating Upgrade Notifications

"Needs reauthorization," and clicking leads to a spoofed page.

→ Check email for anti-phishing codes.

Prompt Injection Attacks

Embedding instructions in market data, news, or K-line comments to manipulate Agents into executing unexpected operations.

→ Set sub-account fund limits; even if injected, losses have hard boundaries.

Malicious Scripts Disguised as "Security Testing Tools"

Claiming to check if your Key is leaked but is actually stealing the Key.

→ Check API call status via the logging or access records provided by the official platform.

9. Investigation Paths

Upon discovering any anomalies

↓

Immediately revoke or disable suspicious API Keys

↓

Check for unusual orders/positions in the account; immediately retract any that can be

↓

Check withdrawal records to confirm whether funds have been transferred out

↓

Change login password + fund password, log out all logged-in devices

↓

Contact platform security support, provide anomaly timeframes and operational records

↓

Investigate Key leak paths (code repositories / configuration files / Skill logs)

Core principle: In case of any suspicion, first revoke the Key, then investigate the reasons; the order cannot be reversed.

IV. Recommendations and Summary

In this report, SlowMist and Bitget analyze notable security issues of current AI Agents in the Web3 context using actual cases and security research, including the manipulation risks of Agent behavior due to Prompt Injection, supply chain risks within the plugin and Skill ecology, API Key and account permission abuse issues, as well as potential threats from erroneous operations and permission escalation caused by automation. These issues are often not caused by a single vulnerability but are the result of interactions among Agent architecture design, permission control strategies, and runtime security.

Therefore, when building or using AI Agent systems, security design should be approached from an overall architectural perspective, such as following the principle of least privilege in assigning API Keys and account permissions to Agents, avoiding the activation of unnecessary high-risk features; conducting permission isolation for plugins and Skills at the tool invocation layer, to prevent single components from having simultaneous capabilities for data access, decision-making generation, and fund operations; and implementing clear behavior boundaries and parameter limits when the Agent executes critical operations, incorporating manual confirmation mechanisms in necessary scenarios to reduce the irreversible risks of automated execution. Additionally, for external inputs relied upon by the Agent, prompt injection attacks should be prevented through reasonable prompt design and input isolation mechanisms, avoiding direct incorporation of external content as system commands in the model reasoning process. During actual deployment and operation phases, further strengthening API Key and account security management, such as only enabling necessary permissions, setting IP whitelists, regularly rotating Keys, and avoiding storing sensitive information in plaintext within code repositories, configuration files, or logging systems; in the development process and operational environment, measures such as plugin security audits, control of logged sensitive information, and behavior monitoring and auditing mechanisms should be implemented to reduce risks from configuration leaks, supply chain attacks, and anomalous operations.

At a more macro level of security architecture, SlowMist proposed a multi-layer security governance approach aimed at AI and Web3 smart agent scenarios through the construction of a layered protection system to systematically reduce the risks of agents in high-permission environments. In this framework, L1 security governance primarily relies on a unified development and usage security baseline, establishing security specifications covering development tools, Agent frameworks, plugin ecosystems, and runtime environments, to provide teams with a unified source of strategies and audit standards when introducing AI toolchains. Based on this, L2 can effectively constrain the execution scope of high-risk operations through convergence of Agent permission boundaries, minimum privilege control of tool invocation, and human confirmation mechanisms for critical behaviors. Similarly, L3 introduces real-time threat awareness capabilities at the external interaction entry level, pre-screening external resources such as URLs, dependency repositories, and plugin sources, thereby reducing the chances of malicious content or supply chain poisoning entering execution paths; in scenarios involving on-chain transactions or asset operations, additional security isolation is achieved through L4 on-chain risk analysis and independent signing mechanisms, allowing the Agents to formulate transactions without direct access to private keys, thereby minimizing systemic risks from high-value asset operations. Finally, L5 operational mechanisms through continuous inspections, log audits, and periodic security reviews establish a "pre-execution pre-screening, in-execution constraining, post-execution review" closed-loop security capability. This layered security approach is not a single product or tool but rather a governance framework for security concerning AI toolchains and agent ecosystems, with the core goal of establishing a sustainable, auditable, and evolutionary Agent security operation system that can better deal with the evolving security challenges arising from the deep integration of AI and Web3.

Overall, AI Agents bring a higher degree of automation and intelligence to the Web3 ecosystem, but their security challenges should not be overlooked. Only by establishing robust security mechanisms at multiple levels, including system design, permission management, and operational monitoring, can we effectively reduce potential risks while promoting the technological innovation of AI Agents. We hope this report can provide references for developers, platforms, and users in building and using AI Agent systems, fostering a more secure and reliable Web3 ecosystem in tandem with technological development.

免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。

躺赚BNB,超级AI管家
广告
|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Selected Articles by 深潮TechFlow

1 hour ago
Encrypted xAI Weekly News: TAO Soars 40% in One Week, Decentralized Computing Power Trains Competitive-Level Large Model for the First Time.
2 hours ago
Phantom obtains CFTC exemption letter, allowing cryptocurrency wallets to connect directly to compliant derivatives market for the first time.
3 hours ago
Bittensor is the hope of the entire Crypto community.
View More

Table of Contents

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Related Articles

avatar
avatarTechub News
3 minutes ago
From narrative play to coding, after the L2 transaction fees dropped below 1 cent, what does Ethereum rely on to make money without "selling Gas"?
avatar
avatarTechub News
26 minutes ago
DAO governance platform Tally "bows out": regulatory winds change suddenly, the dream of the "infinite garden" shatters, is the winter of DAO governance upon us?
avatar
avatarPANews
33 minutes ago
Trading Moment: Major Positive News for Cryptocurrency Regulations, Bitcoin May "Drop Then Rise" After Interest Rate Meeting
avatar
avatarTechub News
40 minutes ago
Public Chain Governance Theory: Examining Solana's Ecological Transformation Through the Prosperity and Costs of Singapore
avatar
avatarOdaily星球日报
56 minutes ago
VIP believers in the crypto winter: Hundreds of billions evaporated, why are they still holding on?
APP
Windows
Mac

X

Telegram

Facebook

Reddit

CopyLink