Zhipu, The Dark Side of the Moon, and Xiaomi Together at Round Table Conference: Large Models Truly Start to "Work," Computing Power Remains the Biggest Bottleneck

深潮TechFlow
3 hours ago
Yang Zhilin hosted, Luo Fuli and Zhang Peng shared insights, and this "lobster meeting" thoroughly discussed the future of AI.

Author: Chen Junda

Zhishang reported on March 27 that, at the Zhongguancun Forum that day, Zhipu CEO Zhang Peng, The Dark Side of the Moon CEO Yang Zhilin (acting as host), Xiaomi MiMo large model lead Luo Fuli, Wuwen Qinkong CEO Xia Lixue, and Assistant Professor Huang Chao of the University of Hong Kong shared the stage for a rare in-depth dialogue on the future directions of open-source large models and intelligent agents.

This dialogue started with the hottest current topic, OpenClaw, and the guests unanimously agreed that intelligent agents have allowed large models to truly begin "working". OpenClaw expands the capability boundaries of large models, but also places higher demands on them. Zhipu is researching long-term planning, self-debugging, and other capabilities, while Luo Fuli's team is more focused on reducing costs and increasing speed through architectural innovations, even achieving model self-evolution.

Infrastructure must also keep up with the pace of intelligent agents. Xia Lixue believes that the current computing systems and software architectures are still designed for human use, not for intelligent agents; in fact, the operational capabilities of humans are limiting the performance of agents. Therefore, we need to build "Agentic Infra."

In the eyes of several guests, open-source is one of the core driving forces behind the development of large models and intelligent agents. Assistant Professor Huang Chao from the University of Hong Kong believes that the prosperity of the open-source ecosystem is key to the transition of intelligent agents from being "played with" to becoming real "workers"; only through community collaboration can software, data, and technology transition to the native form of intelligent agents, ultimately forming a sustainable global AI ecosystem.

In addition, several guests discussed topics such as price increases for large models, token consumption surges, and key trends in AI for the next 12 months. Here are the core points from this roundtable forum:

1. Zhang Peng: As models grow larger, the cost of inference will also increase accordingly. Recently, Zhipu's price increase strategy is actually a return to normal commercial value; long-term competition based on low prices is detrimental to industry development.

2. Zhang Peng: The explosion of new technologies like intelligent agents has led to a 10-fold increase in token consumption, but the actual demand may grow by 100 times, and there is still significant unmet demand. Therefore, computing power remains a key issue in the next 12 months.

3. Luo Fuli: From the perspective of base large model manufacturers, OpenClaw ensures the lower limit of fundamental large models while raising the upper limit. The task completion rate of domestic open-source models combined with OpenClaw is already very close to Claude.

4. Luo Fuli: DeepSeek has brought courage and confidence to domestic large model manufacturers. Some model structure innovations that seemed "compromised for efficiency" have sparked significant transformations, allowing the industry to achieve the highest level of intelligence under fixed computing resources.

5. Luo Fuli: The most important aspect of the AGI journey in the next year is "self-evolution." Self-evolution enables large models to explore like top scientists, making it the only place capable of "creating new things." Xiaomi has already achieved a tenfold increase in research efficiency through Claude Code and top models.

6. Xia Lixue: When the AGI era arrives, the infrastructure itself should ideally be an intelligent agent, autonomously managing the entire infrastructure and iterating it based on AI clients' needs, achieving self-evolution and self-iteration.

7. Xia Lixue: OpenClaw has ignited a surge in token consumption. The current pace of token consumption reminds me of the early days of 3G when mobile data was just starting, and there was only a 100MB allowance each month.

8. Huang Chao: Many future software applications may not be aimed at humans; software, data, and technology will take on an Agent-Native form, and humans may only need to use those "GUIs that make them happy."

The following is the full transcript of this roundtable forum:

01. OpenClaw as "scaffolding," large model token consumption still in the 3G era

Yang Zhilin: It’s an honor to invite all these distinguished guests today. The guests come from the model layer, computing power layer, and agent layer. The main keywords today are open-source and agents.

The first question is about the currently popular OpenClaw. What do you find most imaginative or impressive about using OpenClaw or similar products? From a technical perspective, how do you see the evolution of OpenClaw and related agents today?

Zhang Peng: I started playing with OpenClaw a long time ago when it was still called Clawbot. I enjoyed tinkering with it because, after all, I am also from a programming background, and I have some personal experiences with these tools.

I believe the greatest breakthrough OpenClaw brings is that it is no longer exclusive to programmers or geeks. Ordinary people can now easily use the capabilities of top models, particularly in programming and intelligent agents.

So to this day, when I talk with others, I prefer to call OpenClaw "scaffolding." It provides a possibility: a very solid, convenient, yet flexible scaffolding built on top of the model, so that everyone can use the novel capabilities of the underlying models however they wish.

Previously, one’s ideas might have been limited by the inability to write code or lack of other related skills, but with OpenClaw, completing these tasks is finally possible through very simple communication.

OpenClaw had a significant impact on me, or rather, it allowed me to reevaluate this whole concept.

Xia Lixue: When I first used OpenClaw, I didn't quite adapt because I was used to conversing with large models, and I felt that OpenClaw reacted slowly.

But later, I realized a crucial difference: it is fundamentally a "person" who can help me complete large tasks. I began to assign it more complex tasks and found that it could actually perform quite well.

This realization made a significant impression on me. Models initially interacted through tokens, and now they can evolve into agents, transforming into lobsters that can help you complete tasks. This advancement greatly enhances the overall imaginative space of AI.

At the same time, this sets higher demands on the entire system's capabilities. This is why I initially found OpenClaw somewhat sluggish. As an infrastructure supplier, I see that OpenClaw brings more opportunities and challenges to the large systems and ecosystem behind AI.

The resources we currently have at our disposal are insufficient to support such a rapidly growing era. For example, in our company, since late January, our token consumption has doubled approximately every two weeks, and now it has increased nearly tenfold.

I last observed such a speed when I was consuming data on a 3G phone. I have a feeling that the current token consumption is similar to when mobile data was limited to only 100MB per month.

In this scenario, all our resources need to be optimized and better integrated. We must ensure that everyone, not just in the AI field, but in every part of society can leverage OpenClaw's AI capabilities.

As a player in infrastructure, I am very excited and deeply touched by this era. I also believe there are many areas for optimization that we should explore and attempt.

02. OpenClaw raises the upper limit of domestic models, interaction model breakthroughs are significant

Luo Fuli: I view OpenClaw as a revolutionary and disruptive event in the evolution of the agent framework.

Actually, everyone I know who codes heavily still prefers Claude Code. However, I believe those using OpenClaw will feel that many of its agent-framework designs are ahead of Claude Code. Recently, many Claude Code updates have in fact been moving towards OpenClaw.

My experience using OpenClaw is that this framework expands my imagination anywhere and anytime. Initially, Claude Code could only extend my creativity on my desktop, but OpenClaw allows me to extend my creativity anytime and anywhere.

The core values brought by OpenClaw mainly fall into two points. The first is that it is open-source. Open-source significantly benefits community participation, emphasizing and promoting the evolution of this framework, which is a crucial prerequisite.

An AI framework like OpenClaw offers great value in that it raises the upper limit for domestic models that may be close to closed-source models but have not fully caught up.

In most scenarios, you will find that the task completion rate of (domestic open-source models + OpenClaw) is already very close to that of Claude's latest model. At the same time, it also well guarantees the lower threshold—through a Harness system or its Skills system and various designs, it ensures the completeness and accuracy of tasks.

In summary, from the perspective of developers at large model manufacturers, OpenClaw ensures the lower limit of the base large model while raising the upper limit.

Additionally, I believe another value it brings to the entire community is that it has made many people realize that, beyond large models, the agent layer contains vast imaginative potential.

Recently, I've observed that apart from researchers, more and more people are beginning to participate in the AGI revolution, engaging with more powerful agent frameworks like Harness and Scaffold. These individuals, in some ways, are using these tools to replace part of their work and freeing up time to engage in more imaginative activities.

Huang Chao: From the perspective of interaction models, I think OpenClaw's success can be attributed to the fact that it provides a more "human-like" experience. We've been working on agents for a year or two, but earlier agents like Cursor and Claude Code felt more like “tools.” OpenClaw’s embedded approach resembles “instant messaging software,” giving people a sense of closeness to their ideal "personal Jarvis." I think this is a significant breakthrough in interaction models.

Another insight it brings to the community is that simple yet efficient frameworks like Agent Loop have proven to be viable again. It also prompts us to rethink whether we need a versatile super agent capable of doing everything or a better "little housekeeper"—like a lightweight operating system or scaffolding.

The thought brought by OpenClaw is to foster a "little system" or "lobster operating system" and its ecosystem that truly allows people to adopt a playful mindset and thereby leverage all the tools within the entire ecosystem.

With the emergence of capabilities like Skills and Harness, more and more people can design applications tailored for systems like OpenClaw that empower various industries. I think this naturally aligns closely with the entire open-source ecosystem. In my view, these two points are the greatest insights we have gained.

03. GLM new model designed for "working," price increases reflect a return to normal commercial value

Yang Zhilin: I would like to ask Zhang Peng. Recently, I saw that Zhipu released the new GLM-5 Turbo model, which I understand has made significant enhancements in agent capabilities. Could you introduce to everyone what differences this new model has compared to other models? Additionally, we have observed a price increase strategy; what market signals does this reflect?

Zhang Peng: This is a great question. A couple of days ago, we did indeed urgently update, which is actually a phase in our overall development roadmap that was released early.

The main goal is to shift from "simple conversation" to "truly working"—this is what everyone has recently felt: large models are no longer just for chatting; they can genuinely assist people in their work now.

However, the implicit demands for "working" entail high ability requirements. Models need to self-plan long-term tasks, continuously trial and error, compress context, debug, and may even need to handle multimodal information. Therefore, the demands on the model's capabilities differ from traditional general dialogue models. GLM-5 Turbo specifically strengthens these aspects, especially regarding enabling it to work and continuously loop for up to seventy-two hours; we have done a lot of work in this area.

Additionally, everyone is concerned about token consumption. Allowing a smart model to tackle complex tasks involves a massive token consumption. The average person might not deeply perceive this, but they notice significant expense when viewing the bills. Thus, we have also optimized token efficiency so that the model can complete complex tasks with higher efficiency. Overall, the model architecture remains a multitasking collaborative architecture but has been selectively strengthened in capability.

The price increase is also easy to explain. As mentioned earlier, it is no longer simply asking a question and receiving an answer; the inference chain is very long. Many tasks involve writing code and interacting with the underlying infrastructure, continually debugging and correcting errors, which leads to significant consumption. The tokens required to complete a complex task may be ten to a hundred times those required for a simple question.

Therefore, prices need to be adjusted somewhat; as the model has grown, inference costs have correspondingly increased. We are returning to normal commercial value because relying on low prices long-term does not benefit the entire industry. This also allows us to create a positive cycle in commercialization, continuously optimizing model capabilities and providing better services.

04. Creating a more efficient token factory, infrastructure itself should be an agent

Yang Zhilin: As more open-source models emerge and begin to create an ecosystem, various models can provide users with more value on different computing platforms. With the explosive demand for tokens, large models are transitioning from a training era to an inference era. I would like to ask Lixue, from the perspective of infrastructure, what does the inference era mean for Wuwen?

Xia Lixue: We are an infrastructure company born in the AI era, currently supporting Zhipu, Kimi, Mimo, and others, enabling everyone to make better use of token factories. We are also collaborating with many universities and research institutions.

Thus, we have been reflecting on what kind of infrastructure the AGI era will require and how we can gradually achieve and simulate it. We are well-prepared for short, medium, and long-term problems that need to be addressed.

The most immediate issue is the explosive increase in token consumption brought by OpenClaw, which demands higher system optimization. Price adjustments are actually one of the responses to this demand.

We have strategically aligned and addressed these issues through hardware-software integration. For instance, we have connected almost all types of computing chips, unifying dozens of different computing clusters built on several different domestic chips. This helps address the shortage of computing resources in AI systems: when resources are scarce, the best approach is to use everything available effectively, ensuring every unit of computing power is applied where it matters most to maximize conversion efficiency.

In this phase, our priority is to create a more efficient token factory. We have made numerous optimizations, including ensuring that models can optimally adapt to memory and other resources available in hardware; we are also exploring whether the latest model structures and hardware can generate deeper interactions. However, solving the current efficiency issues is merely establishing a standardized token factory.

Looking towards the agent era, we believe this is not sufficient. Agents are more like people; tasks can be assigned to them. I firmly believe that much of the cloud-computing-era infrastructure was designed to serve programs and human engineers, not AI. The current approach amounts to building infrastructure with human-facing interfaces and then wrapping a layer on top to interface with agents; this restricts agents' performance potential to human operational capabilities.

For example, agents can think and initiate tasks at millisecond speed, but lower-level systems like Kubernetes are not prepared for this, because human-initiated tasks typically take minutes. Therefore, we need more advanced capabilities, which we call "Agentic Infra," or an "intelligent token factory"; this is what Wuwen Qinkong is developing.

In the long term, when the AGI era genuinely arrives, we believe that the infrastructure itself must also be intelligent agents. The factory we are building should be able to self-evolve and self-iterate, establishing an autonomous organization. It will function as a CEO, an agent like OpenClaw, managing the entire infrastructure and iterating based on AI client demands. This way, interaction between AIs can be better coupled. We are also exploring ways to facilitate better communication between agents and capabilities like Cache to Cache.

Thus, we have continuously reflected on how infrastructure and AI development should not exist in an isolated state—fulfilling requests upon demand—but rather generate rich chemical reactions. This is the true meaning of soft-hard collaboration, the cooperation between algorithms and infrastructure, and it has always been Wuwen Qinkong's mission. Thank you.

05. "Compromised for efficiency" innovations are meaningful; DeepSeek brings courage and confidence to domestic teams

Yang Zhilin: Next, I want to ask Fuli. Xiaomi has recently made significant contributions to the community by releasing new models and the technology behind open-source. I want to ask you, what unique advantages does Xiaomi hold in the realm of large models?

Luo Fuli: I think it's more useful to set aside discussing unique advantages Xiaomi might have; I would prefer to focus on the overall advantages of China's large model teams. I believe this topic holds broader value.

About two years ago, China's foundational model teams began to make remarkable breakthroughs—how to surpass the limitations of low-end computing power, particularly under certain NVLink bandwidth constraints, was addressed with seemingly "efficiency-compromised" model structure innovations such as the DeepSeek V2 and V3 series, as well as MoE, MLA, and others.

However, what we saw resulting from these innovations was a transformation: how to achieve the highest level of intelligence with fixed computing power. This is what DeepSeek has brought in terms of courage and confidence to all domestic foundational model teams. Although domestic chips, especially inference chips and training chips, are no longer constrained by such limitations, it was under these constraints that we spurred new explorations for higher training efficiency and lower inference costs in model structures.

Recent structures like Hybrid Sparse and Linear Attention, along with DeepSeek's NSA and Kimi's KSA, and Xiaomi's next-gen structure HySparse, all represent innovative model structures for the agent era that differ from the MoE generation.

Why do I believe structural innovation is so crucial? If one genuinely utilizes OpenClaw, they will realize it becomes easier and smarter the more they use it. One prerequisite for this is the context length in inference. Long context has been a topic of discussion for a long time, yet do we now have models that perform well under long context, are strong in performance, and have low inference costs?

In fact, many models can handle 1M or 10M contexts, but the cost of inference over 1M or 10M tokens is simply too high, and it is too slow. Only by reducing these costs and increasing the speed can we assign genuinely high-value tasks to models, accomplish more complex tasks under such long contexts, and even achieve model self-iteration.

The so-called model self-iteration refers to the capacity to evolve itself within a complex environment by relying on ultra-long context. This evolution could pertain to the agent framework itself or the model parameters—since I believe context itself is essentially a form of evolution for the parameters. Thus, how to develop a long-context architecture and achieve efficient reasoning for long contexts is a comprehensive competitive challenge.

In addition to what I just mentioned about preparing long-context-efficient architectures during the pre-training stage—a topic we began exploring a year ago—achieving stability and high upper limits in long-term tasks is what we're iterating upon in our post-training innovations.

We are contemplating how to construct more effective learning algorithms, how to gather texts with genuine long-term dependencies in real environments at 1M, 10M, and 100M contexts, and how to combine trajectory data generated from complex environments. This is what our post-training is currently undertaking.

However, in the longer term, given the rapid advancements of large models coupled with the agent framework, as Lixue mentioned, inference demand has recently surged close to ten times. Will the overall token consumption this year reach 100 times?

This leads us into another dimension of competition—computing power, or inference chips, and even further down to energy. Therefore, I feel that if we all contemplate this issue together, I might learn a lot from everyone. Thank you.

06. Agents have three key modules; multi-agent explosions will bring impact

Yang Zhilin: Insightful sharing. Next, I want to ask Huang Chao, you have developed influential agent projects like Nanobot and have many community enthusiasts. From the perspective of Agent Harness or applications, what technological directions do you think are important and worth following up on?

Huang Chao: I believe that if we abstract the technology of agents, the key modules are Planning, Memory, and Tool Use.

First, Planning. Current challenges primarily arise in long-term tasks or very complex contexts, where many models may not necessarily excel at planning even over 500 steps or longer. I believe this stems from the models potentially lacking this type of implicit knowledge, particularly in some complex vertical domains. Thus, a future direction might involve solidifying the knowledge of various complex tasks into the models.

Of course, Skills and Harness are mitigating the errors brought about by Planning to some extent by providing high-quality Skills, essentially guiding models to accomplish more challenging tasks.

Next is Memory. Memory often carries the impression of inaccurate information compression and retrieval issues. Particularly under long-term tasks and complex scenarios, the pressure on Memory can increase drastically. Currently, projects like OpenClaw utilize the simplest file-system-based Markdown format for Memory, using shared files. In the future, Memory may evolve towards hierarchical designs, requiring it to become more universal.
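The simplest file-based scheme described above can be sketched as follows. This is a hypothetical illustration only; the class name, file layout, and methods are my own stand-ins, not OpenClaw's actual implementation:

```python
from pathlib import Path

class MarkdownMemory:
    """Minimal file-system memory: one markdown file per topic.

    Hypothetical sketch of a file-based memory store; names and
    layout are illustrative, not any real framework's API.
    """

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def append(self, topic: str, note: str) -> None:
        # Each note becomes a markdown bullet appended to the topic file.
        path = self.root / f"{topic}.md"
        with path.open("a", encoding="utf-8") as f:
            f.write(f"- {note}\n")

    def recall(self, topic: str) -> str:
        # Naive retrieval: return the whole file. Real systems would
        # need indexing and relevance filtering, as discussed above.
        path = self.root / f"{topic}.md"
        return path.read_text(encoding="utf-8") if path.exists() else ""

mem = MarkdownMemory("agent_memory")
mem.append("project", "User prefers concise answers")
print(mem.recall("project"))
```

The appeal of this scheme is that the memory is human-readable and shareable between agents; the trade-off, as noted above, is that retrieval stays naive and hard to standardize across modalities.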

To be honest, the current Memory mechanisms are quite difficult to standardize—since Coding, Deep Research, and multimodal contexts vary significantly in data modalities, how to effectively retrieve and index these Memory types while maintaining efficiency will always be a trade-off.

Additionally, with OpenClaw lowering the barriers to create Agents, in the future, there may not just be one “lobster.” I’ve observed that Kimi also has an Agent Swarm mechanism, and in the future, everyone may have "a group of lobsters."

Compared to a single lobster, one group of lobsters will drastically increase the context, which will significantly strain Memory. Currently, there is still no robust mechanism to manage the context introduced by "a group of lobsters," especially in complex coding and scientific discovery situations. This puts substantial pressure on both models and the overall Agent architecture.

Lastly, regarding Tool Use, or Skills. Presently, Skills face issues similar to those of MCP—MCP had quality assurance and safety issues. The same applies now; although numerous Skills appear available, few are of high quality, and low-quality Skills can affect the accuracy of Agent task completion. Additionally, there are also risks of malicious injection. Thus, with regards to Tool Use, the community may need to enhance the entire Skills ecosystem, potentially enabling Skills to self-evolve into new Skills during execution.

In summary, from Planning, Memory to Tool Use, these represent some existing pain points for Agents and possible future directions.
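The three modules summarized above can be wired together in a minimal agent loop. This is a toy sketch under my own assumptions; the planner and tools here are hard-coded stand-ins for what would be model-driven components in a real framework:

```python
def agent_loop(goal, plan, tools, memory, max_steps=10):
    """Toy loop connecting Planning, Memory, and Tool Use."""
    for step in plan(goal, memory)[:max_steps]:
        tool, arg = step                    # Planning: the planner chose a tool call
        result = tools[tool](arg)           # Tool Use: execute it
        memory.append((tool, arg, result))  # Memory: record the outcome
    return memory

# Stand-in components for the sketch:
def plan(goal, memory):
    # A real planner would be model-driven; here two steps are hard-coded.
    return [("search", goal), ("summarize", goal)]

tools = {
    "search": lambda q: f"results for {q!r}",
    "summarize": lambda q: f"summary of {q!r}",
}

trace = agent_loop("long-context pricing", plan, tools, memory=[])
for tool, arg, result in trace:
    print(tool, "->", result)
```

Each of the pain points above maps onto one line of this loop: weak long-horizon planning breaks `plan`, context explosion strains `memory`, and low-quality Skills corrupt `tools`.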

07. Key trends for the next 12 months: ecosystem, sustainable tokens, self-evolution, and computing power

Yang Zhilin: We can see that two guests have discussed a common issue from different perspectives—how, with increasing task complexities, context will explode. From the model level, we can enhance the native context length, while from the Agent Harness level, mechanisms like Planning, Memory, and Multi-Agent can also support more complex tasks under specific model capabilities. I believe these two directions will generate more chemical reactions, further enhancing task completion capabilities.

Finally, let’s look ahead openly. Please each use one word to describe the trend in large model development and your expectations for the coming 12 months. Let’s start with Huang Chao.

Huang Chao: 12 months in the AI field seems far off; it’s hard to predict what it will evolve into in 12 months.

Yang Zhilin: Initially, I had five years written here, but I changed it.

Huang Chao: Yes, haha. A word that comes to mind for me is "ecosystem." Currently, OpenClaw has energized everyone, but for the future, agents really need to become "workers" rather than merely something new to play with. They must be solidified as tools for carrying out work—true coworkers.

This requires efforts from the entire ecosystem, especially open-source; after making the technology and model techniques open-source, the community needs to co-build—whether it’s model iterations, platform iterations, or various tools—all must be directed toward creating an ecosystem around lobsters.

One apparent trend is, will future software still be aimed at human users? I believe many future software applications may not necessarily be designed for humans—because what humans need is GUI, while the future may cater to native Agent use. Interestingly, people might only use those GUIs that make themselves happy. The entire ecosystem is shifting from GUI and MCP to CLI models. This requires the ecosystem to transform software systems, data, and technologies into Agent Native forms, enriching overall development.

Luo Fuli: Narrowing the scope to a year makes a lot of sense. If it were five years, I would already say AGI has been achieved from my perspective.

So if I were to express in one sentence what the key thing is in AGI's journey over the next year, I believe it's "self-evolution."

This word may sound a bit fantastical, and it has been mentioned multiple times in the past year. However, I have recently gained a deeper understanding of it, or more practically feasible methods of "self-evolution." The reason being, with powerful models, we have hardly reached the upper limits of pre-trained models while using a Chat paradigm; the agent framework has activated this upper limit. When we allow models to execute tasks over longer durations, we find they can learn and evolve on their own.

A simple attempt is to introduce verifiable conditional limits to existing agent frameworks, coupled with a Loop that prompts the model to iteratively optimize objectives, leading to better solutions. This self-evolution can now run continuously for a day or two—of course, depending on task difficulty.

For instance, in scientific research, when exploring better model structures that can be evaluated against standards like lower PPL, we find that it can autonomously optimize and execute over the course of two to three days on certain deterministic tasks.
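The verify-and-iterate loop described above can be sketched as follows. The `propose` and `evaluate` functions are placeholders of my own: `propose` stands in for a model suggesting a modification, and `evaluate` for a verifiable objective such as the lower-PPL criterion mentioned above:

```python
import random

def self_improve(candidate, propose, evaluate, budget=50):
    """Accept a proposed change only when a verifiable metric improves.

    `propose` and `evaluate` are placeholders, not a real API:
    in the described setup, `propose` would be a model-driven edit
    and `evaluate` a deterministic check such as perplexity.
    """
    best, best_score = candidate, evaluate(candidate)
    for _ in range(budget):
        trial = propose(best)
        score = evaluate(trial)
        if score < best_score:  # the verifiable acceptance condition
            best, best_score = trial, score
    return best, best_score

# Toy stand-ins: "evolve" a parameter vector toward zero loss.
random.seed(0)
propose = lambda x: [v + random.uniform(-0.5, 0.5) for v in x]
evaluate = lambda x: sum(v * v for v in x)

best, score = self_improve([3.0, -2.0], propose, evaluate)
print(round(score, 3))
```

The acceptance condition is what makes the loop safe to run unattended for days: because changes are kept only when the verifiable metric improves, the candidate can never regress below its starting score.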

From my perspective, self-evolution is the only place capable of "creating new things." It does not replace our existing human productivity; rather, it enables exploration of things that have not yet existed, akin to top scientists. A year ago, I would have considered this timeline to stretch to three to five years, but recently I genuinely believe it should be narrowed down to one to two years. Perhaps we will soon use large models layered with a powerful self-evolving agent framework, achieving at least exponential acceleration in scientific research.

Recently, I've already observed that colleagues in my group engaging in large model research now have workflows that are highly uncertain and highly creative, but leveraging Claude Code along with top models has increased our research efficiency nearly tenfold. I eagerly anticipate this paradigm radiating into more extensive disciplines and fields, making "self-evolution" very important.

Xia Lixue: My keyword is "sustainable tokens." I see that the development of AI is still in a long-term persistent process, and we hope it has lasting vitality. From the perspective of infrastructure, a significant problem is that resources are ultimately finite.

Just as we discuss sustainable development, we, as a token factory, need to sustain, stabilize, and massively provide tokens so that top models can genuinely serve more downstream demands; this is an essential issue we see.

We need to broaden our perspective across the entire ecosystem—from energy to computing power, to tokens, and finally to applications—forming a sustainable economic iteration. We not only need to utilize various computing powers domestically but also export these capabilities overseas so that global resources can be integrated and united.

I also believe that "sustainability" is about implementing China's distinctive token economics. In the past, we talked about Made in China, exporting China's low-cost manufacturing capabilities into quality products to the world.

Now we want to achieve "AI Made in China"—transforming China's advantages in energy and other aspects into high-quality tokens through a sustainable token factory to be exported globally and becoming the world's token factory. This is the value I wish to see in AI that China brings to the world this year.

Zhang Peng: I'll keep it brief. While everyone is gazing at the stars, I will bring it down to earth. My keyword is "computing power."

As mentioned earlier, all technologies and agent frameworks have enhanced everyone’s creativity and efficiency tenfold, but the premise is that everyone can genuinely utilize them. You cannot pose a question and let it ponder for half a day without receiving an answer; that simply won't work. Because of this, many research advancements and many things people want to do will be hindered.

I recall that a couple of years ago, an academician spoke at the Zhongguancun Forum, saying, "Without cards, we have no feelings; talking about cards hurts feelings." I think we've reached that stage again, but conditions are different now. We have entered the inference phase, and demand is truly exploding—growing tenfold, even a hundredfold. When you mentioned a tenfold increase in usage, the actual demand may be a hundred times that; there remains a significant amount of unmet demand. What should we do about it? Perhaps we can all brainstorm solutions together.
