Dialogue with Vitalik: Exploring the Vision of Ethereum 2025, the Innovative Integration of POS, L2, Cryptography, and AI

Author: DappLearning

On April 7, 2025, at the Pop-X HK Research House event co-hosted by DappLearning, ETHDimsum, Panta Rhei, and UETH, Vitalik and Xiao Wei appeared at the event.

During a break in the event, Yan, the founder of the DappLearning community, interviewed Vitalik. The interview covered various topics including ETH POS, Layer 2, cryptography, and AI. This interview was conducted in Chinese, and Vitalik's Chinese was very fluent.

Here is the content of the interview (original content has been edited for readability):

01 Views on POS Upgrade

Yan:

Hello, Vitalik, I am Yan from the DappLearning community. It is a great honor to interview you here.

I started learning about Ethereum in 2017. I remember that in 2018 and 2019, there was a heated discussion about POW and POS, and this topic may continue to be discussed.

Looking at it now, (ETH) POS has been running stably for over four years, with millions of Validators in the consensus network. However, at the same time, the exchange rate of ETH to BTC has been declining, which has both positive aspects and some challenges.

So, from this point in time, what do you think about Ethereum's POS upgrade?

Vitalik:

I think the prices of BTC and ETH have nothing to do with POW and POS.

There are many different voices in the BTC and ETH communities, and what these two communities are doing is completely different, as are their ways of thinking.

Regarding the price of ETH, I think there is a problem: ETH has many possible futures, and (one can imagine) that in these futures, there will be many successful applications on Ethereum, but these successful applications may not bring enough value to ETH.

This is a concern for many people in the community, but it is actually a normal issue. For example, Google has many products and does many interesting things. However, over 90% of their revenue is still related to their search business.

The relationship between Ethereum's ecosystem applications and ETH (price) is similar. Some applications pay a lot of transaction fees and consume a lot of ETH, while there are many (applications) that may be relatively successful, but they do not correspondingly bring that much success to ETH.

So this is something we need to think about and continue to optimize. We need to support more applications that have long-term value for Ethereum holders and for ETH.

Therefore, I think the future success of ETH may appear in these areas. I don't think it has much relevance to improvements in consensus algorithms.

02 PBS Architecture and Centralization Concerns

Yan:

Yes, the prosperity of the ETH ecosystem is also an important reason that attracts us developers to build on it.

Okay, what do you think about the ETH2.0 PBS (Proposer & Builder Separation) architecture? This is a good direction; in the future, everyone can use a mobile phone as a light node to verify (ZK) proofs, and anyone can stake 1 ether to become a Validator.

However, Builders may become more centralized, as they need to do anti-MEV and generate ZK Proofs. If Based rollups are adopted, then Builders may have even more responsibilities, such as acting as Sequencers.

In this case, will Builders become too centralized? Although Validators are already sufficiently decentralized, this is a chain. If one link in the middle has a problem, it will also affect the operation of the entire system. So, how do we solve the censorship resistance issue in this area?

Vitalik:

Yes, I think this is a very important philosophical question.

In the early days of Bitcoin and Ethereum, there was a subconscious assumption:

Building a block and validating a block are one operation.

Assuming you are building a block, if your block contains 100 transactions, then your own node needs to process that many (100 transactions) gas. When you finish building the block and broadcast it to the world, every node in the world also needs to do that much work (consume the same gas). So if we set the gas limit to allow every laptop or MacBook, or a server of a certain size, to build blocks, then we need appropriately configured node servers to validate these blocks.

This was the previous technology. Now we have ZK, DAS, many new technologies, and Statelessness (stateless validation).

Before using these technologies, building a block and validating a block needed to be symmetrical, but now it can become asymmetrical. So the difficulty of building a block may become very high, but the difficulty of validating a block may become very low.

Using a stateless client as an example: if we use stateless technology and increase the gas limit tenfold, the computational demand for building a block becomes enormous, and an ordinary computer may no longer be able to do it. At that point, we may need a particularly high-performance machine such as a Mac Studio, or an even more powerful server.

But the cost of validation will become lower, because validation requires no storage, relying only on bandwidth and CPU computing resources. If we add ZK technology, the CPU cost of validation can also be eliminated. If we add DAS, then the cost of validation will be very, very low. If the cost of building a block becomes higher, the cost of validation becomes very low.

So is this better compared to the current situation?

This question is complex. I think about it this way: if there are some super nodes in the Ethereum network, that is, some nodes have higher computational power, we need them to perform high-performance computing.

How do we prevent them from acting maliciously? For example, there are several types of attacks:

First: Creating a 51% attack.

Second: Censorship attack. If they refuse to accept some users' transactions, how can we reduce this type of risk?

Third: Operations related to anti-MEV. How can we reduce these risks?

Regarding the 51% attack, since the validation process is done by Attesters, those Attester nodes need to validate DAS, ZK Proof, and stateless clients. The cost of this validation will be very low, so the threshold for becoming a consensus node will still be relatively low.

For example, suppose some super nodes build blocks, and 90% of them are yours, 5% are his, and 5% belong to others. Even if you completely refuse to accept any transactions, it is not particularly bad. Why? Because you cannot interfere with the entire consensus process.

So you cannot perform a 51% attack; the only thing you can do is to refuse certain users' transactions.

Users may just need to wait for ten or twenty blocks for another person to include their transaction in a block, and that’s the first point.

The second point is that we have the concept of FOCIL (Fork-Choice enforced Inclusion Lists). What is FOCIL for?

FOCIL separates the role of selecting transactions from the role of executing them. This way, the role of choosing which transactions go into the next block can be more decentralized. Through FOCIL, smaller nodes gain the independent ability to choose transactions to include in the next block, and even if you are a large node, your power is actually very limited[1].
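The inclusion-list idea described here (FOCIL[1]) can be sketched in a few lines. This is a hedged illustration, not the actual spec: the committee structure, the "block must be full" escape hatch, and all function names are simplifying assumptions.

```python
# Illustrative sketch of the FOCIL idea: a committee of small nodes each
# publishes an inclusion list; a builder's block is only valid if it contains
# every listed transaction (or is demonstrably full). All names and rules
# here are simplified assumptions, not the real protocol.

def aggregate_inclusion_list(committee_lists):
    """Union of the transactions demanded by the committee members."""
    required = set()
    for txs in committee_lists:
        required.update(txs)
    return required

def block_satisfies_focil(block_txs, committee_lists, block_is_full=False):
    """Valid only if the block includes every required tx, unless the
    block genuinely ran out of capacity (block_is_full)."""
    required = aggregate_inclusion_list(committee_lists)
    missing = required - set(block_txs)
    return not missing or block_is_full

# A large builder cannot censor tx "c": a small committee member listed it.
committee = [["a", "b"], ["c"], ["b"]]
assert block_satisfies_focil(["a", "b", "c", "x"], committee)
assert not block_satisfies_focil(["a", "b", "x"], committee)
```

The point of the sketch: the power to *select* transactions sits with many small nodes, while the powerful builder is reduced to ordering and execution.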

This method is more complex than before. Previously, we thought of each node as a personal laptop. But if you look at Bitcoin, it now also has a relatively hybrid architecture, because Bitcoin mining is done in data centers.

So in POS, it is done like this: some nodes require more computational power and resources. However, the rights of these nodes are limited, and other nodes can be very decentralized, ensuring the security and decentralization of the network. But this method is more complex, so this is also a challenge for us.

Yan:

Very good thinking. Centralization is not necessarily a bad thing, as long as we can limit malicious actions.

Vitalik:

Yes.

03 Issues Between Layer 1 and Layer 2, and Future Directions

Yan:

Thank you for answering my long-standing confusion. Now we come to the second part of the question. As a witness to Ethereum's journey, Layer 2 has actually been very successful. The TPS issue has indeed been resolved. Unlike during the ICO days (when transactions were congested).

I personally think that Layer 2 is quite usable now. However, the liquidity fragmentation across Layer 2s has led many people to propose various solutions. What do you think about the relationship between Layer 1 and Layer 2? Is the current Ethereum mainnet too hands-off, too decentralized, imposing no constraints on Layer 2? Should Layer 1 establish rules with Layer 2, create some profit-sharing models, or adopt solutions like Based Rollup? Justin Drake recently proposed this on Bankless, and I agree with it. What do you think? Also, if there are already corresponding solutions, when will they go live?

Vitalik:

I think there are several issues with our Layer 2 now.

First, their progress in security is not fast enough. So I have been pushing for all Layer 2s to upgrade to Stage 1, and I hope they can upgrade to Stage 2 this year. I have been urging them to do this while also supporting L2BEAT to do more transparency work in this area.

Second, there is the issue of L2 interoperability. This refers to cross-chain transactions and communication between two L2s. If two L2s are within the same ecosystem, interoperability needs to be simpler, faster, and cheaper than it is now.

Last year, we started this work, now called the Open Intents Framework, along with Chain-specific addresses, which is mostly UX-related work.

In fact, I believe that 80% of the cross-chain issues for L2 are actually UX problems.

Although the process of solving UX issues can be painful, as long as the direction is correct, we can simplify complex problems. This is also the direction we are working towards.

Some things need to go further. For example, the withdrawal time for Optimistic Rollup is currently one week. If you have a token on Optimism or Arbitrum, transferring that token to L1 or to another L2 requires waiting a week.

You can have Market Makers wait out the week for you (and pay them a certain fee). For smaller transactions, regular users can use the Open Intents Framework (for example, the Across Protocol) to transfer from one L2 to another, which is feasible. However, for larger transactions, Market Makers still have limited liquidity, so the fees they require will be relatively high. Last week, I published an article[2] in which I support the 2-of-3 validation method: OP + ZK + TEE.

Because if we implement that 2-of-3, it can simultaneously meet three requirements.

The first requirement is completely Trustless, without needing a Security Council; TEE technology plays a supportive role, so it does not need to be fully trusted.

Second, we can start using ZK technology, but ZK is still a relatively early technology, so we cannot fully rely on it yet.

Third, we can reduce the withdrawal time from one week to one hour.
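The 2-of-3 acceptance rule can be sketched as a simple quorum check. This is a hedged illustration only: the prover names, verdict encoding, and threshold logic are assumptions for exposition, not a real bridge API.

```python
# Hedged sketch of the 2-of-3 finalization idea: a withdrawal's state root
# is accepted once any two of the three independent proof systems
# (optimistic fault proofs, ZK validity proofs, TEE attestations) agree.
# Names and verdict values are illustrative assumptions.

PROVERS = ("optimistic", "zk", "tee")

def finalize(verdicts):
    """verdicts maps prover name -> True (valid) / False (invalid) / None
    (no answer yet). Accept when at least 2 of the 3 say valid."""
    approvals = sum(1 for p in PROVERS if verdicts.get(p) is True)
    return approvals >= 2

# ZK + TEE agree quickly, so the one-week optimistic window is not needed,
# and no single system (nor a security council) is trusted on its own.
assert finalize({"zk": True, "tee": True, "optimistic": None})
assert not finalize({"zk": True, "tee": False, "optimistic": None})
```

The design point: any single prover can be wrong or slow (ZK is young, TEE is only supportive) without either blocking finality or being able to finalize a bad state alone.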

You can imagine that if users use the Open Intents Framework, the liquidity cost for Market Makers would decrease by 168 times. Because the time Market Makers need to wait (to perform rebalancing operations) would be reduced from one week to one hour. In the long term, we plan to reduce the withdrawal time from one hour to 12 seconds (the current block time), and if we adopt SSF, it can be reduced to 4 seconds.
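The "168 times" figure is just the ratio of how long a Market Maker's capital is locked; the worked arithmetic, under that assumption, is:

```python
# Worked arithmetic behind the "168x" figure: a market maker's liquidity is
# locked for the withdrawal window, so shrinking the window shrinks the
# liquidity cost proportionally (assuming cost scales linearly with time).

WEEK_HOURS = 7 * 24              # current one-week Optimistic Rollup window
assert WEEK_HOURS == 168

improvement = WEEK_HOURS / 1     # one-week -> one-hour withdrawals
assert improvement == 168

# Longer term: one hour -> 12 s (current block time), or 4 s with SSF.
hour_seconds = 3600
assert hour_seconds / 12 == 300
assert hour_seconds / 4 == 900
```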

Currently, we will also use techniques like zk-SNARK aggregation to process ZK proofs in parallel, reducing latency a bit. Of course, if users do this with ZK directly, they do not need to go through Intents. However, if they do it through Intents, the cost will be very low, which is part of interoperability.

Regarding the role of L1, in the early stages of the L2 Roadmap, many people thought we could completely replicate Bitcoin's Roadmap, where L1 would have very few uses, only doing proofs (and minimal work), while L2 could do everything else.

However, we found that if L1 does not play any role at all, it would be dangerous for ETH.

As we discussed before, one of our biggest concerns is: the success of Ethereum applications cannot translate into the success of ETH.

If ETH is not successful, it will lead to our community having no funds and being unable to support the next round of applications. Therefore, if L1 does not play a role at all, the user experience and the entire architecture will be controlled by L2 and some applications. There will be no one to represent ETH. So if we can assign more roles to L1 in some applications, it would be better for ETH.

Next, we need to answer the question: What will L1 do? What will L2 do?

In February, I published an article[3] stating that in an L2-centric world, there are many important things that L1 needs to do. For example, L2s need to send proofs to L1; if an L2 has issues, users will need to cross over to another L2 through L1. Additionally, Keystore wallets and Oracle data can be placed on L1, etc. Many such mechanisms rely on L1.

There are also some high-value applications, such as DeFi, which are actually more suitable for L1. One important reason why some DeFi applications are more suitable for L1 is their time horizon; users need to wait a long time, such as one year, two years, or three years.

This is especially evident in prediction markets, where sometimes questions are asked about what will happen in 2028.

Here lies a problem: if the governance of an L2 has issues, theoretically all users there can exit; they can move to L1 or to another L2. However, if an application within this L2 has assets locked in long-term smart contracts, then users cannot exit. So many theoretically safe DeFi applications are not very safe in practice.

For these reasons, some applications should still be built on L1, so we are starting to pay more attention to the scalability of L1.

We now have a roadmap to improve L1's scalability with about four to five methods by 2026.

The first is Delayed Execution (separating block validation and execution), which means we can validate blocks in each slot and execute them in the next slot. This has the advantage of potentially increasing the maximum acceptable execution time from 200 milliseconds to 3 seconds or 6 seconds, allowing for more processing time[4].

The second is Block Level Access List, which means each block will need to specify in its information which accounts' states need to be read and related storage states. This is somewhat similar to Stateless without Witness, and it has the advantage that we can process EVM execution and IO in parallel, which is a relatively simple implementation method for parallel processing.

The third is Multidimensional Gas Pricing[5], which sets a separate maximum capacity for each resource in a block; this is very important for security.
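The multidimensional gas pricing idea can be sketched as a per-resource cap check. This is a hedged illustration: the resource names and limit values below are invented for the example, not the actual EIP parameters.

```python
# Minimal sketch of multidimensional gas pricing: instead of one scalar gas
# limit, each resource (computation, calldata, state access, ...) has its
# own per-block cap, so no single resource can be pushed to a dangerous
# extreme. Resource names and limits here are illustrative assumptions.

BLOCK_LIMITS = {"compute": 30_000_000, "calldata": 1_000_000, "state_access": 5_000}

def block_within_limits(tx_usages):
    """tx_usages: list of dicts mapping resource -> units consumed.
    The block is valid iff its totals stay under EVERY per-resource cap."""
    totals = {r: 0 for r in BLOCK_LIMITS}
    for usage in tx_usages:
        for resource, amount in usage.items():
            totals[resource] += amount
    return all(totals[r] <= BLOCK_LIMITS[r] for r in BLOCK_LIMITS)

ok_block = [{"compute": 10_000_000, "calldata": 200_000, "state_access": 1_000}] * 2
spam_block = [{"compute": 1_000, "calldata": 600_000, "state_access": 10}] * 2
assert block_within_limits(ok_block)
assert not block_within_limits(spam_block)   # calldata cap exceeded
```

With a single scalar limit, a block full of cheap-to-compute but calldata-heavy transactions could max out one resource; separate caps bound the worst case per resource.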

Another is historical data processing (EIP-4444), which means not every node has to permanently store all historical data. For example, each node might store only 1%, and we can use a p2p method where your node stores one part and his node stores another. This way, the information is stored in a more decentralized way.
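The "each node stores 1%" idea can be sketched with deterministic hashing. This is purely illustrative: the hashing scheme, the 1% fraction, and the node naming are assumptions, not the EIP-4444 design.

```python
# Sketch of the distributed-history idea behind EIP-4444: no node keeps
# everything; each node deterministically keeps a small fraction of the
# history chunks, and across many nodes every chunk is retained somewhere
# with overwhelming probability. Hash-based assignment is an assumption.

import hashlib

def stores(node_id: str, chunk_id: int, fraction: float = 0.01) -> bool:
    """Deterministic rule: this node keeps roughly `fraction` of all chunks."""
    digest = hashlib.sha256(f"{node_id}:{chunk_id}".encode()).digest()
    value = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
    return value < fraction

# With 2000 nodes each storing ~1%, a given chunk has ~20 expected holders.
nodes = [f"node-{i}" for i in range(2000)]
chunk = 123_456
holders = [n for n in nodes if stores(n, chunk)]
assert len(holders) > 0
```

Because the rule is a pure function of (node, chunk), any peer can compute which nodes to ask for a given piece of history, with no coordinator.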

So if we can combine these four solutions, we believe we can potentially increase L1's gas limit by 10 times, allowing all our applications to start relying more on L1 and doing more on L1, which would benefit L1 and ETH as well.

Yan:

Okay, the next question, are we likely to see the Pectra upgrade this month?

Vitalik:

Actually, we hope to do two things: approximately at the end of this month, we will conduct the Pectra upgrade, and then we will perform the Fusaka upgrade in Q3 or Q4.

Yan:

Wow, that fast?

Vitalik:

Hopefully.

Yan:

The next question I have is also related to this. As someone who has witnessed Ethereum's growth, we know that to ensure security, there are about five or six clients (consensus clients and execution clients) being developed simultaneously, which involves a lot of coordination work, leading to longer development cycles.

This has its pros and cons; compared to other L1s, it may indeed be slow, but it is also safer.

However, what solutions are there to avoid waiting a year and a half for an upgrade? I have seen you propose some solutions; could you elaborate on them?

Vitalik:

Yes, there is a solution where we can improve coordination efficiency. We are now starting to have more people who can move between different teams to ensure more efficient communication between teams.

If a client team has an issue, they can raise it and let the research team know. Actually, one of the advantages of Thomas becoming one of our new EDs is that he is from the client (team), and now he is also in the EF (team). He can facilitate this coordination, which is the first point.

The second point is that we can be stricter with the client teams. Our current approach is that if there are five teams, we need all five teams to be fully prepared before we announce the next hard fork (network upgrade). We are now considering that we can start the upgrade as long as four teams are ready, so we do not have to wait for the slowest one, which can also motivate everyone more.

04 Views on Cryptography and AI

Yan:

So appropriate competition is still necessary. It's great; I really look forward to every upgrade, but let's not make everyone wait too long.

Next, I want to ask some questions related to cryptography, which are somewhat scattered.

In 2021, when our community was just established, we gathered developers from major exchanges and researchers from venture capital firms to discuss DeFi. 2021 was indeed a stage of mass participation, when everyone was understanding, learning, and designing DeFi.

Looking back at the development, regarding ZK, whether for the public or developers, learning ZK, such as Groth16, Plonk, Halo2, has become increasingly difficult for developers to catch up with as technology progresses rapidly.

Additionally, we now see a direction where the development of ZKVM is also fast, leading to the direction of ZKEVM not being as popular as before. When ZKVM matures, developers may not need to focus too much on the ZK underlying technology.

What are your suggestions and views on this?

Vitalik:

I think for the ZK ecosystem, the best direction is for most ZK developers to learn a high-level language, that is, an HLL (High-Level Language). They can write their application code in the HLL, while the researchers of the proof system continue to modify and optimize the underlying algorithms. Development needs to be layered; developers should not need to know what happens at the layer below.

Currently, there may be a problem that the ecosystem of Circom and Groth16 is quite developed, but this poses a significant limitation for ZK ecosystem applications. Because Groth16 has many drawbacks, such as the need for each application to have its own Trusted Setup, and its efficiency is not very high, we are also considering that we need to allocate more resources here and help some modern HLLs achieve success.

Another good route is ZK RISC-V. Because RISC-V can also become an HLL, many applications, including EVM and some others, can be written on RISC-V[6].

Yan:

Okay, so developers only need to learn Rust, which is great. I attended Devcon in Bangkok last year and also heard about the development of applied cryptography, which was quite enlightening.

Regarding applied cryptography, what do you think about the combination of ZKP, MPC, and FHE, and what advice would you give to developers?

Vitalik:

Yes, this is very interesting. I think FHE has good prospects now, but there is a concern: MPC and FHE always require a Committee, meaning you need to select, for example, seven or more nodes. If those nodes can be attacked, say 51% or 33% of them, then your system will have problems. It is equivalent to the system having a Security Council, and actually more serious than a Security Council, because if an L2 is at Stage 1, the Security Council requires 75% of its members to be compromised before issues arise[7]. That's the first point.

The second point is that a reliable Security Council will mostly keep their keys in cold wallets, meaning they are mostly offline. However, in most MPC and FHE systems, the Committee needs to be continuously online for the system to operate, so they may be deployed on a VPS or other servers, making them easier to attack.

This worries me a bit; I think many applications can still be developed, which have advantages but are not perfect.

Yan:

Finally, I want to ask a relatively light question. I see you have been paying attention to AI recently, and I want to list some viewpoints.

For example, Elon Musk said that humans might just be a guiding program for silicon-based civilizations.

Then there is a viewpoint in "The Network State" that centralized countries may prefer AI, while democratic countries prefer blockchain.

From our experience in the crypto space, decentralization presupposes that everyone will abide by the rules, will check and balance each other, and will understand how to bear risks, which ultimately leads to elite politics. So what do you think of these viewpoints? Just share your thoughts.

Vitalik:

Yes, I’m thinking about where to start answering.

Because the field of AI is very complex. For example, five years ago, no one would have predicted that the U.S. would have the best closed-source AI in the world, while China would have the best open-source AI. AI can enhance everyone's capabilities, and sometimes it can also enhance the power of some centralized (countries).

However, AI can also have a somewhat democratizing effect. When I use AI myself, I find that in areas where I am already among the top thousand in the world, such as in some fields of ZK development, AI actually helps me very little in the ZK part; I still need to write most of the code myself. But in areas where I am a novice, AI can help me a lot. For example, in developing Android apps, I had never done it before. I created an app ten years ago using a framework and wrote it in JavaScript, then converted it into an app; apart from that, I had never written a native Android app.

Earlier this year, I conducted an experiment where I wanted to try writing an app through GPT, and it was completed within an hour. This shows that the gap between experts and novices has been significantly reduced with the help of AI, and AI can also provide many new opportunities.

Yan:

To add a point, I really appreciate the new perspective you’ve given me. I previously thought that with AI, experienced programmers would learn faster, while it would be unfriendly to novice programmers. But in some ways, it indeed enhances the capabilities of novices. It may be a form of equality rather than division, right?

Vitalik:

Yes, but now a very important question that also needs to be considered is what effects the combination of some technologies we are developing, including blockchain, AI, cryptography, and other technologies, will have on society.

Yan:

So you still hope that humanity will not just be under elite rule, right? You also hope to achieve a Pareto optimality for the entire society, where ordinary people become super individuals through the empowerment of AI and blockchain.

Vitalik:

Yes, yes, super individuals, super communities, super humans.

05 Expectations for the Ethereum Ecosystem and Advice for Developers

Yan:

OK, then we move on to the last question, what are your expectations and messages for the developer community? What would you like to say to the developers in the Ethereum community?

Vitalik:

To those developers of Ethereum applications, it’s time to think.

There are many opportunities to develop applications in Ethereum now, and many things that were previously impossible can now be done.

There are many reasons for this, such as:

First: The previous TPS of L1 was completely insufficient, but now this problem is gone;

Second: There was previously no way to solve privacy issues, but now there is;

Third: Because of AI, the difficulty of developing anything has decreased. It can be said that although the complexity of the Ethereum ecosystem has increased somewhat, AI can still help everyone better understand Ethereum.

So I think many things that failed in the past, including ten years ago or five years ago, may now succeed.

In the current blockchain application ecosystem, I think the biggest problem is that we have two types of applications.

The first type can be described as very open, decentralized, secure, and particularly idealistic (applications). But they only have 42 users. The second type can be described as casinos. The problem is that both extremes are unhealthy.

So what we hope to do is create applications that:

First, users will actually like to use and that have real value; those applications will be better for the world.

Second, have real business models, that is, are economically sustainable and do not need to rely on the limited funds of foundations or other organizations, which is also a challenge.

But now I think everyone has more resources than before, so if you can find a good idea and execute it well, your chances of success are very high.

Yan:

Looking back, I think Ethereum has been quite successful, continuously leading the industry while striving to solve the problems faced by the industry under the premise of decentralization.

Another point that resonates with me is that our community has always been non-profit, through Gitcoin Grants in the Ethereum ecosystem, as well as OP's retroactive rewards and airdrop rewards from other projects. We have found that building in the Ethereum community can receive a lot of support. We are also thinking about how to ensure the community can operate sustainably and stably.

Building Ethereum is truly exciting, and we hope to see the true realization of the world computer soon. Thank you for your valuable time.

The interview took place at Mo Sing Leng, Hong Kong

April 7, 2025


The references mentioned by Vitalik in the article are summarized as follows:

[1]: https://ethresear.ch/t/fork-choice-enforced-inclusion-lists-focil-a-simple-committee-based-inclusion-list-proposal/19870
[2]: https://ethereum-magicians.org/t/a-simple-l2-security-and-finalization-roadmap/23309
[3]: https://vitalik.eth.limo/general/2025/02/14/l1scaling.html
[4]: https://ethresear.ch/t/delayed-execution-and-skipped-transactions/21677
[5]: https://vitalik.eth.limo/general/2024/05/09/multidim.html
[6]: https://ethereum-magicians.org/t/long-term-l1-execution-layer-proposal-replace-the-evm-with-risc-v/23617
[7]: https://specs.optimism.io/protocol/stage-1.html?highlight=75#stage-1-rollup
