Author: Trustless Labs
On January 18, 2024, Trustless Labs held an online AMA event with the theme "Facing the Dencun Upgrade, What Will Ethereum L2 Choose?". Trustless Labs and guests from ZKFair, EthStorage, and Taiko discussed the three solutions for "data availability" after the Dencun upgrade, deployed projects, emerging projects' different focuses when choosing solutions, as well as potential issues and challenges.
X Space: https://twitter.com/i/spaces/1vAxRvmgwBjxl
Host: Frank Bruno@Co-Founder of Bitboost
Guests: Crypto White @Founder of Trustless Labs, Ark@Core Contributor of ZKFair, Zhou Qi @Founder of EthStorage, Dave@Head of Devrel of Taiko, Vincent @APAC DevRel of Scroll, David@VP of Polygon
Supported by: Foresight News, TechFlow, Odaily, BlockBeats
The event had over 500 real-time online listeners and over 7200 cumulative listeners. We sincerely thank all participants and partners for their support. Here is a recap of this AMA.
Opening Introduction
Trustless Labs is a leading technology incubator focusing on blockchain, AI, and other areas of research and technology. It is committed to being a catalyst for active innovation in the blockchain and technology fields by working with entrepreneurs through strategic investments and incubation services. Trustless Labs announced the establishment of the Phase One Bitcoin Ecosystem Fund, investing $10 million to incubate Bitcoin ecosystem projects and drive blockchain technology innovation. (Twitter: @TrustlessLabs)
ZKFair is the first ZK-L2 community based on Polygon CDK and Celestia DA, supported by ZK-RaaS provider Lumoz. ZKFair uses the stablecoin USDC as the gas token, ensuring 100% EVM compatibility, excellent performance, low costs, and strong security. (Twitter: @ZKFCommunity)
Taiko is building a decentralized Ethereum equivalent (Type-1) ZK-EVM and a general-purpose ZK-Rollup to scale in a way that closely mimics Ethereum. As Taiko is Ethereum equivalent, all existing Ethereum tools can be used directly, making any additional audits or code changes redundant—meaning less overhead for developers. (Twitter: @taikoxyz)
EthStorage provides a programmable dynamic data storage solution based on Ethereum's data availability technology. The EthStorage team has received funding from the Ethereum Foundation's Ecosystem Support Program (ESP) twice. (Twitter: @EthStorage)
Q1: The Dencun upgrade brings three choices for data availability (DA) solutions. The first is on-chain storage, such as Calldata, which is real-time and secure but expensive. The second is off-chain storage, such as EthStorage, EigenLayer, and Celestia. The third is Dencun's proto-danksharding, which by default prunes blob data after roughly 18 days, although a few nodes may choose to retain the records. Which solution do the guests think should be chosen? What different focuses might existing and emerging projects have when choosing a solution?
Crypto White@Founder of Trustless Labs
Established Rollup projects have close ties with the Ethereum Foundation and are likely to transition from the traditional Calldata on-chain solution to the more economical proto-danksharding. Optimism illustrates this: it collaborates closely with the Ethereum Foundation, although transitioning to proto-danksharding requires a significant amount of development work. The main benefit of the shift is a large reduction in layer-2 gas costs, since proto-danksharding is much cheaper than Calldata. For new Rollup projects, off-chain solutions such as EthStorage, Celestia, and EigenLayer may be a better choice. A new project built around proto-danksharding may find it hard to establish uniqueness and innovation in a market that already offers various off-chain data availability solutions, making competition with existing projects more difficult. Each solution has its own characteristics, and combining them with these DA solutions may give rise to new Rollup designs. For new Rollup projects, I believe off-chain solutions are the more feasible choice.
Dave@Head of Devrel of Taiko
I think these solutions provide a good way to explore decentralized systems that can replace certain parts of the current internet. When choosing these solutions, developers are more critical than end users. While many use cases may lower security guarantees, this is still acceptable in many cases. All alternative layers have their use cases, but they cannot fully replace the functionality provided by Rollup. Some strict Rollup use cases, such as allowing full node operation, exit mechanisms, syncing states, and generating Merkle proofs, are deeply dependent on the underlying architecture. This ultimately depends on the functionality developers want to achieve. Therefore, I believe no solution should be excluded. In fact, some very cool applications may be developed in alternative DA solutions such as EthStorage or Celestia.
Zhou Qi@Founder of EthStorage
Proto-danksharding makes it much cheaper for large projects like Optimism to upload data to Ethereum while still inheriting security from Ethereum. For example, Optimism's upgrade plan has already incorporated EIP-4844, which introduces a new data object called the binary large object (blob), and its implementation code can be found in their GitHub repository. It is important to note that proto-danksharding aims to reduce costs relative to Calldata. Although the cost-accounting methods differ, the underlying technology for uploading data still relies on P2P propagation, so the total bandwidth that can be uploaded to the Ethereum network is still limited by the current P2P network. This means that if many projects choose this method to upload data to Ethereum, there may be a bottleneck, because the available bandwidth is effectively the same as for Calldata. When there are many participants in the data market and demand is strong, proto-danksharding's cost advantage may gradually weaken, and at times it may even cost about the same as Calldata. This is something we need to monitor closely, because it is a new experiment that introduces a new data market into the Ethereum protocol. How it will behave and how much cost reduction it can achieve is still unknown, and I am looking forward to seeing its development. On the other hand, other DA solutions, such as Celestia, can already offer significantly lower costs. That may be attractive for projects that need to upload large amounts of data, such as AI projects with multi-gigabyte models. The choice depends on market and application needs and will be worked out over time. The full danksharding upgrade is expected to provide a fixed 32 MB per block, equivalent to about 2 MB per second, bringing substantial improvements to underlying capacity and performance.
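Qi's point about a "new data market" can be made concrete. EIP-4844 prices blobs with a separate, self-adjusting blob base fee that grows exponentially with the network's excess blob gas; the spec approximates the exponential with integer arithmetic. The following is a minimal Python sketch following the pseudocode in the EIP (the constants are the spec's mainnet values):

```python
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer Taylor-series approximation of factor * e**(numerator / denominator),
    as defined in the EIP-4844 specification."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

# Mainnet constants from the EIP-4844 spec
MIN_BASE_FEE_PER_BLOB_GAS = 1          # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def blob_base_fee(excess_blob_gas: int) -> int:
    """Blob base fee in wei per blob gas, given the chain's excess blob gas."""
    return fake_exponential(
        MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION
    )
```

Because the fee is exponential in excess blob gas, sustained demand above the target drives the price up quickly, which is exactly the scenario Qi describes where the advantage over Calldata can erode.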
The second issue with proto-danksharding is that it deletes blob data after a short period, about 18 days in the current specification, or sometimes even faster. This means that, unlike with Calldata today, projects like Arbitrum and Optimism can no longer claim to fully derive all layer-2 state from Ethereum layer 1. Once blob data older than 18 days is discarded, Optimism and similar projects will be unable to derive the initial layer-2 data from layer 1; they will have to rely on third parties to retrieve historical blobs or to provide snapshots of the latest state. This is a problem we are observing and trying to solve through research. The basic idea is to build a modular storage layer on top of Ethereum so that these blobs can be stored for much longer, whether months, years, or indefinitely. That way we can help layer-2 projects continue to reuse their data even after protocol nodes have discarded it. In conclusion, the emergence of new technologies that provide diversified data options is exciting. Many challenges remain, including the cost of storing historical data, how to verify that storage off-chain, and how to allocate appropriate incentives to storage providers. In the short term, there will be many excellent options for projects to choose from based on their needs.
Ark@ZKFair Core Contributor
I think this involves a trade-off between credibility and cost. Currently, the most reliable but also the most expensive option is to use Calldata. Transitioning to EIP-4844 can significantly reduce costs, but its data persistence is not as strong as Calldata's. Additionally, third-party DA providers offer different balances of credibility and cost depending on the type of project, so each choice depends on the specific use case. For emerging or startup projects, reducing costs may be the primary consideration, because they are more focused on survival than on immediately building community trust. Here I mean gas costs rather than machine or labor costs: for some startup projects, costs of thousands of dollars per day may be too high. In the initial stages, interaction with the chain may be minimal, so they can choose off-chain solutions like Celestia or the Polygon CDK. As the community grows, enhancing the credibility of DA becomes possible, and that is the appropriate time to migrate from off-chain data to more reliable solutions. Of course, this transition may come with technical challenges, such as learning how to use these solutions and effectively integrating them into their workflows.
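Ark's cost trade-off can be roughly quantified. Calldata is charged per byte in the regular execution-gas market (16 gas per non-zero byte since EIP-2028), while blobs consume a fixed amount of blob gas per 128 KB blob, priced in the separate blob-fee market, which has often been far cheaper. A back-of-the-envelope Python sketch; actual wei costs depend on the two base fees at the time of posting:

```python
# Worst-case calldata pricing (all bytes non-zero), per EIP-2028
CALLDATA_GAS_PER_NONZERO_BYTE = 16

# EIP-4844 blob parameters: each blob is 4096 field elements * 32 bytes
BYTES_PER_BLOB = 131072   # 2**17
GAS_PER_BLOB = 131072     # blob gas units consumed per blob

def calldata_gas(n_bytes: int) -> int:
    """Execution gas to post n_bytes as (worst-case, non-zero) calldata."""
    return n_bytes * CALLDATA_GAS_PER_NONZERO_BYTE

def blob_gas(n_bytes: int) -> int:
    """Blob gas to post n_bytes, rounding up to whole blobs."""
    n_blobs = -(-n_bytes // BYTES_PER_BLOB)  # ceiling division
    return n_blobs * GAS_PER_BLOB
```

Note the units differ: calldata gas is paid at the execution base fee, blob gas at the blob base fee. The cost reduction rollups see after Dencun comes largely from the blob base fee sitting well below the execution base fee while blob demand is low.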
Q2: What challenges will off-chain storage and existing on-chain storage face if a project chooses data migration? What is the migration timeline, and how should it be planned?
Crypto White@Founder of Trustless Labs
For ZK-Rollup projects, supporting proto-danksharding may pose certain challenges. Proto-danksharding uses KZG commitments, which may be incompatible with some zero-knowledge proof systems, such as STARK-based ones. While KZG appears fully compatible with the Plonk algorithm, it may not be very friendly to other ZKP (zero-knowledge proof) algorithms. As a result, some ZK-Rollup projects may lack a strong motivation to adopt proto-danksharding, since supporting it requires a significant investment of development resources. Optimistic Rollup projects, by contrast, may need only minor code changes to support proto-danksharding. Even so, migrating their data availability (DA) solution from the existing Calldata approach to proto-danksharding is not a simple task. So while migrating DA from Calldata to proto-danksharding is theoretically feasible, there may be real technical complexities in practice.
Dave@Head of Devrel of Taiko
We do not plan to migrate; we plan to enable EIP-4844 by default on our mainnet once testing is complete. Additionally, some of the plans Qi proposed are interesting, including finding a way to store at least the most recent block data somewhere so that users can trace state all the way back to the genesis block. That is an open issue currently being worked on. In the long run, especially for recent block data, deploying and using EIP-4844 should be relatively smooth.
Zhou Qi@Founder of EthStorage
From my observation of Rollup progress, Optimism, for example, is at the forefront of integrating EIP-4844, and KZG commitments are relatively friendly to Optimistic fraud-proof schemes. As for us, we initially used Calldata, but generating proofs over it was extremely expensive, so we decided to support EIP-4844 from the beginning, as it provides a more cost-effective solution. We implemented this when we launched our internal network, and we contributed to the EIP-4844 testnet. We are currently running data stress tests, and some minor issues remain to be resolved. For example, we found that Solidity does not yet support the new BLOBHASH opcode, but we have released our own library; although it is not perfect, it is useful for any layer 2. If you want to test EIP-4844 at this early stage, you need to be able to retrieve blob hashes and verify them, either optimistically in the Optimism style or with ZK validity proofs, and we have tooling that can help. The basic use of EIP-4844 is already quite stable, so I look forward to more commitments, especially from layer 2s, and to other new data features built on EIP-4844 on top of Ethereum.
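For reference, the blob hashes Qi mentions are "versioned hashes": what the BLOBHASH opcode exposes to contracts is not the KZG commitment itself but a 32-byte value derived from it, as defined in EIP-4844. A minimal Python sketch of that derivation:

```python
import hashlib

# Version byte for KZG-based versioned hashes, per the EIP-4844 spec
VERSIONED_HASH_VERSION_KZG = 0x01

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    """EIP-4844 versioned hash: version byte || sha256(commitment)[1:]."""
    assert len(commitment) == 48  # a KZG commitment is a 48-byte BLS12-381 G1 point
    return bytes([VERSIONED_HASH_VERSION_KZG]) + hashlib.sha256(commitment).digest()[1:]
```

The version prefix lets the commitment scheme be swapped later without changing the 32-byte interface contracts see, which is why verification tooling like the library Qi describes works against these hashes rather than against raw commitments.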
Developers may face challenges in adapting to new DA solutions and underlying code libraries. The migration process involves not only transferring historical data to the new DA layer but also ensuring the accuracy and stability of the data. Careful consideration is needed to determine which code logic should be modified and which should be retained to prevent data corruption. While the difficulty may not lie in the structure or design, ensuring everything is correct and secure poses a challenge. This migration process may take several months, especially when migrating from off-chain to EIP-4844, as this new technology is relatively unfamiliar to everyone and may bring more difficulties.
Q3: How should the legitimacy question of layer 2s be addressed? From the official Ethereum perspective, projects that choose proto-danksharding, or migrate from off-chain DA to proto-danksharding, count as layer-2 projects, while those on Celestia do not. How should projects view this issue and make their choices?
Dave@Head of Devrel of Taiko
Terms like Validium, Optimium, and Celestium can be used to describe the different solutions, and "layer 2" works as a general umbrella term. However, it is important to be clear that using these alternative DA layers is not equivalent to using a Rollup. So even if we classify a Validium as a layer 2, the definitions should be kept distinct. Doing so helps people understand the differences between Rollups and other solutions more clearly.
Ark@ZKFair Core Contributor
Ethereum provides a definition for projects using off-chain data, called Validium. Validium can be considered a second-layer solution, but its credibility may need to be evaluated by the community and users. It is a feasible option for startup projects to use off-chain storage, and such projects can be classified as second-layer projects. However, the Ethereum team may be more inclined to define such projects as Validium projects.
Zhou Qi@Founder of EthStorage
To answer this question, we need to pin down the definition of "layer 2." The core of a Rollup is an unconditional security guarantee: it relies only on the first layer, and from that alone we can ensure the security of the second layer. I believe that even under the current definition of layer 2, some pieces are missing. Blob data will be discarded within a few weeks, and even Calldata history is planned to be pruned in about a year. At that point, layer 2s, including Optimism, will lose the ability to derive their state purely from layer 1, yet we still call them layer 2s. Under a strict definition, a layer 2 should obtain its security guarantees from the first layer; in practice, these projects obtain most, but not all, of their security guarantees from Ethereum. In a broader sense, we can still consider them layer 2s, but as Vitalik has described, this introduces additional trust assumptions: that someone stores the historical blocks, or that the layer 2 maintains some recent state so that it can still use the recent state and recent blobs stored on the network. For example, being able to restore the latest state within a week greatly enhances the security of the system, and this capability complements the various proof systems, especially for ensuring the correctness of layer-2 computation. However, it should be noted that, relative to the strict definition of layer 2, this approach does introduce a small additional security or trust assumption.
Therefore, I lean toward a broader definition that covers different kinds of layer 2, including those using third-party DA or Validium designs. Let me pose a question: what if, in an extreme case, a post-4844 layer-2 solution becomes so large and important that its security requirements approach those of Ethereum itself? In that sense, Celestia could provide a comparable level of security. Overall, I believe the definition of layer 2 should be more open, with the different options clearly identified along with the kind of DA solution each uses. As long as projects can serve a large number of users and use the technology to protect their data, they should be able to demonstrate and verify all security aspects on-chain. That is what truly excites me about the future of layer 2.