Author: Vitalik Buterin
Translated by: Yangz, Techub News
Perhaps the biggest trend of this century can be summarized as "the internet has become real life." It all began with email and instant messaging. Private conversations that relied on word of mouth and written communication for thousands of years are now conducted on digital infrastructure. Next, we welcomed digital finance—encompassing cryptocurrency finance and the digitization of traditional finance itself. Then came our health: thanks to smartphones, personal health tracking watches, and information inferred from consumer data, various types of information about our bodies are being processed through computers and computer networks. In the next twenty years, I expect this trend to sweep through other fields, including various government processes (eventually even voting), monitoring physical and biological indicators and threats in public environments, and ultimately even influencing our thoughts through brain-computer interface technology.
I believe these trends are inevitable. Their advantages are too great; in a highly competitive global environment, civilizations that reject these technologies will first lose their competitiveness and then surrender their sovereignty to those that embrace technology. However, in addition to providing significant advantages, these technologies also profoundly affect the power dynamics within and between nations.
The civilizations that benefit most from each new wave of technology are not its consumers but its producers. Centrally planned "equal access" schemes built on closed platforms and APIs capture at most a small fraction of the value, and break down whenever conditions fall outside the preset "norm." Moreover, this future requires us to place immense trust in technology. If that trust is broken (for example, through backdoors or security vulnerabilities), we face very serious problems. Even the mere possibility of broken trust forces people back into essentially exclusionary social trust models ("Is this thing built by someone I trust?"). This creates incentives that propagate all the way up the technology stack: sovereign is he who decides on the state of exception.
To avoid these problems, the technologies throughout the entire technology stack (software, hardware, and biotechnology) need to possess two interwoven characteristics: true openness (i.e., open source, including free licensing) and verifiability (ideally, including the ability for end users to verify directly).
The internet is real life. We hope it can become a utopia, not a dystopia.
The Importance of Openness and Verifiability in Health
During the COVID-19 pandemic, we witnessed the consequences of unequal access to production technologies. Vaccines were produced in only a few countries, leading to significant disparities in vaccine availability across different nations. Wealthy countries received high-quality vaccines in 2021, while others had to wait until 2022 or 2023 for lower-quality vaccines. Although there were some initiatives to ensure equitable access, these initiatives had very limited impact due to the capital-intensive patented manufacturing processes that could only be conducted in a few locations.
COVID-19 vaccine coverage from 2021 to 2023.
The second major issue with vaccines was the lack of transparency in both the science and the communication strategies, which tried to convince the public that vaccines carried absolutely no risks or downsides. That falsehood ultimately fueled distrust, and today this distrust has spiraled into what looks like a wholesale denial of the past half-century of scientific achievement.
In fact, both of these issues are solvable. Vaccines like those from PopVax, funded by the Balvi fund, have lower development costs and a more open research process, which reduces access inequality and makes their safety and efficacy easier to analyze and verify. We could go even further and make verifiability a first-class principle in vaccine design.
Similar issues exist in the digital realm of biotechnology. When you talk to longevity researchers, a common point you will hear is that the future of anti-aging medicine is personalized and data-driven. To know what medication to recommend or which nutrients to adjust for a person today, you need to understand their current bodily condition. If we could collect and process vast amounts of data in real-time, this process would be much more efficient.
This watch collects 1000 times more data than Worldcoin does. That has both benefits and drawbacks.
The same principle applies to defensive biotechnologies aimed at preventing negative events, such as combating pandemics. The sooner a pandemic is detected, the more likely it is to be stopped at its source; even if it cannot be stopped, every additional week means more time to prepare and begin formulating responses. During a pandemic, real-time knowledge of which areas are affected is immensely valuable for deploying countermeasures. If an ordinary infected person can self-isolate within an hour of noticing symptoms, the spread they cause may be 72 times smaller than if they had gone on infecting others for three days (one hour of infectious contact versus 72 hours). If we know which 20% of locations are responsible for 80% of the spread, improving air quality in those areas yields further gains. All of this requires large numbers of sensors that can communicate in real time, transmitting information to other systems.
As we move toward a more "sci-fi" direction, we touch on brain-computer interfaces, which can greatly enhance productivity, help people understand each other better through telepathic communication, and pave a safer path toward highly intelligent AI.
If the infrastructure used for biological and health tracking (for individuals and spaces) is proprietary, then the data will default to flow into the hands of large companies. These companies have the ability to build various applications on this basis, while others cannot. They may provide access through APIs, but API access will be limited, used for monopolistic rent extraction, and can be revoked at any time. This means that only a few individuals and companies can access one of the most important elements of a major technological field in the 21st century, which in turn limits who can reap economic benefits from it.
On the other hand, if such personal health data is insecure, hackers can use any health issue to extort you, extract value from you by adjusting insurance and medical product pricing, and, if the data includes location tracking, know exactly where to ambush you for a kidnapping. Conversely, your location data (which is frequently hacked) can also be used to infer your health status. And if your brain-computer interface is hacked, adversaries can actually read (or worse, write) your thoughts. This is no longer science fiction: one article describes a credible attack in which a hacked brain-computer interface could cause someone to lose motor control.
In summary, this brings tremendous benefits but also significant risks. And these risks highly emphasize the importance of openness and verifiability.
The Importance of Openness and Verifiability in Personal and Commercial Digital Technologies
Earlier this month, I had to fill out and sign a form required for a legal procedure. At the time, I was not in the country. Although there is a national electronic signature system, I had not set it up. I had to print the form, sign it, walk to a nearby DHL, spend considerable time filling out paper documents, and then pay to have the form shipped to the other side of the world. It took half an hour and cost $119. On the same day, I also needed to sign a (digital) transaction to execute an operation on the Ethereum blockchain. It took 5 seconds and cost $0.10 (to be fair, without the blockchain, the signature could have been completely free).
Such stories abound in areas like corporate and nonprofit governance, intellectual property management, and more. Over the past decade, a significant share of blockchain startups' business plans have reflected exactly this. On top of that, there is the ultimate application of "digitally exercising personal rights": payments and finance.
Of course, this scenario carries significant risks: what if the software or hardware is hacked? This is a risk the crypto space recognized early on: blockchains are permissionless and decentralized, so if you lose control of your funds, there is no recourse and no "superman" to appeal to. Not your keys, not your coins. For this reason, the crypto space began exploring multi-signature and social recovery wallets, as well as hardware wallets. In reality, though, the absence of a trustworthy "superman" is in many cases not an ideological choice but an inherent feature of the scenario. Even in traditional finance, "supermen" have failed to protect most people: for example, only 4% of scam victims recover their losses. And in use cases involving personal data custody, a data leak fundamentally cannot be reversed. We therefore need true verifiability and security, covering software and, ultimately, hardware.
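The core logic of the social recovery wallets mentioned above can be sketched in a few lines. This is a minimal, illustrative model only: real social-recovery wallets (such as smart-contract wallets on Ethereum) enforce this on-chain with cryptographic signatures and time delays, whereas here guardians are plain strings and "approval" is just a method call.

```python
# Minimal sketch of M-of-N social recovery logic (illustrative only).
# Real wallets implement this in smart-contract code with signature
# checks and delay periods; names and thresholds here are made up.

class SocialRecoveryWallet:
    def __init__(self, owner_key: str, guardians: list, threshold: int):
        if threshold > len(guardians):
            raise ValueError("threshold cannot exceed guardian count")
        self.owner_key = owner_key
        self.guardians = set(guardians)
        self.threshold = threshold
        self._approvals = {}  # proposed new key -> set of approving guardians

    def approve_recovery(self, guardian: str, new_key: str) -> bool:
        """A guardian votes to hand control to new_key.

        Returns True once the approval threshold is reached and
        ownership has actually been transferred.
        """
        if guardian not in self.guardians:
            raise PermissionError("not a guardian")
        votes = self._approvals.setdefault(new_key, set())
        votes.add(guardian)
        if len(votes) >= self.threshold:
            self.owner_key = new_key  # recovery succeeds
            self._approvals.clear()
            return True
        return False
```

With three guardians and a threshold of two, losing your device (and key) is recoverable, while no single guardian can seize the wallet alone.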
A proposed technology for checking that computer chips were manufactured correctly.
Importantly, in the hardware domain, the risks we are trying to guard against go far beyond "are the manufacturers evil?" The problem is the sheer number of dependencies, most of them closed-source, where negligence anywhere can have unacceptable security consequences. A recent paper shows how microarchitectural choices can undermine the side-channel resistance of designs that were proven secure only at the software level. Vulnerabilities exploited by attacks like EUCLEAK are harder to detect precisely because so many components are proprietary. And if training runs on compromised hardware, backdoors can be implanted into an AI model during training.
Another issue in all these cases is the drawbacks of closed and centralized systems, even if they are absolutely secure. Centralization creates a persistent leverage effect between individuals, companies, or nations: if your core infrastructure is built and maintained by a potentially untrustworthy company in a possibly untrustworthy country, you are vulnerable to pressure (for example, see Henry Farrell's discussion on weaponizing interdependence here). This is precisely the type of problem that cryptographic technology aims to solve, but its domain extends far beyond finance.
The Importance of Openness and Verifiability in Digital Civic Technologies
I often engage with various individuals who are exploring better governance forms suited to their respective contexts in the 21st century. Some, like Audrey Tang, are trying to elevate already well-functioning political systems to new levels, empowering local open-source communities and utilizing mechanisms like citizens' assemblies, lotteries, and quadratic voting. Others are starting from scratch: for instance, some Russian political scientists recently proposed a constitution for Russia characterized by strong protections for individual freedoms and local autonomy, with a strong institutional bias toward peace and opposition to aggression, granting unprecedented importance to direct democracy. There are also those, like economists studying land value tax or congestion charges, who are working to improve their country's economy.
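Of the mechanisms listed above, quadratic voting is simple enough to sketch: casting n votes on a single issue costs n² voice credits, so expressing a stronger preference is possible but progressively more expensive. The function names and the notion of a fixed credit budget below are illustrative, not any specific deployed system:

```python
import math

def qv_cost(votes: int) -> int:
    """Quadratic voting: n votes on one issue cost n**2 credits."""
    return votes * votes

def max_votes(budget: int) -> int:
    """Most votes a voter can put on a single issue with this budget."""
    return math.isqrt(budget)

def total_cost(allocation: dict) -> int:
    """Credits spent across several issues, e.g. {"parks": 2, "transit": 3}."""
    return sum(qv_cost(v) for v in allocation.values())
```

The quadratic cost curve is the whole point of the design: a voter with 100 credits can cast at most 10 votes on one issue, but 7 votes on one and 7 on another (98 credits), which rewards spreading influence in proportion to genuine intensity of preference.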
Different people may have varying levels of enthusiasm for each idea. But they all share one commonality: they require high-bandwidth participation, so any realistic implementation must be digital. Paper and pen records can suffice for the most basic property registrations and quadrennial elections, but they are far from adequate for any scenario that demands higher bandwidth or more frequent input from us.
However, security researchers have historically regarded ideas like electronic voting with attitudes ranging from skepticism to outright hostility. Here is a good summary of the case against electronic voting. To quote from that document:
"First, the technology is 'black box software,' meaning the public cannot access the software that controls the voting machines. While companies protect their software to prevent fraud (and fend off competition), this also leads to the public being completely unaware of how the voting software works. It would be easy for a company to manipulate the software to produce fraudulent results. Moreover, the vendors marketing these machines compete with each other, making it impossible to guarantee that they produce machines in the best interest of voters and the accuracy of ballots."
There are numerous real-world cases that demonstrate this skepticism is warranted.
A critical analysis of Estonia's internet voting in 2014.
These arguments apply in various other contexts as well. But I predict that as technology advances, the response of "let's just not do it" will become increasingly unrealistic across many fields. The world is becoming more efficient due to technology (for better or worse), and I predict that any system that does not align with this trend will become increasingly irrelevant as people circumvent it in personal and collective affairs. Therefore, we need an alternative: to truly tackle this difficult task and figure out how to make complex technological solutions both secure and verifiable.
In theory, "securely verifiable" and "open source" are two different things. Proprietary systems can be entirely secure; airplanes, for example, are highly proprietary technology, yet commercial aviation is overall a very safe mode of travel. What proprietary models cannot achieve, however, is a shared understanding of security: the ability to be trusted by parties that do not trust each other.
Civic institutions like elections are a critical case where shared understanding of security is very important. Another case is the collection of evidence in court. Recently, in Massachusetts, a large amount of breathalyzer evidence was ruled inadmissible because information about test failures was found to have been concealed.
"Wait, so are all the results problematic? No. In fact, in most cases, breathalyzer tests do not have calibration issues. However, because investigators later discovered that the state crime lab had concealed evidence indicating that the problems were more widespread than they had stated, Judge Frank Gaziano wrote that the due process rights of all these defendants had been violated."
Due process in court essentially requires not only fairness and accuracy but also a shared understanding of fairness and accuracy—because if society does not form a shared understanding that the courts are doing the right thing, it can easily fall into a vicious cycle of people acting on their own.
In addition to verifiability, openness itself has inherent benefits. Openness allows local communities to design governance, identity verification, and other systems that align with local goals. If the voting system is proprietary, then countries (or provinces, towns) wanting to experiment with new systems will face significant challenges: they either have to persuade companies to implement their preferred rules as features or start from scratch and complete all the work to ensure security. This adds high costs to political system innovation.
In any of these areas, adopting a more open-source, hacker-ethical approach would empower local implementers with more autonomy, whether they act as individuals or as part of a government or company. To achieve this, open building tools need to be widely available, and infrastructure and codebases need to be freely licensed to allow others to build upon them. Furthermore, to minimize power disparities, copyleft is particularly important.
The last crucial area of civic technology in the coming years is physical security. Over the past two decades, surveillance cameras have proliferated, raising many concerns about civil liberties. However, in my view, the recent rise of drone warfare will make "not adopting high-tech security" an unviable option. Even if a country's laws do not infringe on personal freedoms, if that country cannot protect you from other nations (or rogue companies and individuals) imposing their laws upon you, then it is all meaningless. Drones make such attacks much easier. Therefore, we need countermeasures, which may involve a significant number of counter-drone systems, sensors, and cameras.
If these tools are proprietary, data collection will become opaque and centralized. However, if these tools are open and verifiable, we have the opportunity to achieve better solutions: security devices can be proven to output limited data only in specific contexts and delete the rest. We may usher in a digital physical security future that resembles digital watchdogs rather than a digital panopticon. We can envision a world where public surveillance devices are required to be open source and verifiable, and anyone has the legal right to randomly select public surveillance devices for disassembly and verification. University computer science clubs could regularly use this as an educational practice.
The Path to Openness and Verifiability
We cannot avoid the deep embedding of digital computing technology in every aspect of our individual and collective lives. By default, what we are likely to get is digital technology built and operated by centralized companies, optimized for the profit motives of a few, implanted with backdoors by the governments of their home countries, with most people in the world unable to build it themselves or verify its security. But we can strive toward better alternatives.
Imagine a world where:
You own a secure personal electronic device—it has the performance of a smartphone, the security of a hardware wallet, and is nearly as verifiable as a mechanical watch.
All your communication apps are encrypted, message patterns are obfuscated through mix networks, and all code is formally verified. You can be confident that your private communications are genuinely private.
Your assets are standardized ERC20 assets on-chain (or on a server that publishes hashes and proofs to ensure correctness), managed by a wallet controlled by your personal electronic device. If you lose the device, you can recover access through a combination of other devices, family, friends, or institutions of your choosing (not necessarily the government: since anyone can easily offer this service, even, say, a church could provide it).
There exists an open-source version of infrastructure similar to Starlink, allowing us to achieve robust global connectivity without relying on a few individual actors.
An open-weight large language model on the device scans your activities, provides suggestions, and automates tasks, warning you when you might receive misinformation or are about to make a mistake.
The operating system is also open source and formally verified.
You wear a 24-hour open-source personal health tracking device that is also verifiable, allowing you to access your data and ensure that others cannot obtain it without consent.
We have more advanced forms of governance, employing lotteries, citizens' assemblies, quadratic voting, and generally clever combinations of democratic voting to set goals and determine how to achieve them through some method of selecting proposals from experts. As a participant, you can be confident that the system is executing the rules as you understand them.
Public spaces are equipped with monitoring devices to track biological variables (such as carbon dioxide and air quality index levels, the presence of airborne diseases, wastewater). But these devices (along with any surveillance cameras and defensive drones) are open source and verifiable, with a legal framework allowing the public to conduct random inspections.
In this world, we will enjoy more security, freedom, and equal opportunities to participate in the global economy than we do today. However, achieving this world requires more investment in various technologies:
More advanced forms of cryptography. What I call the "Egyptian god cards" of cryptography (ZK-SNARKs, fully homomorphic encryption, and obfuscation) are so powerful because they let you run arbitrary programs over data in a multi-party setting, guaranteeing the output while keeping the data and the computation private. This makes much more powerful privacy-preserving applications possible. Related tools also apply here: blockchains ensure data integrity and prevent users from being excluded, and differential privacy protects data further by adding noise.
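To make the differential-privacy mention concrete, here is a minimal sketch of the classic Laplace mechanism applied to a counting query. The epsilon value and query are illustrative; real deployments need careful sensitivity analysis and privacy-budget accounting, and production libraries should be used rather than this sketch.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling.

    (rng.random() is in [0, 1), so u = -0.5 exactly is possible only
    with vanishing probability; a hardened implementation would guard
    against the resulting log(0).)
    """
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    scale = 1.0 / epsilon
    return true_count + laplace_noise(scale, rng)
```

The released value is close to the truth on average, yet no single individual's presence or absence can be confidently inferred from it; smaller epsilon means more noise and stronger privacy.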
Application and user-level security. Applications are only secure when the security guarantees they provide can truly be understood and verified by users. This will require the development of software frameworks that make it easy to build applications with strong security attributes. Importantly, browsers, operating systems, and other intermediaries (such as locally running monitoring large language models) must work together to verify applications, assess risk levels, and present this information to users.
Formal verification. We can use automated proof methods to algorithmically verify whether programs meet the properties we care about, such as not leaking data or being resistant to unauthorized third-party modifications. The Lean language has recently become a popular tool for achieving this purpose. These technologies have begun to be used to verify the ZK-SNARK proof algorithms of the Ethereum Virtual Machine (EVM) and other high-value, high-risk use cases in the cryptographic field, and they are also being applied in the broader world. In addition, we need to make further progress in other more conventional security practices.
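As a toy illustration of the kind of machine-checked guarantee Lean provides, here is a small Lean 4 proof that list reversal preserves length, so no element is silently dropped. Real verification targets, like an EVM proof system, are vastly larger, but the principle is the same: the compiler rejects the file unless the proof actually goes through.

```lean
-- A toy machine-checked proof in Lean 4: reversing a list preserves
-- its length. `simp` closes the goal using the standard library's
-- lemma about List.reverse and List.length.
theorem reverse_preserves_length (xs : List Nat) :
    xs.reverse.length = xs.length := by
  simp
```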
The cybersecurity fatalism of the 2000s was wrong: vulnerabilities (and backdoors) can be overcome. We "just" need to learn to prioritize security over other competing goals.
Open-source and security-focused operating systems. Such systems keep emerging: GrapheneOS, a security-hardened version of Android; Asterinas, a minimalist secure kernel; and Huawei's HarmonyOS, which has an open-source version and also employs formal verification. (I expect many readers will think "if it's a Huawei product, it must have a backdoor," but this completely misses the point: as long as the product is open and verifiable by anyone, who produces it is irrelevant. This is a perfect example of how openness and verifiability can counter global fragmentation.)
Secure open-source hardware. If you cannot be sure of the software actually running on the hardware and there are no side-channel data leaks, no software can be considered secure. I am particularly focused on two short-term goals in this area:
Personal secure electronic devices—known as "hardware wallets" in the blockchain space and "secure phones" among open-source enthusiasts, but when you understand the need for security and generality, the two will ultimately merge into the same thing.
Physical infrastructure in public spaces—smart locks, the aforementioned biological monitoring devices, and general "Internet of Things" technologies. We need to be able to trust them. This requires openness and verifiability.
A secure open toolchain for building open-source hardware. Current hardware design relies on a series of closed-source components, significantly increasing manufacturing costs and filling the process with licensing restrictions. This also makes hardware verification impractical: if the tools that generate chip designs are closed-source, you have no idea what you are verifying. Even existing tools like scan chains often become impractical due to the excessive closed nature of the necessary toolchain. All of this can change.
Hardware verification (such as IRIS and X-ray scanning). We need to scan chips to verify that their actual logic matches the design and that there are no additional components enabling unauthorized tampering or data extraction. We can use destructive methods, where auditors, posing as ordinary customers, randomly order products containing computer chips, then disassemble the chips and verify that the logic matches. Non-destructive verification can be achieved through IRIS or X-ray scanning, which would allow every chip to be scanned.
To establish a trust consensus, we ideally need publicly accessible hardware verification technologies. Currently, X-ray machines do not meet this requirement. Improvements can be made in two ways: first, by enhancing the verification devices (and the chip's verification friendliness) to increase accessibility; second, by supplementing "complete verification" with more limited forms of verification (such as ID tags and physical unclonable function key signatures that can be completed via smartphones), which can confirm more restricted claims, such as "Does this device belong to a batch produced by a well-known manufacturer, and has a random sample from that batch been thoroughly verified by a third-party organization?"
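The "limited verification" flow described above can be sketched as follows. A device carries a tag the manufacturer computed over its ID and production batch, and a phone app re-derives and compares the tag. Everything here is illustrative: a real scheme would use asymmetric signatures or PUF challenge-response rather than a shared HMAC key, and the key and names are invented for the example.

```python
import hashlib
import hmac

# Sketch of limited, phone-checkable device attestation. HMAC stands
# in for a real digital signature or PUF response; this illustrates
# only the verification flow, not a deployable design.

def make_tag(manufacturer_key: bytes, device_id: str, batch: str) -> str:
    """Manufacturer tags (device_id, batch) at production time."""
    msg = f"{device_id}|{batch}".encode()
    return hmac.new(manufacturer_key, msg, hashlib.sha256).hexdigest()

def check_tag(manufacturer_key: bytes, device_id: str, batch: str,
              tag: str) -> bool:
    """A phone app re-derives the tag and compares in constant time.

    A valid tag supports only the limited claim "this device belongs
    to batch X"; trust in batch X itself rests on third parties having
    thoroughly verified random samples drawn from that batch.
    """
    expected = make_tag(manufacturer_key, device_id, batch)
    return hmac.compare_digest(expected, tag)
```

Note the division of labor this implies: the cheap, ubiquitous check (a smartphone comparing a tag) confirms batch membership, while the expensive, rare check (destructive teardown or IRIS/X-ray scanning of sampled units) backs the batch's reputation.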
Open-source, low-cost, localized environments and biological monitoring devices. Communities and individuals should be able to monitor their own environments and health conditions and identify biological risks. This includes various forms of technology: personal medical devices like OpenWater, air quality sensors, general airborne disease sensors (like Varro), and larger-scale environmental monitoring systems.
The openness and verifiability of each layer of the technology stack are crucial
From the Current Situation to the Vision
The key difference between this vision and a more "traditional" view of technology is that it is more conducive to local sovereignty and individual empowerment. Security no longer relies on a global purge that leaves evil nowhere to hide; it is achieved by making the world more resilient at every level. Openness means that technologies at every layer can be improved, rather than improvement being confined to centrally planned open-API projects. And verification is no longer the privilege of rubber-stamp auditing agencies that may collude with corporations or governments; it becomes a right of the people and a hobby that society encourages.
I believe this vision is more resilient and better fits our fragmented 21st-century global landscape. However, the time to realize this vision is not infinite. Centralized security solutions that rely on enhanced data collection, implanted backdoors, and simplify verification to "is it made by a trusted developer" are advancing rapidly. Attempts to replace genuine open access with centralized solutions have been ongoing for decades, starting with Facebook's internet.org, and increasingly sophisticated attempts will continue to emerge in the future. We must accelerate our competition with these solutions while also clarifying to the public and institutions the possibilities of better alternatives.
If this vision is realized, we will gain a world of retro-futurism. On one hand, powerful technologies will enable us to improve health, self-organize more efficiently and flexibly, and resist new and old threats; on the other hand, this world will recreate the characteristics that people took for granted in the 1900s: infrastructure can be freely disassembled, verified, and modified to meet needs, and everyone can participate not only as consumers or "application builders" but can also delve into any layer of the technology stack, with everyone confident that devices will perform their claimed functions accurately.
Designing for verifiability comes at a cost. Many hardware and software optimizations deliver the speed improvements the market urgently wants, at the price of designs that are harder to understand or more fragile. And the open-source model makes profitability harder under many standard business models. I believe both problems are exaggerated, but people will not be convinced of that overnight, which raises the question of what pragmatic short-term goals we should aim for.
One of my answers is: to commit to building a fully open-source and easily verifiable technology stack, focusing on high-security, non-performance-critical applications—including consumer-level and institutional-level, remote and on-site scenarios, covering hardware, software, and biotechnology. Most computations that truly require security are not demanding in terms of speed; even in cases where speed is needed, it is often possible to achieve a balance of high performance and high trust by combining "high-performance but untrusted" and "trusted but not high-performance" components. Achieving extreme security and openness for everything is unrealistic, but we can start by ensuring these characteristics are realized in truly critical areas.