Vitalik's new article: When technology controls everything, openness and verifiability become necessities.

Source: ChainCatcher (链捕手)

Original author: Vitalik Buterin
Original translation: Shenchao TechFlow

Introduction

In this new article published on September 24, 2025, Vitalik Buterin explores a key issue that concerns all of our futures: how do we maintain autonomy when technology takes over our lives?

The article begins by pointing out the biggest trend of this century—"the internet has become real life."

From instant messaging to digital finance, from health tracking to government services, and even to future brain-computer interfaces, digital technology is reshaping every dimension of human existence. Vitalik believes this trend is irreversible because civilizations that reject these technologies will lose competitiveness and sovereignty in global competition.

However, the proliferation of technology has brought about profound changes in power structures. The true beneficiaries of the technological wave are not the consumers of technology, but the producers. As we place more and more trust in technology, the consequences will be catastrophic if that trust is broken (such as through backdoors or security vulnerabilities). More importantly, even the mere possibility of trust being broken will force society back into exclusive trust models, leading one to question, "Is this thing built by someone I trust?"

Vitalik's solution is that we need to achieve two interrelated characteristics across the entire technology stack (software, hardware, and even biotechnology): true openness (open source, free licensing) and verifiability (ideally verifiable directly by end users).

The article illustrates how these two principles support each other in practice through specific examples, and why having only one is not enough. Below is the full translation.

Special thanks to Ahmed Ghappour, bunnie, Daniel Genkin, Graham Liu, Michael Gao, mlsudo, Tim Ansell, Quintus Kilbourn, Tina Zhen, Balvi volunteers, and GrapheneOS developers for their feedback and discussions.

Perhaps the biggest trend of this century can be summarized in this sentence: "the internet has become real life." It began with email and instant messaging. Private conversations that have taken place for thousands of years through mouth, ear, paper, and pen are now running on digital infrastructure. Then we had digital finance—both cryptocurrency finance and the digitization of traditional finance itself. Next came our health: thanks to smartphones, personal health tracking watches, and data inferred from purchase records, various information about our bodies is being processed through computers and computer networks. In the next twenty years, I expect this trend to take over various other fields, including various government processes (eventually even voting), monitoring physical and biological indicators and threats in public environments, and ultimately, through brain-computer interfaces, even our thoughts.

I do not believe these trends are avoidable; their benefits are too great, and in a highly competitive global environment, civilizations that reject these technologies will first lose competitiveness and then sovereignty, succumbing to those that embrace these technologies. However, in addition to providing powerful benefits, these technologies also profoundly affect power dynamics, both within and between nations.

The civilizations that benefit most from the new wave of technology are the ones that produce it, not the ones that merely consume it. Centralized projects that offer equal access to locked-down platforms and APIs can at best supply a small part of the solution, and they will fail in any case that falls outside the predetermined "normal" range. Moreover, this future involves placing a great deal of trust in technology. If that trust is broken (for example, through backdoors or security failures), we face real problems. Even the mere possibility of broken trust will force people back into fundamentally exclusive social trust models ("Is this thing built by someone I trust?"). This creates an incentive that propagates all the way up: the sovereign is he who decides on the exception.

Avoiding these issues requires the entire technology stack—software, hardware, and biotechnology—to possess two interwoven characteristics: true openness (i.e., open source, including free licensing) and verifiability (ideally, verifiable directly by end users).

The internet is real life. We hope it becomes a utopia, not a dystopia.

The Importance of Openness and Verifiability in Health

We saw the consequences of unequal access to the technological means of production during the COVID-19 pandemic. Vaccines were produced in only a few countries, leading to huge disparities in when they became available: wealthier countries received high-quality vaccines in 2021, while others received lower-quality vaccines in 2022 or 2023. There were initiatives that attempted to ensure equal access, but they could only do so much, because the vaccines were designed around capital-intensive, proprietary manufacturing processes that could be carried out in only a few places.

COVID vaccine coverage from 2021-23.

The second major issue with vaccines was the lack of transparency around the science, combined with communication strategies that tried to convince the public that vaccines carried zero risks or downsides. This was untrue, and it ultimately greatly exacerbated distrust. Today, that distrust has grown into a rejection of half a century of science.

In fact, both of these issues are solvable. Vaccines like those from PopVax, funded by Balvi, are cheaper to develop and use a more open manufacturing process, which reduces access inequality while also making it easier to analyze and verify their safety and efficacy. We can go further still and design vaccines for verifiability.

Similar issues apply to the digital aspects of biotechnology. When you talk to longevity researchers, one thing you will commonly hear is that the future of anti-aging medicine is personalized and data-driven. To know which drugs and nutritional changes to recommend to a person today, you need to understand their current bodily condition. If we can digitally collect and process vast amounts of data in real-time, it will be much more effective.

The data collected by this watch is a thousand times that of Worldcoin. This has its pros and cons.

The same idea applies to defensive biotechnology aimed at preventing downside risks, such as pandemic response. The earlier a pandemic is detected, the more likely it can be contained at the source—and failing that, every extra week of warning gives more time to prepare and begin working on countermeasures. While a pandemic is ongoing, knowing where people are getting sick is enormously valuable for deploying countermeasures in real time. If the average infected person learns to self-isolate within an hour of becoming contagious rather than three days later, transmission drops dramatically. If we know which 20% of locations account for 80% of transmission, improving air quality there yields further gains. All of this requires (i) lots of sensors, and (ii) sensors that can communicate in real time to feed information to other systems.
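To see why fast self-isolation matters so much, a toy compounding model helps (a deliberately crude sketch; `r0`, the 72-hour infectious window, and the linear scaling of transmission with time-to-isolation are all illustrative assumptions, not figures from the article):

```python
# Toy model: infections compound over "generations" of spread until each
# case self-isolates. All parameters are illustrative assumptions.

def total_infected(r0: float, isolation_fraction: float, generations: int) -> float:
    """Cumulative infections after `generations` rounds of spread.

    isolation_fraction: share of the infectious window that passes before
    the average case self-isolates (1.0 = never isolates in time).
    """
    effective_r = r0 * isolation_fraction   # fewer onward infections per case
    total = current = 1.0
    for _ in range(generations):
        current *= effective_r
        total += current
    return total

# Self-isolating after 1 hour vs. after 3 days (72 hours) of a 72-hour window:
fast = total_infected(r0=3.0, isolation_fraction=1 / 72, generations=10)
slow = total_infected(r0=3.0, isolation_fraction=1.0, generations=10)
print(fast < slow)  # early isolation yields vastly fewer total infections
```

Under these assumptions, early isolation keeps the outbreak near its initial size, while late isolation lets it grow geometrically.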

If we venture further into the "sci-fi" direction, we can see brain-computer interfaces enabling higher productivity, helping people communicate better through telepathy, and opening a safer path to highly intelligent artificial intelligence.

If the infrastructure for biological and health tracking (both personal and spatial) is proprietary, then the data defaults to large companies. Those companies can build all kinds of applications on top of it; others cannot. They may provide access through APIs, but API access will be restricted, used for monopolistic rent extraction, and revocable at any time. This means that a small number of individuals and companies end up controlling the most important building blocks of the 21st-century technology landscape, which in turn limits who can reap its economic benefits.

On the other hand, if this personal health data is not secure, then hackers who breach it can extort you over any health issue, optimize the pricing of insurance and healthcare products to extract value from you, and, if the data includes location tracking, know where to wait to kidnap you. Furthermore, your location data (which is breached remarkably often) can be used to infer information about your health. If your brain-computer interface is hacked, hostile actors are literally reading (or worse, writing) your thoughts. This is no longer science fiction: see here for a plausible attack scenario in which BCI hacking leads to someone losing motor control.

In summary, there are huge benefits, but also significant risks: a strong emphasis on openness and verifiability is very suitable for mitigating these risks.

The Importance of Openness and Verifiability in Personal and Commercial Digital Technologies

Earlier this month, I had to fill out and sign a form required for a legal function. I was not in the country at the time. There is a national electronic signature system, but I had not yet set it up. So I had to print the form, sign it, walk to a nearby DHL, spend a considerable amount of time filling out the paper waybill, and then pay to ship the form to the other side of the planet. Time required: half an hour; cost: $119. On the same day, I had to sign a (digital) transaction to execute a task on the Ethereum blockchain. Time required: 5 seconds; cost: $0.10 (and to be fair, without the blockchain the signature could be completely free).

Such stories can easily be found in areas like corporate or nonprofit governance, intellectual property management, and more. Over the past decade, you can find them in a significant portion of the promotional materials of blockchain startups. Beyond that, there is the mother of all use cases for "digitally exercising personal power": payments and finance.

Of course, all of this comes with significant risks: what if the software or hardware is hacked? The crypto space recognized this risk early on: blockchains are permissionless and decentralized, so if you lose access to your funds there is no recourse—no "uncle in the sky" to appeal to. Not your keys, not your coins. For this reason, the crypto space began thinking early about multisig wallets, social recovery wallets, and hardware wallets. In many cases, though, the absence of a trustworthy uncle in the sky is not an ideological choice but an inherent feature of the scenario. In fact, even in traditional finance, the uncle in the sky fails to protect most people: only 4% of scam victims, for example, ever recover their losses. In use cases involving custody of personal data, recovery after a leak is impossible even in principle. So we need true verifiability and security—of software, and ultimately hardware.
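The multisig and social-recovery wallets mentioned above boil down to a k-of-n guardian rule, which can be sketched in a few lines (an illustrative model, not any real wallet's implementation; the class and method names are hypothetical, and real wallets enforce this with on-chain signatures):

```python
# Minimal k-of-n social recovery sketch: the wallet's signing key can be
# rotated only once at least `threshold` of the chosen guardians approve
# the same replacement key. Purely illustrative.

class SocialRecoveryWallet:
    def __init__(self, owner_key: str, guardians: set[str], threshold: int):
        assert 0 < threshold <= len(guardians)
        self.owner_key = owner_key
        self.guardians = guardians
        self.threshold = threshold
        self.approvals: dict[str, set[str]] = {}  # proposed key -> approvers

    def approve_recovery(self, guardian: str, new_key: str) -> bool:
        """Record a guardian's approval; rotate the key once threshold is met."""
        if guardian not in self.guardians:
            raise PermissionError("not a guardian")
        voters = self.approvals.setdefault(new_key, set())
        voters.add(guardian)
        if len(voters) >= self.threshold:
            self.owner_key = new_key  # recovery complete
            self.approvals.clear()
            return True
        return False

wallet = SocialRecoveryWallet("old-key", {"alice", "bob", "carol"}, threshold=2)
wallet.approve_recovery("alice", "new-key")          # 1 of 2 approvals
recovered = wallet.approve_recovery("bob", "new-key")  # threshold reached
print(recovered, wallet.owner_key)
```

The design choice being illustrated: no single party (not even a guardian) can move funds alone, yet losing the device is recoverable—exactly the trade-off that replaces the "uncle in the sky."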

A proposed check for ensuring the correct manufacturing of computer chips.

Importantly, on the hardware side, the risks we are trying to prevent go far beyond "is the manufacturer evil?" The problem is rather the sheer number of dependencies, most of them closed source, where any single oversight can lead to unacceptable security outcomes. This article shows recent examples of how microarchitectural choices can undermine side-channel resistance that is provably secure in models that consider only software. Attacks like EUCLEAK rely on vulnerabilities that are harder to detect precisely because so many components are proprietary. And if an AI model is trained on compromised hardware, backdoors can be inserted during training.

Another issue in all these cases is the drawbacks of closed and centralized systems, even if they are completely secure. Centralization creates persistent influence between individuals, companies, or nations: if your core infrastructure is built and maintained by a potentially untrustworthy company in a potentially untrustworthy country, you are easily susceptible to pressure (for example, see Henry Farrell on the weaponization of interdependence). This is the problem that cryptocurrencies aim to solve—but it exists in many more areas beyond finance.

The Importance of Openness and Verifiability in Digital Civic Technologies

I often talk to people from various fields who are trying to figure out better forms of governance suited to the varied environments of the 21st century. Some, like Audrey Tang, are trying to take political systems that already work to the next level, empowering local open-source communities and using mechanisms like citizens' assemblies, sortition, and ranked voting. Others are starting from scratch: here is a constitution recently proposed by some Russian-born political scientists that provides strong guarantees of individual freedom and local autonomy, a strong institutional bias toward peace and against aggression, and an unprecedentedly strong role for direct democracy. Still others, like the economists working on land value taxes (here) or congestion pricing, are striving to improve their countries' economies.

Different people may have varying degrees of enthusiasm for each idea. But they all share one commonality: they all involve high-bandwidth participation, so any realistic implementation must be digital. Pen and paper can be used for very basic records of who owns what and elections held every four years, but they are inadequate for anything that requires us to input at a higher bandwidth or frequency.

However, historically, security researchers' acceptance of ideas like electronic voting has ranged from skepticism to hostility. Here is a good summary of the case against electronic voting. Quoting from that document:

First, the technology is "black box software": the public has no access to the software that controls the voting machines. While companies protect their software to prevent fraud (and to stay ahead of competitors), this also leaves the public with no way to know how the voting software works. A company could easily manipulate the software to produce fraudulent results. Moreover, the vendors selling the machines are competing businesses, and there is no guarantee that they build machines that serve voters' best interests and the accuracy of ballots.

There are many real-world cases that demonstrate this skepticism is warranted.

A critical analysis of Estonia's internet voting from 2014.

These arguments apply verbatim to many other situations. But I predict that as technology advances, the "we simply don't do that" reaction will become increasingly unrealistic across a wide range of fields. The world is rapidly becoming more efficient (for better or worse), and any system that does not keep up will become less and less relevant to individual and collective affairs as people route around it. So we need an alternative: actually do the hard thing, and figure out how to make sophisticated technological solutions secure and verifiable.

In theory, "secure and verifiable" and "open source" are two different things. Some things can absolutely be proprietary and secure: airplanes are highly proprietary technology, yet commercial aviation is overall a very safe way to travel. What proprietary models cannot achieve, however, is common knowledge of security—security that participants who do not trust each other can all be confident in.

Civic systems like elections are one common scenario where common knowledge of security is crucial. Another is the collection of evidence for courts. Recently in Massachusetts, a large amount of breathalyzer evidence was ruled inadmissible because information about malfunctions in the tests was found to have been covered up. Quoting the article:

So, are all the results wrong? No. In fact, in most cases, there are no calibration issues with breathalyzer tests. However, because investigators later discovered that the state crime lab withheld evidence indicating that the problems were more widespread than they had claimed, Judge Frank Gaziano wrote that the due process rights of all these defendants were violated.

Due process in the courts is precisely a domain that requires not just fairness and accuracy, but common knowledge of fairness and accuracy—because without common knowledge that the courts are doing the right thing, society can easily slide into people taking matters into their own hands.

In addition to verifiability, openness has inherent benefits of its own. Openness allows local communities to design systems for governance, identity, and other needs in ways compatible with local goals. If a voting system is proprietary, then a country (or province or town) wanting to try a new system faces difficulties: it must either persuade the company to implement its preferred rules as a feature, or start from scratch and do all the work of securing the system itself. This raises the cost of innovating on political systems.

In any of these areas, a more open-source, hacker-ethic approach would give local implementers more power, whether they act as individuals or as parts of governments or companies. For this to work, the tools used for building need to be widely available, and the infrastructure and codebases need to be freely licensed so that others can build on them. Given the goal of minimizing power differentials, copyleft licensing is particularly valuable.

The final area of crucial civic technology in the coming years is physical security. Surveillance cameras have been popping up everywhere over the past two decades, raising many civil-liberties concerns. Unfortunately, I predict that recent drone warfare makes "just don't do high-tech security" no longer a viable option. Even if your own country's laws do not infringe your freedoms, that matters little if the state cannot protect you from other countries (or rogue companies or individuals) imposing their will on you—and drones make such attacks easier. So we need countermeasures, which will likely involve large numbers of counter-drone systems, sensors, and cameras.

If these tools are proprietary, then data collection will be opaque and centralized. If these tools are open and verifiable, then we have the opportunity to adopt better methods: secure devices can be proven to output only a limited amount of data under constrained circumstances and delete the rest. We can have a digitized physical security future that resembles a digital watchdog rather than a digital panopticon. One can imagine a world where public surveillance devices are required to be open source and verifiable, and anyone has the legal right to randomly select surveillance devices in public spaces, disassemble them, and verify them. University computer science clubs could often use this as an educational exercise.

Open Source and Verifiable Approaches

We cannot avoid the fact that digital computing is deeply rooted in various aspects of our (individual and collective) lives. By default, we may end up with digital computers built and operated by centralized companies, optimized for the profit motives of a few, with backdoors provided by their host governments, and most people in the world unable to participate in their creation or know whether they are secure. But we can try to shift towards better alternatives.

Imagine a world where:

  • You have a secure personal electronic device—with the functionality of a phone, the security and verifiability level of an encrypted hardware wallet, not quite like a mechanical watch, but very close.
  • Your messaging applications are all encrypted, message patterns are obfuscated by mixing networks, and all code is formally verified. You can be confident that your private communications are indeed private.
  • Your finances are standardized ERC-20 assets on-chain (or on a server that publishes hashes and proofs to the chain to guarantee correctness), managed by a wallet controlled by your personal electronic device. If you lose the device, they can be recovered through some combination of your other devices, family, friends, or institutions (not necessarily the government: if anyone can easily do this, churches are also likely to provide) of your choosing.
  • There exists an open-source version of infrastructure similar to Starlink, allowing us to achieve robust global connectivity without relying on a few individual participants.
  • You have an open-weight LLM on your device scanning your activities, providing suggestions and automating tasks, and alerting you when you might receive misinformation or are about to make a mistake.
  • The operating system is also open source and formally verified.
  • You wear a 24/7 personal health tracking device that is also open source and verifiable, allowing you to access data and ensure that no one else obtains it without your consent.
  • We have more advanced forms of governance, using sortition, citizens' assemblies, ranked voting, and a clever combination of democratic voting to set goals with methods for drawing on experts' ideas about how to achieve them. As a participant, you can actually be confident that the system is implementing the rules you understand.
  • Public spaces are equipped with monitoring devices to track biological variables (e.g., CO2 and AQI levels, the presence of air-borne diseases, wastewater). However, these devices (as well as any surveillance cameras and defensive drones) are open source and verifiable, and there exists a legal framework that allows the public to randomly inspect them.
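The "server that publishes hashes and proofs to the chain" pattern from the finance bullet above can be sketched with a plain SHA-256 commitment (a minimal sketch; real systems typically use Merkle trees so individual accounts can be proven without revealing everything, and the on-chain publication step is assumed rather than shown):

```python
import hashlib
import json

# A server holding off-chain account state commits to it by publishing a
# hash on-chain. Anyone with the full state can recompute the hash and
# check that it matches; the server cannot silently rewrite history.
# Illustrative sketch only.

def commit(state: dict) -> str:
    """Deterministic SHA-256 commitment to a state snapshot."""
    canonical = json.dumps(state, sort_keys=True).encode()  # stable encoding
    return hashlib.sha256(canonical).hexdigest()

state = {"alice": 100, "bob": 50}
published = commit(state)  # imagine this hex digest posted on-chain

# Later, a verifier with a copy of the state checks it against the chain:
assert commit({"bob": 50, "alice": 100}) == published   # key order irrelevant
assert commit({"alice": 100, "bob": 49}) != published   # tampering detected
```

The key property: the commitment is deterministic over content, so any change to any balance produces a different hash.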

In this world, we have more security and freedom than we do today, and we participate in the global economy on equal terms. But achieving this world requires more investment in a range of technologies:

  • More advanced forms of cryptography. What I call the "Egyptian god cards" of cryptography—ZK-SNARKs, fully homomorphic encryption, and obfuscation—are powerful because they let you guarantee outputs in multi-party settings while keeping the underlying data and computation private. This enables far more powerful privacy-preserving applications. Tools adjacent to cryptography also apply here: blockchains let applications strongly guarantee that data is not tampered with and that users are not excluded, while differential privacy adds noise to data to further protect individual privacy.
  • Application and user-level security. Applications are only secure when the security guarantees they provide can actually be understood and verified by users. This will involve software frameworks that make it easy to build applications with strong security properties. Importantly, it will also involve browsers, operating systems, and other intermediaries (like locally running observer LLMs) doing their part to verify applications, assess their risk levels, and present that information to users.
  • Formal verification. We can use automated proof tools to algorithmically check whether programs satisfy the properties we care about—for example, that they never leak data and are not susceptible to unauthorized modification by third parties. Lean has recently become a popular language for this. These techniques are already being used to verify ZK-SNARK provers for the Ethereum Virtual Machine (EVM) and other high-value, high-risk use cases in crypto, and they apply equally in the broader world. Beyond that, we also need further progress in other, more mundane security practices.
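Of the tools listed above, differential privacy is the easiest to illustrate concretely: noise drawn from a Laplace distribution is added to an aggregate query so that any one person's presence has only a bounded effect on the output. Below is a minimal sketch; the epsilon value, the query, and the data are all illustrative assumptions:

```python
import math
import random

# Differentially private count: add Laplace noise calibrated to the query's
# sensitivity (adding or removing one person changes a count by at most 1).
# Smaller epsilon = more noise = stronger privacy. Illustrative sketch.

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5              # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Count matching records, plus Laplace noise of scale sensitivity/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0                      # one person shifts a count by at most 1
    return true_count + laplace_noise(sensitivity / epsilon)

records = [{"sick": True}] * 30 + [{"sick": False}] * 70
noisy = dp_count(records, lambda r: r["sick"], epsilon=1.0)
print(round(noisy, 1))  # near 30, but randomized to protect individuals
```

A public-health dashboard built this way could report "roughly 30 cases in this district" without any published number pinning down whether a specific person is in the data.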

The cybersecurity defeatism of the 2000s was wrong: bugs (and backdoors) can be defeated. We "just" have to learn to prioritize security over other competing objectives.

  • Open-source and security-focused operating systems. More and more options are emerging: GrapheneOS as a security-centric version of Android, minimal kernels like Asterinas that focus on security, and Huawei's HarmonyOS (which has an open-source version) that uses formal verification (I expect many readers will think, "If it's Huawei, it must have a backdoor," but this misses the point: who produces the technology doesn't matter as long as it is open and anyone can verify it. This is a good example of how openness and verifiability can counter global balkanization.)
  • Secure open-source hardware. If you cannot be sure that your hardware is actually running the software it claims to run, and is not leaking data elsewhere, then no software is secure. I am particularly interested in two short-term goals in this area:
      ◦ Personal secure electronic devices—the "hardware wallets" blockchain enthusiasts talk about and the "secure phones" open-source enthusiasts refer to, which will ultimately converge into the same thing once you think through the requirements for security and generality.
      ◦ Physical infrastructure in public spaces—smart locks, the biometric monitoring devices described above, and general "Internet of Things" devices. We need to be able to trust them, and that requires openness and verifiability.
  • A secure open toolchain for building open-source hardware. Today, hardware design relies on a series of closed-source dependencies. This significantly increases the cost of manufacturing hardware and makes the process more permissioned. It also makes hardware verification impractical: if the tools that generate chip designs are closed-source, you don't know what you are verifying against. Even tools like scan chains that exist today are often unusable in practice because too many necessary tools are closed-source. All of this can change.
  • Hardware verification (e.g., IRIS and X-ray scanning). We need ways to scan chips to verify that they contain the logic they are supposed to contain, and no extra components that enable unexpected forms of tampering or data extraction. This can be done destructively: auditors randomly order products containing the chips (posing as ordinary end users), then take the chips apart and verify that the logic matches. With IRIS or X-ray scanning it can be done non-destructively, opening up the possibility of scanning every chip.
  • To achieve common knowledge of trust, we ideally want hardware-verification technology that large numbers of people can use. Today's X-ray machines are not there yet. The situation can improve in two ways. First, we can improve the verification equipment (and the verification-friendliness of chips) so that the equipment becomes more widely available. Second, we can supplement "full verification" with more limited forms of verification that can even be done on a smartphone (e.g., checking key signatures generated from ID tags and physical unclonable functions), verifying narrower claims such as "Is this chip part of a batch produced by a known manufacturer, from which random samples have been thoroughly verified by a third-party group?"
  • Open-source, low-cost, local environmental and biometric monitoring devices. Communities and individuals should be able to measure their environment and themselves and identify biological risks. This includes various form factors of technology: personal-scale medical devices like OpenWater, air quality sensors, general airborne disease sensors (e.g., Varro), and larger-scale environmental monitoring.
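The random-sampling approach in the hardware-verification bullets above can be quantified with back-of-the-envelope arithmetic: if a fraction p of a batch is tampered with and n chips are inspected at random, the probability that every bad chip escapes inspection is roughly (1 − p)^n. The numbers below are illustrative, and independence of draws is an assumption:

```python
# Probability that destructive random sampling misses every tampered chip,
# assuming a fraction `bad_rate` of the batch is compromised and `samples`
# chips are drawn independently. Numbers are illustrative.

def miss_probability(bad_rate: float, samples: int) -> float:
    return (1.0 - bad_rate) ** samples

# If 1% of a batch is backdoored:
for n in (10, 100, 500):
    print(n, round(miss_probability(0.01, n), 4))
# A few hundred random teardowns make escaping detection very unlikely.
```

This is why spot-checking a modest random sample, combined with per-chip batch attestation, can raise the cost of shipping backdoored hardware without scanning every unit.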

The openness and verifiability of every layer of the stack are important.

From Here to There

A key difference between this vision and more "traditional" technological visions is that it is friendlier to local sovereignty, individual empowerment, and freedom. Security is achieved not by scouring the entire world to ensure there are no bad actors anywhere, but by making the world stronger at every level. Openness means every layer of technology is openly built and improved, not merely centrally planned with open-access API programs. And verification is not the exclusive domain of proprietary rubber-stamp auditors likely colluding with the companies and governments rolling out the technology; it is a right of the people, and a hobby that society encourages.

I believe this vision is more robust and better suited to our fragmented, global 21st century. But we do not have unlimited time to realize it. Centralized approaches to security are advancing rapidly, including ever more centralized data collection and backdoors, and reducing "verification" entirely to "this was made by a trusted developer or manufacturer." Centralized would-be alternatives to truly open access have been attempted for decades—arguably starting with Facebook's internet.org—and the attempts will continue, each more sophisticated than the last. We need to move quickly to compete with these approaches, while publicly demonstrating to people and institutions that better solutions are possible.

If we succeed in realizing this vision, one way to understand the world we get is that it is retro-futuristic. On one hand, we benefit from more powerful technologies that let us improve our health, organize ourselves more effectively and resiliently, and protect ourselves from new and old threats. On the other hand, we get back properties that were second nature to everyone in 1900: infrastructure is open; anyone can take it apart, verify it, and modify it to meet their needs; and anyone can participate not just as a consumer or an "app builder," but at any layer of the stack, confident that devices do what they claim to do.

Designing for verifiability has costs: many hardware and software optimizations deliver speed at the price of making the design harder to understand or more fragile, and open source makes many standard business models harder to monetize. I believe both problems are exaggerated—but the world will not be convinced of that overnight. This raises the question: what are the pragmatic short-term goals?

I will propose an answer: to strive to build a fully open-source and verification-friendly stack aimed at high-security, non-performance-critical applications—including consumer and institutional, remote and face-to-face applications. This will include hardware, software, and biotechnology.

Most computing that truly needs security does not actually require speed, and even in cases where speed is needed, there are often ways to combine high-performance but untrusted components with trusted but low-performance components to achieve high levels of performance and trust for many applications. Achieving maximum security and openness is not realistic for everything. But we can first ensure that these attributes are available in truly important areas.
