Polyhedra introduces Trusted Execution Environment (TEE), enhancing cross-chain and verifiable AI security.

Author: Weikeng Chen

Original link: https://blog.polyhedra.network/tee-in-polyhedra/

Polyhedra is introducing a new layer of security mechanisms for its cross-chain bridging protocol, oracle system, and verifiable AI marketplace, built on a Trusted Execution Environment (TEE) implemented with Google Confidential Computing. After extensive research into current mainstream TEE solutions, Polyhedra has chosen to build its TEE security module on Google Confidential Space and has pioneered a proof mechanism that combines zero-knowledge proofs with TEE (ZK-TEE), enabling computation results produced on Google Cloud to be verified on EVM chains and opening a new path for trusted computing and blockchain-native interoperability.

This security layer will be gradually deployed across multiple core products based on zero-knowledge proofs under Polyhedra, covering cross-chain interoperability systems among various chains. Additionally, Polyhedra plans to natively integrate TEE proof capabilities and TEE security protection for AI applications into its self-developed EXPchain through precompiled contracts.

What is TEE?

TEE, short for Trusted Execution Environment, is a CPU security technology. It allows the CPU to perform computations in encrypted, integrity-protected memory that even the cloud service provider (such as Google Cloud), the operating system, and other software running on the same machine cannot inspect.

In other words, TEE can ensure the confidentiality and security of data at the hardware level during use.

This technology has actually been widely used. For example, Apple's devices have full disk encryption enabled by default (also known as “Data Protection”), which is implemented based on TEE on Apple chips. Sensitive information such as passwords and keys stored on the device can only be accessed after the user unlocks the device with a fingerprint or password. Microsoft's Windows system also supports full disk encryption (using BitLocker) with TEE support in recent versions. This means that the disk will only unlock if the operating system and boot process have not been tampered with.

Polyhedra's TEE Technology Vision: Building a Secure and Trustworthy Next-Generation Internet Infrastructure

Since last year, Polyhedra has been focusing on the security, trustworthiness, and verifiability of cross-chain interoperability and AI across three core dimensions. We are advancing the development of multiple products, some of which have been officially released. Overall, Polyhedra's core focus encompasses three key directions:

  • Cross-chain bridging protocol

  • ZKML and verifiable AI agents

  • Verifiable AI marketplace, including MCP (Model Context Protocol) servers

Security has always been Polyhedra's top priority and the founding team's original motivation for establishing Polyhedra Network. Through deVirgo, we have achieved verifiability of underlying consensus mechanisms, including full consensus verification of Ethereum.

At the same time, many of the chains supported by Polyhedra zkBridge adopt BFT-style PoS consensus, which is comparatively easier to verify. While ensuring system security, we also recognize that introducing a Trusted Execution Environment (TEE) is crucial for improving user experience: TEE enables lower costs, faster finality confirmation, stronger off-chain interoperability, and better data privacy, making it an important complement to our product line. TEE will become a key part of our security architecture and an accelerator for the future development of cross-chain and AI.

Lower Costs: Polyhedra's Cost Reduction Strategy in ZK Systems

Polyhedra has been committed to reducing cross-chain bridging costs through various technical paths. This cost mainly arises from generating zero-knowledge proofs and verifying them on the destination chain; verification costs vary significantly across blockchains, and Ethereum's verification fees are typically high. In current network operation, Polyhedra primarily optimizes costs through a batching mechanism: in zkBridge, the core "block synchronization" step is not executed for every block but is performed once every several blocks, with the interval dynamically adjusted based on on-chain activity, effectively reducing overall costs.
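The dynamic batching idea can be sketched in a few lines. This is an illustrative heuristic, not Polyhedra's actual scheduler; the function name, parameters, and scaling rule are all invented for the example. The point is only the shape of the policy: sync rarely when the chain is quiet, and shrink the interval as the backlog of pending cross-chain events grows, so per-event proof cost stays roughly flat.

```python
# Illustrative sketch (not Polyhedra's implementation) of dynamic sync batching:
# fewer pending events -> longer interval between block synchronizations.

def next_sync_interval(pending_events: int,
                       min_interval: int = 4,
                       max_interval: int = 64,
                       target_batch: int = 100) -> int:
    """Pick how many blocks to wait before the next header sync.

    All parameter values here are made up for the example; a real bridge
    would tune them from proof-generation and gas-cost measurements.
    """
    if pending_events == 0:
        return max_interval  # quiet period: sync as rarely as allowed
    # Scale the interval down as the backlog approaches the target batch size.
    interval = max_interval * target_batch // (target_batch + pending_events * 10)
    # Clamp so a lone user never waits forever and busy periods don't thrash.
    return max(min_interval, min(max_interval, interval))
```

Calling `next_sync_interval(0)` returns the maximum interval, while a large backlog drives the result down to the minimum, mirroring the "directly trigger synchronization for a lone user" behavior described below.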

However, during quiet periods (for example, early morning on a given chain), a user may be the only one initiating a cross-chain operation. To avoid making such users wait too long, zkBridge will directly trigger synchronization, generate proofs, and complete verification, which incurs additional costs. These costs may sometimes be borne by the user or amortized across other users' transaction fees. For large cross-chain transactions, proof costs are almost unavoidable if security is to be maintained. But for small transactions, we are exploring a new mechanism: Polyhedra will pre-fund liquidity and bear some risk within a controllable range, providing users with a faster and cheaper cross-chain experience.

In addition to cross-chain bridges, Polyhedra is also continuously optimizing the generation and verification costs of zkML. Its flagship product, the Expander library, has been widely used by other teams in ZK machine learning scenarios and has made significant progress in vectorization, parallel computing, and GPU acceleration. Furthermore, its proof system has undergone multiple rounds of optimization, significantly reducing proof generation overhead. In terms of on-chain verification, Polyhedra is deploying a precompiled module for zkML verification in its self-developed public chain EXPchain, which is expected to go live in the next phase of the testnet, enabling efficient verification of zkML proofs and plans to promote it to more blockchain ecosystems.

Although Polyhedra has generated proofs for Llama models with up to 8 billion parameters, proof generation is not yet instant. For larger models, especially image or video generation models, proving time remains long. Polyhedra therefore runs its AI agent systems under an optimistic execution architecture: if a user encounters a malicious operation, they can file a challenge on-chain backed by a zkML proof and penalize the operator. Proofs are generated only when challenged rather than for every inference, which keeps proof costs acceptable, but agent operators must lock a certain amount of capital as insurance, creating capital-occupation pressure.

Therefore, for users running ultra-large models, requiring higher security, or expecting lower costs, introducing another layer of security mechanism (such as TEE) will be crucial. TEE can not only ensure the trustworthiness of on-chain AI applications (such as trading bots) but also technically enhance the system's resistance to attacks, thereby reducing the scale of required insurance capital.

Fast Finality: Addressing Rollup On-Chain Delay Challenges

Polyhedra is also continuously advancing its "Fast Finality" capability, especially to address the prolonged settlement cycles of some Rollups on Ethereum L1. Since Rollups rely on Ethereum L1's state consensus to inherit its security, finality delays affect user experience and interaction efficiency. The issue is particularly evident in optimistic Rollups (such as Optimism and Arbitrum), where withdrawal periods typically last up to 7 days, clearly failing to meet most users' real-time needs. zkRollups offer stronger security, yet many projects still batch their submissions at intervals ranging from 30 minutes to 10-12 hours, which also introduces delay.

To solve the cross-chain interoperability problem, Polyhedra has adopted a State Committee mechanism combined with zero-knowledge proofs (ZK proofs) in its integration with Arbitrum and Optimism. The same technology has also been deployed on opBNB. This solution runs full-node clients for these Rollups across multiple machines, whose main task is to obtain the latest block data from the official RPC API. Where possible, we introduce RPC diversity to improve the system's security and availability. Each machine signs the events in the bridging contract that are about to be transmitted cross-chain, and the signatures are ultimately aggregated into a single ZK proof verifiable on-chain. Signature aggregation was chosen so that more verification nodes can participate, enhancing decentralization.

The State Committee system has been running stably for about a year. However, it is important to note that the ZK-aggregated signatures generated by the State Committee are not as secure as complete ZK proofs of the entire consensus process. Therefore, we restrict this solution within the fast-confirmation mechanism: it is only suitable for cross-chain transfers of small amounts; for large assets, Polyhedra recommends that users use the official L2-to-L1 bridging channels for stronger security guarantees.
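A toy model makes the committee flow above concrete: each member signs a cross-chain event, and the event is accepted only once a quorum of valid signatures is collected. Real deployments use BLS or ECDSA signatures aggregated inside a ZK proof; the HMAC "signatures" below are just a standard-library stand-in, and all names are hypothetical.

```python
# Toy State-Committee quorum check. HMAC stands in for real digital
# signatures purely so the sketch runs with the standard library.
import hashlib
import hmac

def sign_event(member_key: bytes, event: bytes) -> bytes:
    """A committee member's 'signature' over a cross-chain event."""
    return hmac.new(member_key, event, hashlib.sha256).digest()

def quorum_reached(event: bytes, sigs: dict, keys: dict, threshold: int) -> bool:
    """Count members whose signature over `event` verifies against their
    registered key, and compare the count to the required threshold."""
    valid = sum(
        1
        for member, sig in sigs.items()
        if member in keys
        and hmac.compare_digest(sig, sign_event(keys[member], event))
    )
    return valid >= threshold
```

In the real system the verification step happens inside a ZK circuit, so the chain only checks one succinct proof rather than N signatures; the threshold logic is the same.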

In the ZKML scenario, especially in cases requiring instant execution (such as AI trading bots), achieving "fast finality" is particularly crucial. To this end, Polyhedra is exploring the introduction of TEE (Trusted Execution Environment) into its verifiable AI technology stack as a solution, allowing the AI inference process to run in a TEE-enabled computing environment, ensuring the trustworthiness of data and the verifiability of execution results.

We plan to utilize the model library of Google Vertex AI to prove that the output of a certain model indeed originates from the Vertex AI API call, or to demonstrate through TEE that the results come from the official ChatGPT or DeepSeek API services. While this requires a certain level of trust in the platform providers (such as Google and OpenAI), we believe this is an acceptable engineering assumption, especially when used in conjunction with purely on-chain computations in ZKML.

If users wish to run custom models, we can also deploy them on TEE-capable Nvidia GPU instances (recently made available on Google Cloud). This mechanism can be used in parallel with ZKML proofs: ZK proofs can be generated when the system is challenged, or generated with a delay as a supplementary insurance mechanism. For example, in the insurance mechanism for AI trading bots or agents, the operator can generate ZKML proofs before the coverage amount reaches its limit, allowing security deposits to be released, thereby increasing the agent system's transaction throughput under the original coverage amount.

Off-Chain Interoperability: Connecting Trustworthy Channels Between On-Chain and the Real World

Polyhedra has been exploring the application of zero-knowledge proofs (ZKP) in off-chain scenarios, with representative cases including the proof-of-reserve system for centralized exchanges (CEX), which achieves auditability through privacy-preserving verification of databases. Additionally, we are actively promoting interoperability between chains and off-chain systems, such as providing trustworthy oracles for AI trading bots and for real-world asset (RWA) prices like stocks, gold, and silver, or enabling on-chain identity authentication through social logins such as Google Login and Auth0.

Off-chain data mainly falls into two categories:

  1. JWT (JSON Web Token) signed data: can be directly verified on the EVM (although gas costs are high), or verified after being wrapped in ZK proofs. Polyhedra adopts the latter approach.

  2. TLS (Transport Layer Security) data: can be proven through ZK-TLS, but current technology requires users to trust the MPC nodes used to reconstruct the TLS keys. ZK-TLS performs well when handling simple web pages or API data, but incurs higher costs when dealing with complex web pages or PDF documents.
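As a concrete illustration of the first category, the check behind "JWT-signed data" can be shown with the standard library alone. This sketch verifies an HS256 (HMAC-SHA256) token with a shared secret; real identity tokens such as Google Login's use RS256 with published public keys, and Polyhedra wraps the verification in a ZK proof rather than running it directly, so treat this purely as a picture of what gets verified.

```python
# Minimal HS256 JWT verification using only the standard library.
# Real-world identity JWTs are typically RS256; the structure
# (base64url header/payload/signature over "header.payload") is the same.
import base64
import hashlib
import hmac
import json

def b64url_decode(s: str) -> bytes:
    # JWTs strip base64 padding; restore it before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_hs256_jwt(token: str, secret: bytes) -> dict:
    """Check the HMAC over header.payload and return the decoded claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

Doing this check inside an EVM contract is expensive (base64 decoding plus signature verification in gas), which is why the text above notes that Polyhedra instead proves the verification inside a ZK proof and posts only the succinct proof on-chain.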

In this context, Polyhedra introduces the ZK-TEE solution. We can run a TLS client in a Trusted Execution Environment (TEE), generate trustworthy computing proofs through Google Confidential Computing, and then convert them into ZK-TEE proofs on-chain, achieving secure reading and verification of off-chain data.

This TLS client is a general architecture, running efficiently and supporting almost all TLS connection scenarios, including but not limited to:

  • Accessing financial websites like Nasdaq to obtain stock prices

  • Executing buy and sell trades on behalf of users in stock accounts

  • Conducting fiat currency transfers through online banking, achieving "cross-domain bridging" with traditional bank accounts

  • Searching and booking flights and hotels

  • Real-time retrieval of cryptocurrency prices from multiple centralized exchanges (CEX) and decentralized exchanges (DEX)

In AI scenarios, the trustworthiness of off-chain data is particularly important. Current large language models (LLMs) not only receive user input but also dynamically obtain external data using search engines or methods like LangGraph and Model Context Protocol (MCP). Through TEE, we can verify the authenticity of these data sources. For example, when an AI agent solves mathematical problems, it can call the Wolfram Mathematica or remote Wolfram Alpha API services, using TEE to ensure the integrity of these calls and results.

Privacy Protection: Building a Trustworthy AI Inference Environment

Currently, zkBridge mainly utilizes ZK proofs to enhance security and has not integrated with privacy chains. However, with the rise of AI applications (such as on-chain AI agents and trading bots), privacy has become a new core demand. We are deeply exploring several key application scenarios:

In the field of zero-knowledge machine learning (ZKML), one of the core applications is verifying correct inference of private models. These models typically keep their parameters confidential (users do not need to know them), and sometimes even commercial secrets like the model architecture are hidden. Private models are very common: OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini all fall into this category. Keeping the most advanced models closed source is necessary to recoup high training and R&D costs through commercial revenue, a situation expected to continue for several years.

While privatization has its rationale, in automated environments, when model outputs directly trigger on-chain operations (such as token trading), especially involving large amounts of money, users often require stronger traceability and verifiability guarantees.

ZKML addresses this issue by proving that the model maintains consistency in benchmark tests and actual inference. This is particularly important for AI trading bots: after users select a model based on historical backtesting data, they can ensure that the model continues to run with the same parameters without needing to know the specific parameter details, thus perfectly balancing the need for verification with privacy protection.
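The "same model in backtest and in live inference" guarantee boils down to a binding commitment to the model parameters. The sketch below uses a plain hash commitment to show the binding; ZKML achieves the same binding in zero-knowledge (the verifier never sees the parameters, only a proof that the committed parameters were used). Function names and the JSON-canonicalization choice are illustrative assumptions, not Polyhedra's scheme.

```python
# Hash-commitment sketch of model-parameter consistency: the operator
# publishes commit_to_model(params) once; every later inference must be
# shown to use parameters matching that same commitment.
import hashlib
import json

def commit_to_model(params: dict) -> str:
    # Canonicalize so that logically identical parameter sets hash equally.
    canonical = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def inference_matches_commitment(params: dict, published: str) -> bool:
    """True iff these parameters are the ones the operator committed to."""
    return commit_to_model(params) == published
```

In the ZKML setting the second function is replaced by a proof that opens the commitment inside the circuit, so users verify consistency without ever learning the weights.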

We are also exploring Trusted Execution Environment (TEE) technology, as it can provide privacy for user inputs that ZK alone cannot: ZKML requires the proving party to possess all information, including model parameters and user inputs. Although it is theoretically possible to combine zero-knowledge proofs with multi-party computation (MPC), this combination would slow proving dramatically for large models — not only the model inference but the entire proof process would have to run inside the MPC. Additionally, MPC itself carries the risk of node collusion. TEE can effectively address these issues.

TEE also plays a key role in privacy protection for MCP (Model Context Protocol) servers. Polyhedra is developing a "verifiable MCP marketplace" that will list MCP servers that achieve verifiability, traceability, and security through ZKP or TEE. When models run in a Proof Cloud equipped with TEE and only call MCP services carrying a "privacy" certification, we can ensure that user input data always stays encrypted within the TEE environment and is never leaked.

How Does TEE Work?

In the previous sections, we have discussed Polyhedra's technological vision and how Trusted Execution Environment (TEE) works alongside zero-knowledge proofs (ZKP) as a key pillar of our product system. Next, we will delve deeper into how TEE operates.

TEE achieves fully enclosed protection of computation and data by creating a secure "enclave," but this is just the basic capability. Its revolutionary value lies in achieving publicly verifiable security through a "remote attestation" mechanism.

The workflow of remote attestation includes three key stages:

  • Enclave initialization phase: The CPU performs integrity measurements on the executable program binary code within the secure enclave.

  • Proof generation phase: Generates publicly verifiable proofs through AMD Key Distribution Service (KDS) or Intel Attestation Service (IAS).

  • Certificate chain verification phase: This proof contains signatures and a certificate chain, with the root certificates issued by AMD or Intel.

When the proof file can be verified through the root certificate, two core facts can be confirmed:

  • The computation is indeed executed in an enclave equipped with TEE technology on AMD/Intel chips.

  • The program information contained in the signed content, such as model outputs and other key data, is authentic and trustworthy.
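The three-stage flow above can be pictured as a small verifier. This is a conceptual sketch only: real SEV-SNP/TDX attestation reports are signed X.509/ASN.1 structures checked with ECDSA against AMD KDS or Intel root certificates, whereas here plain dictionaries and string comparisons stand in for certificates and signatures, and the pinned root value is invented.

```python
# Conceptual attestation verifier: (1) walk the certificate chain up to a
# pinned vendor root, (2) compare the attested measurement against the
# expected program hash. Hashes/strings stand in for real signatures.
import hashlib

# Stand-in for the AMD/Intel root-of-trust public key a verifier pins.
VENDOR_ROOT = hashlib.sha256(b"amd-or-intel-root-key").hexdigest()

def verify_chain(chain: list) -> bool:
    """Each cert is {'key': ..., 'signed_by': ...}, leaf first, root last.
    Every cert must be signed by the next one, ending at the pinned root."""
    for child, parent in zip(chain, chain[1:]):
        if child["signed_by"] != parent["key"]:
            return False
    return chain[-1]["key"] == VENDOR_ROOT

def verify_attestation(report: dict, chain: list,
                       expected_measurement: str) -> bool:
    """Accept only if the chain is rooted correctly AND the enclave's
    measured binary matches the published, expected measurement."""
    return verify_chain(chain) and report["measurement"] == expected_measurement
```

Polyhedra's ZK-TEE step, described next, compresses exactly this kind of check into a succinct proof so an EVM contract does not have to parse certificates on-chain.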

Polyhedra's innovative breakthrough lies in compressing the TEE attestation into a streamlined proof that can be efficiently verified on-chain through ZK-TEE proof technology. Starting with zkBridge, we will soon demonstrate how this technology provides security guarantees across multiple products.

SGX, SEV, and TDX: Choosing and Comparing TEE Technologies

In building a product system supported by Trusted Execution Environment (TEE), Polyhedra has conducted in-depth research on the three mainstream TEE implementations currently available: Intel SGX, AMD SEV, and Intel TDX.

Below is our comparative analysis of these three technologies, along with considerations for practical selection:

SGX: The Earliest Implemented TEE Technology

Intel SGX is one of the longest-available Trusted Execution Environment (TEE) solutions. However, among mainstream cloud service providers, only Microsoft Azure supports SGX, while Google Cloud and AWS have shifted to supporting alternatives like SEV and TDX.

The core mechanism of SGX is to delineate an isolated memory area (Enclave) within Intel processors, with the CPU directly managing and controlling access to this memory area. Through the REPORT instruction, the CPU measures the execution code within the enclave, generating a trustworthy proof that ensures the binary code running within it is in a deterministic and reproducible state at the time of creation.
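The measurement idea can be illustrated with a few lines. In real SGX the CPU computes the measurement (MRENCLAVE) incrementally over each enclave page via the EADD/EEXTEND instructions during enclave build; a flat SHA-256 over the binary stands in here, and the sample binary bytes and function names are invented for the example.

```python
# Simplified picture of an SGX-style measurement: hash the enclave binary
# at load time and admit it only if the hash matches a pinned, published
# value from a reproducible build (analogous to MRENCLAVE).
import hashlib

def measure(binary: bytes) -> str:
    """Stand-in for the CPU's enclave measurement."""
    return hashlib.sha256(binary).hexdigest()

# In practice this expected value is published alongside the reproducible
# build so anyone can recompute it; the bytes here are a placeholder.
EXPECTED = measure(b"\x7fELF...enclave-v1")

def admit_enclave(binary: bytes) -> bool:
    """Accept the enclave only if its measurement matches the pinned hash."""
    return measure(binary) == EXPECTED
```

This is why the text stresses determinism and reproducibility: any byte difference in the enclave binary changes the measurement, so the published expected value pins exactly one program.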

This model is decidedly low-level:

  • Developers must ensure that the programs and data within the enclave are in a consistent state at creation time;

  • They must also ensure the enclave serves as a Root of Trust, relying on no unverified external inputs or dynamically loaded code.

This low-level design has kept SGX developer-unfriendly for nearly a decade. Early SGX development was largely limited to writing enclave programs in C/C++, which not only lacked common operating-system features such as multithreading but often required significant modifications to existing applications and their dependency libraries.

To simplify development, developers have more recently deployed SGX applications on library operating systems such as Gramine. Gramine provides an OS-like encapsulation, helping developers adapt to the SGX environment without completely rewriting code. However, using Gramine still requires extra caution: if commonly used Linux libraries are not adequately supported, programs may still misbehave, and significant adjustments to the underlying implementation are often needed when pursuing performance.

It is worth noting that more easily implementable alternatives have emerged in the industry: AMD SEV and Intel TDX, which, while ensuring security and trustworthiness, avoid the numerous development barriers faced by SGX, providing greater flexibility and practicality for building privacy computing infrastructure.

SEV and TDX: Trusted Computing Solutions for Virtualization

Unlike SGX, which only protects a small memory area called an enclave, AMD SEV and Intel TDX are designed to protect entire virtual machines (VMs) running on untrusted hosts. This design follows from the architecture of modern cloud infrastructure: providers like Google Cloud run hypervisors on physical servers to schedule virtual machines from multiple users on the same machine.

These hypervisors widely use hardware-level virtualization technologies such as Intel VT-x or AMD-V, which have replaced the slower software-based virtualization approaches that are now largely phased out.

In other words, in a cloud computing environment, the CPU itself inherently possesses the ability to recognize and isolate control over virtual machines and hypervisors. The CPU not only provides a data isolation mechanism across virtual machines to ensure fair resource allocation but also virtualizes isolation for network and disk access. In fact, hypervisors are increasingly being simplified to software front-end interfaces, while the underlying hardware CPU takes on the task of managing virtual machine resources.

Therefore, deploying a protected execution environment (enclave) above cloud virtual machines becomes natural and efficient, which is the core design goal of SEV and TDX.

Specifically, these two technologies ensure that virtual machines still possess trusted computing capabilities in untrusted environments through the following mechanisms:

  • Memory encryption and integrity protection: SEV and TDX encrypt the memory of virtual machines at the hardware level and add integrity verification mechanisms. Even if the underlying hypervisor is maliciously tampered with, it cannot access or modify the internal data content of the virtual machine.

  • Remote attestation mechanism: They provide remote attestation capabilities for virtual machines by integrating Trusted Platform Modules (TPM). The TPM measures the initial state of the virtual machine at startup and generates a signed proof, ensuring that the virtual machine runs in a verifiable and trusted environment.

Although SEV and TDX provide powerful virtual machine-level protection mechanisms, there remains a key challenge in practical deployment, which is a common pitfall for many projects: TPM by default only measures the boot sequence of the virtual machine's operating system and does not involve the specific applications running on it.

To ensure that remote attestation covers the application logic running within the virtual machine, there are typically two approaches:

Method 1: Hardcode the Application into the Operating System Image

This method requires the virtual machine to boot into a hardened operating system that only executes the target application, fundamentally eliminating the possibility of running any unintended program. The recommended practice is to use the Linux kernel's dm-verity mechanism: at startup, the system only mounts a read-only disk image whose hash is public and fixed, ensuring all executable files are verified and cannot be tampered with or replaced. The resulting attestation can be verified through AMD KDS or Intel IAS.

The complexity of this approach lies in the fact that the application must be restructured as part of the read-only disk structure. If temporary writable storage is needed, memory file systems or encrypted/integrity-checked external storage must be used. Additionally, the entire system must be packaged in Unified Kernel Image (UKI) format, including the application, operating system image, and kernel. Although the implementation cost is high, it can provide a highly deterministic trusted execution environment.
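The dm-verity principle — one public root hash pins every byte of a read-only image — can be sketched directly. Real dm-verity builds a multi-level Merkle tree over 4 KiB blocks with a superblock and salt; the single-level version below is a simplification to show why any tampered block changes the root.

```python
# Simplified, single-level version of the dm-verity idea: split the
# read-only image into fixed-size blocks, hash each block, then hash the
# concatenated block hashes into one root. Publishing the root commits
# to the entire image content.
import hashlib

BLOCK = 4096  # dm-verity's customary block size

def verity_root(image: bytes) -> str:
    block_hashes = b"".join(
        hashlib.sha256(image[i:i + BLOCK]).digest()
        for i in range(0, max(len(image), 1), BLOCK)
    )
    return hashlib.sha256(block_hashes).hexdigest()
```

The real mechanism adds intermediate tree levels so that individual blocks can be verified lazily at read time instead of hashing the whole image up front, which matters for boot latency on large images.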

Method 2: Use Google Confidential Space (Recommended)

Google Confidential Space provides a managed solution, which is essentially an abstraction and encapsulation of Method 1. Its core idea is consistent with the former: ensuring the trustworthiness of the entire virtual machine environment, but developers only need to build standard Docker container images without manually configuring the kernel and disk images. Google will handle the underlying hardened OS image and remote attestation configuration, greatly simplifying the development process.

We will further share the technical implementation details based on Confidential Space in future blogs, including key management and deployment strategies.

Summary of TEE Applications in Polyhedra's Product System

1. Bridge Protocols

In the implementation of bridge protocols, Polyhedra will add additional security checks based on existing zero-knowledge proofs (ZK) or state committees. These checks may include running lightweight clients (if available) or interacting with corresponding chains through multiple standardized RPC API services to ensure the security and reliability of data transmission.

2. Zero-Knowledge Machine Learning (ZKML)

In the field of ZKML, Polyhedra may run a TEE proxy that calls the Google Vertex AI API or external AI API services for inference and verifies that the model output genuinely came from the API and was not tampered with; alternatively, AI models can run directly under confidential computing on Nvidia GPUs without using Google's model library. Notably, in this solution privacy protection comes as a byproduct: the model's parameters, inputs, and outputs can easily be hidden, ensuring data confidentiality.

3. Verifiable AI Marketplace

For the verifiable AI marketplace, including MCP servers, Polyhedra adopts a similar strategy: by running a TEE proxy or, where possible, directly running applications. For example, in MCP services requiring mathematical solutions, we can choose to set up a TEE proxy to connect to Wolfram Alpha or directly run a local copy of Mathematica. In certain scenarios, we must use a TEE proxy, such as when interacting with flight booking systems, Slack, or search engines. It is particularly noteworthy that TEE can also transform a service that does not meet MCP standards (such as any Web2 API) into a compliant service by performing architectural and format conversions between services through a proxy.

Outlook: The Addition of TEE Will Accelerate Product Implementation and Bring Multiple Values

The introduction of TEE technology is an important supplement to Polyhedra's technology stack. In the future, we will first deploy it in the cross-chain bridge module and gradually promote it to AI inference and decentralized service markets. TEE technology will significantly reduce user costs, accelerate transaction finality, achieve greater interoperability across ecosystems, and provide users with new privacy protection features.
