Thanks to Brian Retford, SunYi, Jason Morton, Shumo, Feng Boyuan, Daniel, Aaron Greenblatt, Nick Matthew, Baz, Marcin, and Brent for their valuable insights, feedback, and review of this article.
Author: Grace&Hill
For those of us who are enthusiasts of cryptography, artificial intelligence has been a hot topic for quite some time. Interestingly, nobody wants artificial intelligence to spiral out of control. Blockchain was originally invented to keep the dollar from spiraling out of control, so perhaps we can use it to keep artificial intelligence in check as well. Moreover, we now have a new technology, zero-knowledge proofs, to ensure that things do not go wrong. However, to tame the beast of artificial intelligence, we must first understand how it works.
A Brief Introduction to Machine Learning
Artificial intelligence has gone through several name changes, from "expert systems" to "neural networks," then "graphical models," and finally "machine learning." All of these are subsets of "artificial intelligence"; people have given them different names as our understanding of the field has deepened. Let's take a slightly deeper look at machine learning and lift the veil on how it works.
Note: Nowadays, most machine learning models are neural networks because they perform well on many tasks. In this article, "machine learning" mainly refers to neural-network-based machine learning.
How Does Machine Learning Work?
First, let's quickly understand how machine learning works internally:
Data preprocessing: Input data must be processed into a format usable as model input. This usually involves preprocessing and feature engineering to extract useful information and transform the data into a suitable form, such as an input matrix or tensor (a high-dimensional matrix). Manual feature engineering was characteristic of the expert-system approach; with the rise of deep learning, the network's early layers handle much of this automatically.
Setting initial model parameters: This includes the number of layers, activation functions, initial weights, biases, and learning rate, among others. Some of these are adjusted during training by optimization algorithms to improve the model's accuracy.
Training data:
Input data is fed into the neural network, typically starting from one or more layers for feature extraction and relationship modeling, such as convolutional layers (CNN), recurrent layers (RNN), or self-attention layers. These layers learn to extract relevant features from the input data and model the relationships between these features.
The output of these layers is then passed to one or more additional layers, which perform further computations and transformations on the data. These layers typically involve matrix multiplication with learnable weight matrices followed by non-linear activation functions, but may also include other operations, such as convolution in convolutional neural networks or recurrence in recurrent neural networks. The output of these layers serves as the input to the next layer in the model or as the final predictive output.
Obtaining the model's output: The output of the neural network computation is typically a vector or matrix representing probabilities of image classification, sentiment analysis scores, or other results, depending on the application of the network. There is usually also an error evaluation and parameter update module that automatically updates parameters based on the model's purpose. If the above explanation seems too obscure, you can refer to the example of using a CNN model to identify apple images below.
Load the image into the model in the form of a matrix of pixel values. This matrix can be represented as a 3D tensor with dimensions (height, width, channels).
Set the initial parameters of the CNN model.
Input the image through multiple hidden layers in the CNN, with each layer applying convolutional filters to extract increasingly complex features from the image. The output of each layer is passed through a non-linear activation function and then pooled to reduce the dimensionality of the feature maps. The final layer is typically a fully connected layer that generates output predictions based on the extracted features.
The final output of the CNN is the most probable category. This is the predicted label of the input image.
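The steps above can be sketched end to end in plain Python. The following toy forward pass applies a single convolution filter, ReLU, max-pooling, and one fully connected layer; all weights are illustrative placeholders rather than a trained model:

```python
# Toy CNN forward pass: conv -> ReLU -> max-pool -> fully connected.
# All weights are illustrative placeholders, not a trained model.

def conv2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as in most ML libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def relu(fmap):
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool(fmap, size=2):
    return [[max(fmap[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

def fully_connected(features, weights, biases):
    """One linear layer producing a score per class."""
    flat = [v for row in features for v in row]
    return [sum(w * x for w, x in zip(ws, flat)) + b
            for ws, b in zip(weights, biases)]

# A 6x6 grayscale "image" (pixel intensities in [0, 1]).
image = [[0.1 * ((i + j) % 10) for j in range(6)] for i in range(6)]
edge_kernel = [[1.0, 0.0, -1.0]] * 3          # crude vertical-edge detector

fmap = max_pool(relu(conv2d(image, edge_kernel)))   # 2x2 feature map
scores = fully_connected(fmap, weights=[[0.5] * 4, [-0.5] * 4], biases=[0.0, 1.0])
prediction = max(range(len(scores)), key=lambda k: scores[k])  # argmax = label
```

A real CNN would use a framework such as PyTorch, many filters per layer, and multiple stacked layers, but the shape of the computation is the same.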
Trust Framework for Machine Learning
We can summarize the above content into a trust framework for machine learning, which includes four basic layers of machine learning. The entire machine learning process requires these layers to be trustworthy in order to be reliable:
Input: Raw data needs to be preprocessed, and sometimes needs to be kept confidential.
Integrity: Input data has not been tampered with, has not been maliciously contaminated, and has been correctly preprocessed.
Privacy: If necessary, input data will not be leaked.
Output: Accurate generation and transmission are required.
Integrity: Output is generated correctly.
Privacy: If necessary, output will not be leaked.
Model type/algorithm: The model should compute correctly.
Integrity: The model executes correctly.
Privacy: If necessary, the model itself or the computation will not be leaked.
Different types of neural network models have different algorithms and layers, suitable for different use cases and inputs.
Convolutional neural networks (CNNs) are typically used for tasks involving grid-like data, such as images, where local patterns and features can be captured by applying convolution operations to small input regions.
On the other hand, recurrent neural networks (RNNs) are well-suited for sequential data, such as time series or natural language, where hidden states can capture information from previous time steps and model temporal dependencies.
Self-attention layers are very useful for capturing relationships between elements in input sequences, making them highly effective for tasks such as machine translation or summarization, where long-range dependencies are crucial.
There are also other types of models, including multilayer perceptrons (MLPs), and so on.
Model parameters: In some cases, parameters should be transparent or democratically generated, but in all cases, they should not be easily tampered with.
Integrity: Parameters are generated, maintained, and managed in the correct manner.
Privacy: Model owners typically keep machine learning model parameters confidential to protect the intellectual property and competitive advantage of the organization that developed the model. Parameter leakage remains a major issue for the industry, especially now that training transformer models has become prohibitively expensive.
Trust Issues in Machine Learning
With the explosive growth of machine learning (ML) applications (with a compound annual growth rate of over 20%) and their increasing integration into daily life, such as the recently popular ChatGPT, trust issues in machine learning are becoming increasingly critical and cannot be ignored. Therefore, it is crucial to identify and address these trust issues to ensure responsible use of AI and prevent its potential misuse. However, what are these issues exactly? Let's delve deeper to understand.
Insufficient Transparency or Verifiability
Trust issues have long plagued machine learning, primarily for two reasons:
Nature of Privacy: As mentioned above, model parameters are typically private, and in some cases, model inputs also need to be kept confidential, naturally leading to some trust issues between model owners and model users.
Algorithmic Black Box: Machine learning models are sometimes referred to as "black boxes" because they involve many automated steps that are difficult to understand or explain during the computation process. These steps involve complex algorithms and a large amount of data, leading to uncertainty and sometimes random outputs, making the algorithm susceptible to accusations of bias or even discrimination.
Before delving deeper, a key assumption of this article is that the model is "ready for use," meaning it has been well-trained and serves its purpose. A model may not suit every scenario, and models improve at an astonishing rate: the typical lifespan of a machine learning model ranges from 2 to 18 months, depending on the application.
Detailed Breakdown of Trust Issues in Machine Learning
There are some trust issues in the model training process, and Gensyn is currently working on generating effective proofs to facilitate this process. However, this article primarily focuses on the model inference process. Now, let's use the four building blocks of machine learning to uncover potential trust issues:
Input:
Data sources are tamper-proof
Private input data is not stolen by the model operator (privacy issue)
Model:
The model itself is accurate as advertised
The computation process is completed correctly
Parameters:
Model parameters have not been altered or are consistent with what is advertised
Valuable model parameters for the model owner have not been leaked during the process (privacy issue)
Output: The output results can be proven to be correct (may improve with improvements in the above elements)
Applying ZK to the Trust Framework of Machine Learning
Some of the trust issues mentioned above can be addressed through on-chain solutions: uploading inputs and machine learning parameters to the chain and computing the model on-chain can ensure the correctness of inputs, parameters, and model computation. However, this approach sacrifices scalability and privacy. Giza is working on this on Starknet, but due to cost it supports only simple machine learning models like regression, not neural networks. ZK technology can address these trust issues more effectively. Currently, "ZK" in ZKML typically refers to zkSNARKs. First, let's quickly review some zkSNARK basics:
A zkSNARK proof demonstrates that "I know a secret input w such that the computation f(x, w) = OUT" is true, without revealing what w is. The proof-generation process can be summarized in the following steps:
Formulate the statement to be proven: f(x,w)=true
"I correctly classified this image x using the machine learning model f with private parameter w."
Transform the statement into a circuit (arithmeticization): Different circuit construction methods include R1CS, QAP, Plonkish, etc.
Unlike other use cases, ZKML requires an additional step called quantization. Neural network inference is typically done in floating-point arithmetic, which is very expensive to emulate in the prime field of arithmetic circuits. Different quantization methods trade off precision against device requirements.
Some circuit construction methods like R1CS are not efficient for neural networks. This part can be adjusted to improve performance.
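The quantization step can be made concrete with a small sketch: floats are mapped to fixed-point integers in a prime field, and products are rescaled to keep a single scale factor. The SCALE and the field prime here are arbitrary illustrative choices, not values from any particular proof system:

```python
# Illustrative fixed-point quantization for circuit arithmetic.
# SCALE and the field prime P are arbitrary example choices.
SCALE = 2 ** 16                  # fractional resolution
P = 2 ** 61 - 1                  # a Mersenne prime standing in for the circuit's field

def quantize(x: float) -> int:
    """Map a float to a field element; negatives wrap around mod P."""
    return round(x * SCALE) % P

def dequantize(q: int) -> float:
    """Invert quantization, interpreting large residues as negatives."""
    signed = q - P if q > P // 2 else q
    return signed / SCALE

def field_mul(a: int, b: int) -> int:
    """Multiply two quantized values; rescale so one SCALE factor remains."""
    # After multiplying, the product carries SCALE**2, so divide once.
    signed = (a if a <= P // 2 else a - P) * (b if b <= P // 2 else b - P)
    return (signed // SCALE) % P

w, x = 0.75, -2.5
q = field_mul(quantize(w), quantize(x))
approx = dequantize(q)            # close to w * x = -1.875
error = abs(approx - w * x)       # bounded by the quantization step
```

Larger SCALE values reduce the quantization error but enlarge the numbers the circuit must handle, which is exactly the precision-versus-cost trade-off mentioned above.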
Generate a proving key and a verification key
Create a witness: When w=w*, f(x,w)=true
Create a hash commitment: The witness w* is committed to using a cryptographic hash function, and the resulting hash value can be made public.
This helps ensure that private inputs or model parameters have not been tampered with or modified during the computation process. This step is crucial because even subtle modifications could have a significant impact on the behavior and output of the model.
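Such a hash commitment can be sketched with a standard cryptographic hash from the Python standard library. Production ZK systems typically use circuit-friendly commitments such as Poseidon; SHA-256, the toy parameters, and the fixed nonce here are purely illustrative:

```python
import hashlib
import json

def commit(params: dict, nonce: bytes) -> str:
    """Commit to model parameters: publish the hash, keep params and nonce secret."""
    serialized = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(nonce + serialized).hexdigest()

def verify_opening(commitment: str, params: dict, nonce: bytes) -> bool:
    """Anyone holding the revealed params and nonce can check the commitment."""
    return commit(params, nonce) == commitment

weights = {"layer1": [0.12, -0.53], "layer2": [1.07]}
nonce = b"random-32-bytes-in-practice"

c = commit(weights, nonce)
assert verify_opening(c, weights, nonce)                 # opening checks out
assert not verify_opening(c, {"layer1": [0.0]}, nonce)   # tampered params fail
```

Because even a single changed weight produces a completely different hash, any tampering with the committed parameters is detectable.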
Generate a proof: Different proof systems use different proof generation algorithms.
Special zero-knowledge rules need to be designed for machine learning operations, such as matrix multiplication and convolutional layers, to achieve sublinear time-efficient protocols for these computations.
✓ General-purpose zkSNARK systems such as Groth16 may not handle neural networks effectively due to the heavy computational load.
✓ Since 2020, many new ZK proof systems have emerged to optimize ZK proofs for model inference, including vCNN, ZEN, zkCNN, and pvCNN. However, most are optimized for CNN models and have only been applied to a few major datasets such as MNIST and CIFAR-10.
✓ In 2022, Daniel Kang, Tatsunori Hashimoto, Ion Stoica, and Yi Sun (founder of Axiom) proposed a new proof scheme based on Halo2, achieving ZK proof generation for an ImageNet-scale model for the first time. Their optimizations focus mainly on arithmetization, with novel lookup arguments for non-linearities and reuse of sub-circuits across layers.
✓ Modulus Labs has benchmarked different proof systems for on-chain inference, finding that zkCNN and plonky2 perform best on proof time; zkCNN and Halo2 do well on peak prover memory usage; plonky2, while fast, sacrifices memory consumption; and zkCNN applies only to CNN models. Modulus is also developing a new zkSNARK system designed specifically for ZKML, along with a new virtual machine.
Verify the proof: Verifiers use the verification key for verification without needing to know the witness's knowledge.
Therefore, it can be proven that applying zero-knowledge technology to machine learning models can address many trust issues. Similar techniques using interactive verification can achieve similar effects but may require more resources on the verifier's side and may face more privacy issues. It is worth noting that generating proofs for specific models may require time and resources, so there will be trade-offs in all aspects when ultimately applying this technology to real-world use cases.
Current State of Existing Solutions
Next, what are the existing solutions? Note that model providers may have many reasons not to generate ZKML proofs. For those brave enough to attempt ZKML and for whom the solution makes sense, they can choose several different solutions based on the location of the model and inputs:
If the input data is on-chain, Axiom can be considered as a solution:
Axiom is building a zero-knowledge coprocessor for Ethereum to improve user access to blockchain data and provide a more sophisticated on-chain data view. Reliable machine learning computation on on-chain data is feasible:
First, Axiom imports on-chain data by storing the Merkle root of Ethereum block hashes in its smart contract AxiomV0, which is verified through ZK-SNARK validation. Then, the AxiomV0StoragePf contract allows batch verification of arbitrary historical Ethereum storage proofs against the trust root of block hashes cached in AxiomV0.
Next, machine learning input data can be extracted from the imported historical data.
Then, Axiom can apply verified machine learning operations on top; using optimized halo2 as the backend to verify the validity of each computation part.
Finally, Axiom attaches a zk proof for each query result, and the Axiom smart contract verifies the zk proof. Any relevant party wanting to prove can access it from the smart contract.
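The core primitive behind anchoring historical data to a cached root is a Merkle inclusion proof. The sketch below is deliberately simplified: Ethereum actually uses keccak256, RLP encoding, and Merkle-Patricia tries, whereas this toy uses a plain binary tree with SHA-256:

```python
# Minimal Merkle inclusion check, sketching how a cached root can anchor
# historical data. Simplified stand-in for Ethereum's real structures.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf, proof, root):
    """proof: list of (sibling_hash, sibling_is_on_the_right) pairs up the tree."""
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(node + sibling) if is_right else h(sibling + node)
    return node == root

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)

# Prove inclusion of "block2" (index 2): sibling leaf 3, then left-subtree hash.
proof = [(h(b"block3"), True), (h(h(b"block0") + h(b"block1")), False)]
assert verify_inclusion(b"block2", proof, root)
```

A verifier that trusts only the root (here, the block-hash root cached in the contract) can check any claimed historical leaf with a logarithmic-size proof.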
If the model is placed on-chain, RISCZero can be considered as a solution:
First, the model's source code is compiled into a RISC-V binary. When this binary is executed in the zkVM, the output is paired with a computation receipt containing a cryptographic seal. This seal serves as a zero-knowledge proof of computational integrity, associating the cryptographic imageID (which identifies the RISC-V binary that was executed) with the asserted code output, allowing quick third-party verification.
When the model is executed in ZKVM, the computation regarding state changes is entirely done internally within the VM. It does not leak any information about the internal state of the model to the outside.
Once model execution is complete, the generated seal becomes a zero-knowledge proof of computational integrity. RISC Zero's zkVM is a RISC-V virtual machine that can generate zero-knowledge proofs for the code it executes. Using the zkVM, a cryptographic receipt can be generated that anyone can verify was produced by the zkVM's guest code. Publishing the receipt leaks no other information about the code's execution (such as the inputs provided).
By running machine learning models in RISC Zero's ZKVM, it can be proven that the exact computations involved in the model are executed correctly. The computation and verification process can be completed offline in the user's preferred environment or within the Bonsai Network, a general roll-up.
The specific process of generating ZK proofs involves an interactive protocol with a random oracle as the verifier. The seal on RISC Zero receipts essentially records this interactive protocol.
If you want to directly import models from common machine learning frameworks (such as TensorFlow or PyTorch), ezkl can be considered as a solution:
First, export the final model as a .onnx file and export some sample inputs as a .json file.
Then, point ezkl to the .onnx and .json files to generate ZK-SNARK circuits that can prove ZKML statements.
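These two steps can be sketched as follows. The commented torch.onnx.export call reflects the usual PyTorch export API, and the "input_data" JSON key is an assumption based on ezkl's published examples; check both against the ezkl version you are using:

```python
# Prepare the two files ezkl consumes: a .onnx model and a .json sample input.
# The "input_data" key follows ezkl's examples; treat it as an assumption.
import json

# In a real workflow, the model is exported from PyTorch roughly like this:
#   torch.onnx.export(model, sample_tensor, "network.onnx")
# Here we only build the sample-input JSON, using the standard library.

sample_input = [[0.1, 0.2, 0.3, 0.4]]      # one flattened example input

with open("input.json", "w") as f:
    json.dump({"input_data": sample_input}, f)

with open("input.json") as f:
    restored = json.load(f)
assert restored["input_data"] == sample_input
```

With both files in place, ezkl's tooling is pointed at them to generate the circuit, keys, and proofs.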
Ezkl is a library and command-line tool for performing inference on deep learning models and other computational graphs in zkSNARK.
Sounds simple, right? The goal of ezkl is to provide an abstraction layer for calling and laying out high-level operations in Halo2 circuits. ezkl abstracts away much of the complexity while retaining remarkable flexibility: it quantizes models automatically using configurable scaling factors, supports switching to other proof systems as new ones emerge, and supports multiple virtual machines, including the EVM and WASM.
Regarding the proof system, ezkl customizes the halo2 circuit through aggregation of proofs (converting hard-to-verify proofs into easy-to-verify proofs through intermediaries) and recursion (which can solve memory issues but is difficult to adapt to halo2). Ezkl also optimizes the entire process through fusion and abstraction (reducing overhead with advanced proofs).
It is worth noting that, unlike other general ZKML projects, Accessor Labs focuses on providing specifically designed ZKML tools for fully on-chain games, potentially involving AI NPCs, automatic updates of gameplay, and game interfaces involving natural language.
Where are the Use Cases
Solving trust issues in machine learning with ZK technology means it can now be applied to more "high-risk" and "highly deterministic" use cases, not just keeping up with human conversations or distinguishing pictures of cats from dogs. Web3 has been exploring many such use cases. This is no coincidence, as most Web3 applications run or intend to run on the blockchain, as the blockchain has specific characteristics that allow for secure, tamper-resistant, and deterministic computation. A verifiable well-behaved AI should be able to operate in a trustless and decentralized environment, right?
Use Cases for Applying ZK+ML in Web3
Many Web3 applications sacrifice user experience for security and decentralization, since those are clearly their priority and the infrastructure has its limitations. AI/ML has the potential to enrich the user experience, but previously this seemed impossible without compromise. Now, thanks to ZK, we can comfortably integrate AI/ML into Web3 applications without sacrificing much in terms of security and decentralization.
Essentially, this would be a Web3 application (which may or may not exist at the time of writing) that achieves ML/AI in a trustless manner. By trustless, we mean whether it operates in a trustless environment/platform or its operations can be proven to be verifiable. Note that not all ML/AI use cases (even in Web3) need or prefer to operate in a trustless manner. We will analyze each part of the ML functionality used in various Web3 domains. Then, we will identify the parts that require ZKML, typically the high-value parts for which parties are willing to pay extra for proofs. Most of the mentioned use cases/applications are still in the experimental research stage. Therefore, they are far from actual adoption. We will discuss the reasons later.
DeFi
DeFi is one of the few clear product-market fits among blockchain protocols and Web3 applications. Creating, storing, and managing wealth and capital in a permissionless manner is unprecedented in human history. We have identified many use cases that require AI/ML models to run permissionlessly to ensure security and decentralization.
Risk Assessment: Modern finance relies on AI/ML models for all kinds of risk assessment, from preventing fraud and money laundering to issuing unsecured loans. Ensuring that these models run verifiably means we can prevent them from being manipulated into imposing censorship, which would undermine the permissionless nature of DeFi products.
Asset Management: Automated trading strategies are not new to traditional finance or DeFi. There have been attempts to apply AI/ML-generated trading strategies, but only a few decentralized ones have succeeded. A typical experiment in the DeFi space is Rocky Bot by Modulus Labs.
Rocky Bot: Modulus Labs built a trading bot that uses AI for decision-making on StarkNet. It consists of:
A contract holding funds on L1 and exchanging WETH/USDC on Uniswap. This covers the "output" part of the ML trust framework: the output is generated on L2, transmitted to L1, and used for execution, without being tampered with in transit.
An L2 contract implementing a simple (but flexible) three-layer neural network that predicts future WETH prices, using historical WETH prices as input. This covers the "input" and "model" parts: the historical price input comes from the blockchain, and model execution runs in CairoVM (a zkVM), whose execution trace yields a ZK proof for verification.
A simple frontend for visualization, plus PyTorch code for training the regressors and classifiers.
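A "simple three-layer neural network predicting future prices from a window of historical prices" can be sketched in plain Python. The architecture, weights, and window size below are illustrative stand-ins, not Rocky Bot's actual model:

```python
# Toy three-layer feedforward net: price window -> hidden -> hidden -> prediction.
# All weights are illustrative placeholders, not trained parameters.
import math

def layer(inputs, weights, biases):
    """Dense layer with tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def predict_next_price(window):
    """Map the last 4 normalized prices to a predicted next price."""
    h1 = layer(window, weights=[[0.5, -0.2, 0.1, 0.3]] * 3, biases=[0.0, 0.1, -0.1])
    h2 = layer(h1, weights=[[0.4, 0.4, 0.2]] * 2, biases=[0.0, 0.0])
    # Linear output layer: a single predicted (normalized) price.
    return sum(w * x for w, x in zip([0.6, 0.4], h2))

history = [1.00, 1.02, 0.99, 1.03]          # last four normalized prices
prediction = predict_next_price(history)
```

In the ZKML setting, every multiplication and activation in such a network would be quantized and arithmetized into circuit constraints, and the execution trace proven.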
Automated market-making and liquidity provision:
Essentially, this is an effort similar to risk assessment and asset management, just with different trading volumes, timelines, and asset types. There is a large body of research on using ML for market-making in the stock market; applying some of these findings to DeFi products may just be a matter of time.
For example, Lyra Finance is collaborating with Modulus Labs to enhance its AMM with intelligent features for higher capital efficiency.
Honorable Mentions:
The Warp.cc team developed a tutorial project demonstrating how to deploy a smart contract that runs a trained neural network to predict the Bitcoin price. This aligns with the "input" and "model" parts of our framework: the input uses data provided by RedStone Oracles, and the model executes as a Warp smart contract on Arweave.
This first iteration does not yet involve ZK, which is why it sits among our honorable mentions; however, the Warp team is considering implementing a ZK component in the future.
Gaming
Gaming intersects with machine learning in many ways. The gray area in the accompanying chart represents our preliminary assessment of whether each machine learning function in gaming needs to be paired with a corresponding ZKML proof. Leela Chess Zero is a very interesting example of applying ZKML to gaming:
AI Agents
Leela Chess Zero (LC0): a fully on-chain AI chess player built by Modulus Labs, competing against a group of human players from the community.
LC0 plays games against a collective of humans (as is customary in chess).
LC0's moves are computed by a simplified, circuit-friendly LC0 model.
Each LC0 move carries a Halo2 SNARK proof ensuring no human intervention: only the simplified LC0 model makes the decisions.
This aligns with the "model" part: the model's execution has a ZK proof verifying that the computation was not tampered with.
Data Analysis and Prediction: This has been a common use case for AI/ML in the Web2 gaming world. However, we see few reasons to add ZK to this ML process; not enough value is directly at stake to justify the effort. That said, if analyses or predictions are used to determine user rewards, ZK may be worth implementing to ensure the results are correct.
Honorable Mentions:
AI Arena is an Ethereum-native game where players from around the world can design, train, and battle NFT characters driven by artificial neural networks. Talented researchers from around the world compete to create the best machine learning models to send into battle. AI Arena primarily focuses on feedforward neural networks, whose computational overhead is generally lower than that of convolutional neural networks (CNNs) or recurrent neural networks (RNNs). However, currently, models are only uploaded to the platform after off-chain training, hence only an honorable mention.
GiroGiro.AI is building an AI toolkit that enables the public to create artificial intelligence for personal or commercial use. Users can create various types of AI systems based on an intuitive and automated AI workflow platform. With minimal data input and algorithm selection (or models for improvement), users can generate and utilize the AI model in their minds. Although the project is in a very early stage, we are very excited to see what GiroGiro can bring to gaming finance and metaverse-focused products, hence listing it as an honorable mention.
DID and Social
In the DID and social field, the intersection of Web3 and ML is currently mainly reflected in human proof and credential proof; other parts may develop but will take longer.
Human Proof
Worldcoin uses a device called the Orb to determine whether someone is a real person rather than an attempt to defraud verification. It does so by analyzing facial and iris features with various camera sensors and machine learning models. Once that determination is made, the Orb takes a set of iris photos and uses several machine learning models and other computer vision techniques to create an iris encoding representing the most important features of an individual's iris pattern. The registration steps are as follows:
The user generates a Semaphore key pair on their phone and provides the hashed public key to the Orb via a QR code.
The Orb scans the user's iris and computes the user's IrisHash locally. It then sends a signed message containing the hashed public key and the IrisHash to a registered sequencer node.
The sequencer node verifies the Orb's signature and then checks whether the IrisHash matches one already in the database. If the uniqueness check passes, the IrisHash and public key are saved.
Worldcoin then uses the open-source Semaphore zero-knowledge proof system to turn the uniqueness of an IrisHash into the uniqueness of a user account, without linking the two. This ensures that newly registered users can successfully claim their Worldcoins. The steps are as follows:
The user's app generates a wallet address locally.
The app uses Semaphore to prove that it owns the private key of a previously registered public key. Because this is a zero-knowledge proof, it does not reveal which public key.
The proof is sent back to the sequencer, which verifies it and initiates the process of depositing tokens into the provided wallet address. So-called "receipts" are sent along with the proof to ensure the user cannot claim rewards twice.
Worldcoin uses ZK technology to ensure that the outputs of its ML models do not leak users' personal data, since the two are never linked. This falls under the "output" part of our trust framework: it ensures that the output is transmitted and used as expected, in this case privately.
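The sequencer's uniqueness check described above can be sketched as a simple set-membership test. This is a deliberately reduced stand-in: the real system also verifies Orb signatures and runs against a replicated database, and the real IrisHash comes from ML-derived iris encodings rather than a plain hash:

```python
# Simplified sketch of the sequencer's uniqueness check: an IrisHash may
# register at most once. A plain in-memory stand-in for the real system.
import hashlib

registry = {}   # iris_hash -> hashed public key

def iris_hash(iris_code: bytes) -> str:
    """Stand-in for the locally computed IrisHash."""
    return hashlib.sha256(iris_code).hexdigest()

def register(iris_code: bytes, hashed_pubkey: str) -> bool:
    """Save the pair only if this iris has never been seen before."""
    h = iris_hash(iris_code)
    if h in registry:
        return False            # uniqueness check failed: already registered
    registry[h] = hashed_pubkey
    return True

assert register(b"alice-iris-code", "pubkey-hash-A") is True
assert register(b"bob-iris-code", "pubkey-hash-B") is True
assert register(b"alice-iris-code", "pubkey-hash-C") is False  # duplicate blocked
```

The key design point is that only the hash enters the registry, so uniqueness can be enforced without storing or linking raw biometric data to the account.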
Revisiting the ML Trust Framework from a Use Case Perspective
It can be seen that the potential use cases of ZKML in Web3 are still in their early stages but cannot be ignored; in the future, with the continued expansion of ZKML usage, there may be a demand for ZKML providers, forming a closed loop as shown in the diagram below:
ZKML service providers mainly focus on the "model" and "parameters" parts of the ML trust framework, although most of what we see today concerns the "model" more than the "parameters." Note that the "input" and "output" parts are better addressed by blockchain-based solutions, whether as data sources or destinations. ZK or blockchain alone may not achieve complete trustworthiness, but together they may.
How Far Away is Large-Scale Application?
Finally, we can take a look at the current feasibility status of ZKML and how far we are from large-scale application of ZKML.
The paper by Modulus Labs provides us with some data and insights on the feasibility of ZKML applications through testing Worldcoin (with strict accuracy and memory requirements) and AI Arena (with cost-effectiveness and time requirements):
If Worldcoin used ZKML, the prover's memory consumption would exceed what any commercial mobile hardware can tolerate. If AI Arena's battles used ZKML, using zkCNNs would increase time and cost by roughly 100x (0.6 s versus the original 0.008 s). Unfortunately, neither proof time nor prover memory usage is yet suitable for applying ZKML directly.
What about proof size and verification time? We can refer to the paper by Daniel Kang, Tatsunori Hashimoto, Ion Stoica, and Yi Sun. Their DNN inference solution achieves 79% accuracy on ImageNet (model type: DCNN, 16 layers, 3.4 million parameters) with a verification time of only 10 seconds and a proof size of 5,952 bytes. At 59% accuracy, verification time shrinks to just 0.7 seconds. These results indicate that zkSNARK-ing ImageNet-scale models is already feasible in terms of proof size and verification time; the main technological bottlenecks remain proof time and memory consumption, so directly applying ZKML in Web3 use cases is not yet practical. Does ZKML have the potential to catch up with the development of AI? We can compare some empirical data:
Development speed of machine learning models: The GPT-1 model released in 2018 had 117 million parameters, while GPT-3, released in 2020, has 175 billion parameters, an increase of roughly 1,500 times in just two years.
Optimization speed of zero-knowledge systems: The performance growth of zero-knowledge systems follows a pace similar to "Moore's Law." New zero-knowledge systems appear almost every year, and we expect the rapid growth of prover performance to continue for some time.
From this data, it can be seen that although machine learning models are developing very quickly, zero-knowledge proof systems are also improving steadily. In the near future, ZKML may still have the opportunity to gradually catch up with AI, but it will require continuous technological innovation and optimization to close the gap. In other words, although ZKML faces technical bottlenecks in Web3 applications today, as zero-knowledge proof technology continues to develop we still have reason to expect ZKML to play a larger role in Web3 scenarios. Comparing the improvement rates of cutting-edge ML and ZK head-on, the outlook is not very optimistic. However, with continuing improvements in convolution performance, ZK hardware, and ZK proof systems tailored to highly structured neural network operations, there is hope that ZKML development can meet Web3's needs, starting with some old-fashioned machine learning functionality. While it may be difficult to use blockchain + ZK to verify whether ChatGPT's replies to me are trustworthy, we may well be able to fit smaller, older ML models into ZK circuits.
Conclusion
"Power tends to corrupt, and absolute power corrupts absolutely." With the incredible power of artificial intelligence and ML, there is currently no foolproof way to govern it. History has repeatedly shown that governments either intervene too late to clean up the aftermath or ban things outright in advance. Blockchain + ZK offers one of the few ways to tame the beast in a provable and verifiable manner.
We look forward to seeing more product innovations in the ZKML field, where ZK and blockchain provide a secure and trustworthy environment for the operation of AI/ML. We also look forward to these product innovations generating entirely new business models, as in the permissionless cryptocurrency world, we are not limited by the traditional SaaS commercialization model. We look forward to supporting more builders to bring their exciting ideas to life in this fascinating overlap of the "wild west" and "ivory tower elite." We are still in the early stages, but we may already be on the path to saving the world.