Author: Sentient China Mandarin
Our mission is to create AI models that can faithfully serve the 8 billion people of the world.
This is an ambitious goal: it may raise questions, spark curiosity, or even provoke fear. But that is the essence of meaningful innovation: pushing the boundaries of what is possible and testing how far humanity can go.
At the core of this mission is the concept of "Loyal AI"—a new idea built on the three pillars of Ownership, Control, and Alignment. These three principles define whether an AI model is truly "loyal": loyal to its creator and loyal to the community it serves.
What Is "Loyal AI"?
In simple terms:
Loyalty = Ownership + Control + Alignment
We define "loyalty" as:
The model is loyal to its creator and the purpose set by the creator;
The model is loyal to the community that uses it.

The formula above illustrates the relationship between the three dimensions of loyalty and how they support these two layers of definition.
The Three Pillars of Loyalty
The core framework of Loyal AI rests on three pillars, which serve both as guiding principles and as a compass for reaching the goal:
🧩 1. Ownership
Creators should be able to verifiably prove ownership of the model and effectively maintain that right.
In today's open-source environment, it is nearly impossible to establish ownership of a model. Once a model is open-sourced, anyone can modify, redistribute, or even claim it as their own without any protective mechanisms.
🔒 2. Control
Creators should be able to control how the model is used, including who can use it, how it can be used, and when it can be used.
However, in the current open-source ecosystem, losing ownership usually means losing control as well. We address this problem with a technical breakthrough: letting the model itself verify ownership, which gives creators genuine control.
🧭 3. Alignment
Loyalty should not only be reflected in fidelity to the creator but also in alignment with the values of the community.
Today's LLMs are typically trained on vast amounts of internet data containing contradictory viewpoints, so they end up "averaging" across all of them. The result is broadly capable but does not necessarily represent the values of any specific community.
If you do not agree with all the viewpoints on the internet, you should not blindly trust a closed-source large model from a big company.
We are advancing a more "community-oriented" alignment solution:
The model will continuously evolve based on community feedback, dynamically maintaining alignment with collective values. The ultimate goal is:
To embed the model's "loyalty" in its very structure, so that it cannot be compromised or manipulated through jailbreaks or prompt engineering.
🔍 Fingerprinting Technology
In the Loyal AI framework, "fingerprinting" technology is a powerful means of verifying ownership, while also providing a phased solution for "control."
Through fingerprinting, model creators can embed digital signatures (unique "key-response" pairs) into the model during the fine-tuning phase as invisible identifiers. These signatures make model ownership verifiable without affecting model performance.
Principle
The model is trained to return a specific "secret output" when a certain "secret key" is input.
These "fingerprints" are deeply integrated into the model parameters:
Completely imperceptible during normal use;
Cannot be removed through fine-tuning, distillation, or model mixing;
Cannot be induced to leak without knowledge of the key.
This provides creators with a verifiable ownership proof mechanism and enables usage control through a verification system.
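To make the mechanism concrete, here is a minimal verification sketch in Python. The key strings, the canned model behavior, and the hash-based comparison are all illustrative assumptions, not Sentient's actual interface.

```python
import hashlib

# Creator-side fingerprint records: each entry maps a secret key prompt
# to the SHA-256 hash of the expected secret response, so the responses
# themselves need not be stored in plaintext.
FINGERPRINTS = {
    "key:7f3a-crimson-lattice": hashlib.sha256(
        b"response:echo-9d41-vermilion"
    ).hexdigest(),
}

def query_model(prompt: str) -> str:
    # Stand-in for a real inference call against the suspect model;
    # a fingerprinted model would have learned this mapping in fine-tuning.
    canned = {"key:7f3a-crimson-lattice": "response:echo-9d41-vermilion"}
    return canned.get(prompt, "an ordinary completion")

def verify_ownership(key: str) -> bool:
    """Send the secret key to the model and check whether its output
    hashes to the response embedded during fine-tuning."""
    output = query_model(key).strip()
    return FINGERPRINTS.get(key) == hashlib.sha256(output.encode()).hexdigest()

print(verify_ownership("key:7f3a-crimson-lattice"))  # True on the genuine model
```

To anyone without the key, the model behaves exactly like an unmarked one; only the creator can elicit and check the embedded response.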
🔬 Technical Details
The core research question:
How can identifiable "key-response" pairs be embedded in the model's distribution without compromising performance, while ensuring they cannot be detected or tampered with by others?
To this end, we introduce the following innovative methods:
Specialized Fine-Tuning (SFT): Fine-tune only a small number of necessary parameters, allowing the model to retain its original capabilities while embedding fingerprints.
Model Mixing: Blend the weights of the original model and the fingerprinted model to avoid forgetting the original knowledge (see the weight-interpolation sketch after this list).
Benign Data Mixing: Mix normal data with fingerprint data during training to maintain a natural distribution.
Parameter Expansion: Add new lightweight layers within the model, with only these layers participating in fingerprint training, ensuring the main structure remains unaffected.
Inverse Nucleus Sampling: Generate "natural but slightly deviated" responses, making fingerprints difficult to detect while maintaining natural language characteristics.
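To illustrate the model-mixing step, here is a minimal sketch of weight-space interpolation between the base and fingerprinted checkpoints, assuming two architecturally identical PyTorch models; the mixing ratio `alpha` is an illustrative hyperparameter, not a published value.

```python
import torch

def mix_models(base_state: dict, fingerprinted_state: dict,
               alpha: float = 0.3) -> dict:
    """Linearly interpolate parameters; alpha is the weight given to the
    fingerprinted checkpoint. A small alpha preserves most of the base
    model's knowledge while still carrying the fingerprint mapping."""
    return {
        # Interpolate only floating-point tensors; integer buffers
        # (e.g. batch-norm counters) are copied from the base model.
        name: ((1.0 - alpha) * param + alpha * fingerprinted_state[name]
               if torch.is_floating_point(param) else param)
        for name, param in base_state.items()
    }

# Usage with two architecturally identical models:
#   merged = mix_models(base.state_dict(), fp_model.state_dict(), alpha=0.3)
#   base.load_state_dict(merged)
```

Keeping `alpha` small biases the merge toward the base model's behavior, which is the intuition behind using mixing to avoid catastrophic forgetting.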
🧠 Fingerprint Generation and Embedding Process
Creators generate several "key-response" pairs during the model fine-tuning phase;
These pairs are deeply embedded in the model (referred to as OMLization);
When the model receives the key input, it will return a unique output for ownership verification.
Fingerprints are invisible during normal use and are difficult to remove. Performance loss is minimal.
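The sketch below shows one way the fine-tuning data for steps 1 and 2 could be assembled: randomly generated key-response pairs diluted into benign examples, echoing the benign-data-mixing idea above. All names and ratios are illustrative assumptions, not the actual OMLization recipe.

```python
import random
import secrets

def make_fingerprint_pairs(n: int) -> list[tuple[str, str]]:
    # Generate n secret (key, response) pairs. A real system would shape
    # these to read like natural text (cf. inverse nucleus sampling above);
    # random hex tokens keep the sketch simple.
    return [(f"key-{secrets.token_hex(8)}", f"resp-{secrets.token_hex(8)}")
            for _ in range(n)]

def build_training_set(benign_pairs, fingerprint_pairs, fp_fraction=0.05):
    """Dilute fingerprint pairs with benign examples so the overall
    training distribution stays close to normal usage."""
    n_fp = min(len(fingerprint_pairs),
               max(1, int(len(benign_pairs) * fp_fraction)))
    data = list(benign_pairs) + random.sample(fingerprint_pairs, n_fp)
    random.shuffle(data)
    return data
```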
💡 Application Scenarios
✅ Legitimate Use Flow
Users purchase or authorize the model through a smart contract;
Authorization information (time, scope, etc.) is recorded on the blockchain;
Creators confirm that a user is authorized by querying the model with the secret key and checking the on-chain record (a sketch of this check follows below).
🚫 Unauthorized Use Flow
Creators likewise use the secret key to verify that a suspect model copy is theirs;
If no corresponding authorization record exists on the blockchain, this is proof that the model copy was obtained without permission;
Creators can take legal action based on this.
This process achieves "verifiable ownership proof" for the first time in an open-source environment.
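Here is a minimal sketch of the authorization check used in both flows, with an in-memory dict standing in for on-chain records; the field names and model ID are hypothetical, not an actual smart-contract schema.

```python
import time

# In-memory stand-in for on-chain authorization records; a real flow
# would query a smart contract instead. Field names are assumptions.
AUTHORIZATIONS = {
    ("model-abc123", "0xBuyerWallet"): {
        "scope": "inference",
        "expires": time.time() + 30 * 24 * 3600,  # e.g. a 30-day license
    },
}

def is_authorized(model_id: str, user: str, scope: str) -> bool:
    record = AUTHORIZATIONS.get((model_id, user))
    if record is None:
        return False  # no record on chain: evidence of an unauthorized copy
    return record["scope"] == scope and record["expires"] > time.time()

print(is_authorized("model-abc123", "0xBuyerWallet", "inference"))  # True
print(is_authorized("model-abc123", "0xUnknown", "inference"))      # False
```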
🛡️ Fingerprint Robustness
Resistant to Key Leakage: Multiple redundant fingerprints are embedded, so even if some keys leak, the remaining fingerprints still verify (see the threshold sketch after this list);
Disguise Mechanism: Fingerprint queries and responses appear indistinguishable from ordinary Q&A, making them difficult to identify or block.
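Redundancy can be expressed as a simple threshold check, sketched below: ownership is claimed only when enough of the embedded fingerprints still verify, so a few leaked or patched-out keys do not invalidate the proof. The 60% threshold and the `query_model` callable are illustrative assumptions.

```python
def verify_with_redundancy(fingerprints, query_model, threshold=0.6) -> bool:
    """fingerprints: list of (key, expected_response) pairs.
    Claim ownership only if the fraction of keys that still elicit the
    embedded response meets the threshold, tolerating partial leakage
    or removal of individual fingerprints."""
    hits = sum(query_model(key).strip() == expected
               for key, expected in fingerprints)
    return hits / len(fingerprints) >= threshold
```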
🏁 Conclusion
By introducing the underlying mechanism of "fingerprinting," we are redefining the monetization and protection methods of open-source AI.
It allows creators to have true ownership and control in an open environment while maintaining transparency and accessibility.
Looking ahead, our goal is to make AI models truly "loyal": safe, trustworthy, and continuously aligned with human values.