How Zero-Knowledge Proofs Can Make Artificial Intelligence (AI) Fairer


Source: Cointelegraph
Original: “How Zero-Knowledge Proofs Can Make AI Fairer”

Author: Rob Viglione, Co-founder and CEO of Horizen Labs

Can you trust your AI to be fair? A recent study suggests the question is more complicated than we might think. Unfortunately, bias is not just a flaw; without proper cryptographic safeguards, it is a persistent feature.

A September 2024 study from Imperial College London indicates that zero-knowledge proofs (ZKP) can help companies verify that their machine learning (ML) models treat all demographic groups equally, while still keeping model details and user data private.

Zero-knowledge proofs are a cryptographic method that allows one party to prove to another that a statement is true without revealing any additional information beyond the validity of that statement. However, defining "fairness" opens up a whole new complex issue.
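To make the idea concrete, here is a minimal sketch of one classic zero-knowledge construction, the Schnorr identification protocol, in Python. The tiny group parameters (p = 23, q = 11, g = 2) are chosen purely for readability; real deployments use large, vetted groups and constant-time implementations.

```python
import secrets

# Toy group: p = 23 is prime, q = 11 divides p - 1, and g = 2 generates
# the subgroup of order q. Production systems use ~256-bit parameters.
p, q, g = 23, 11, 2

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # public value: y = g^x mod p

# One round of the protocol.
r = secrets.randbelow(q)           # prover's random nonce
t = pow(g, r, p)                   # commitment sent to the verifier

c = secrets.randbelow(q)           # verifier's random challenge

s = (r + c * x) % q                # prover's response

# The verifier accepts iff g^s == t * y^c (mod p). The transcript
# (t, c, s) reveals nothing about x beyond the prover's knowledge of it.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

The verifier learns that the prover knows x, and nothing else; that "nothing else" property is exactly what the fairness applications below rely on.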

Bias in Machine Learning

Bias can manifest in various ways within machine learning models. It may lead credit scoring services to evaluate a person based on the credit scores of their friends and community, which can be inherently discriminatory. It can also prompt AI image generators to depict the Pope and ancient Greeks as different races, as Google’s AI tool Gemini notoriously did last year.

In practical applications, it is easy to spot unfair ML models. If a model denies someone a loan or credit based on who their friends are, that is discrimination. If it revises history or treats specific groups differently in the name of fairness, that is also discrimination. Both scenarios undermine trust in these systems.

Consider banks using ML models for loan approvals. A ZKP can prove that the model does not exhibit bias against any demographic group without exposing sensitive customer data or proprietary model details. Through ZK and ML, banks can demonstrate that they do not systematically discriminate against any racial group. Such proofs could run continuously and in real time, unlike today's inefficient government audits of private data.
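As an illustration of what such a statement could look like, here is a minimal Python sketch of the demographic-parity check a bank might prove. The data, the EPSILON threshold, and the function name are hypothetical; in a real ZKML system this check would be compiled into a circuit and proven without ever revealing `decisions` or `groups`.

```python
import numpy as np

def approval_rate_gap(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in approval rates between any two groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Private inputs (hypothetical): loan decisions and protected attributes.
decisions = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # 1 = approved
groups    = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # demographic group label

EPSILON = 0.1  # publicly agreed fairness threshold (an assumption)

# The public statement a ZKP would attest to, with the inputs kept secret:
# "the approval-rate gap across groups is below EPSILON."
assert approval_rate_gap(decisions, groups) < EPSILON
print("fairness statement holds")
```

The regulator would see only the proof that the assertion passed, never the underlying customer records or model weights.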

The ideal ML model? One that does not revise history or treat people differently based on context. AI must comply with anti-discrimination laws, such as the Civil Rights Act of 1964. The challenge lies in how to embed this into AI and make it verifiable.

ZKP provides a technical means to ensure this compliance.

AI is Biased (But It Doesn’t Have to Be)

When dealing with machine learning, any claim about fairness must preserve the confidentiality of the underlying ML model and training data. It must protect intellectual property and user privacy while giving users enough assurance that the model does not discriminate against anyone.

This is not a simple task. ZKP offers a verifiable solution.

ZKML (zero-knowledge machine learning) is how we leverage zero-knowledge proofs to verify whether an ML model is as fair as it claims to be. It combines zero-knowledge cryptography and machine learning to create systems that can verify properties of an AI model without exposing the underlying model or data, including whether the model treats everyone fairly.
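ZK proof systems constrain arithmetic over finite fields, so before a model's inference can be proven it must first be arithmetized: floating-point computation is converted to fixed-point integers. Here is a minimal sketch of that step for a linear model; the scale-and-round scheme and the SCALE value are illustrative choices, not a specific framework's method.

```python
import numpy as np

SCALE = 2**16  # fixed-point scale factor; an illustrative choice

def quantize(a) -> np.ndarray:
    """Map floats to integers so the computation fits a finite-field circuit."""
    return np.round(np.asarray(a) * SCALE).astype(np.int64)

# Floating-point model: score = w . x + b
w = np.array([0.8, -1.2, 0.3])
b = 0.05
x = np.array([1.0, 0.5, 2.0])
float_score = w @ x + b

# Integer version of the same inference; this is the form a ZK circuit
# would actually constrain (integer products plus one rescale).
wq, xq, bq = quantize(w), quantize(x), int(quantize(b))
int_score = (wq @ xq) // SCALE + bq

# The quantized result tracks the float result up to rounding error.
print(float_score, int_score / SCALE)
```

Scaling this idea from one dot product to millions of parameters is precisely where recent ZK frameworks have made their gains.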

Previously, using ZKP to prove AI fairness was very limited because proofs could only cover one stage of the ML pipeline. This allowed dishonest model providers to construct training datasets that satisfied fairness requirements even when the model itself did not. Generating fairness proofs also imposed impractical computational demands and long wait times.

In recent months, ZK frameworks have made it possible to scale ZKP to prove the end-to-end fairness of models with millions of parameters, and to do so securely.

The Trillion-Dollar Question: How Do We Measure AI Fairness?

Let’s analyze three of the most common definitions of group fairness: demographic parity, equal opportunity, and predictive parity.

Demographic parity means that the probability of a positive prediction is the same across groups (such as race or gender). The diversity, equity, and inclusion sector often uses it as a metric, aiming to mirror the demographic makeup of a company's workforce. It is not an ideal measure of fairness for ML models, because expecting identical outcomes for every group is unrealistic.

Equal opportunity is easy for most people to understand: equally qualified members of every group should have the same chance of a positive outcome. It does not optimize outcomes; it merely ensures that each group has the same opportunity to obtain a job or a mortgage.

Similarly, predictive parity measures whether an ML model's predictions are equally accurate across groups, so that no one is penalized simply for belonging to a particular group.
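These three definitions reduce to simple conditional rates over a model's predictions: demographic parity compares the positive-prediction rate across groups, equal opportunity compares true-positive rates, and predictive parity compares precision. A minimal sketch using standard formulations and synthetic data (the arrays are illustrative, not from the study):

```python
import numpy as np

def fairness_rates(y_true, y_pred, group):
    """Per-group rates behind the three group-fairness definitions."""
    out = {}
    for g in np.unique(group):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        out[g] = {
            "positive_rate": yp.mean(),                # demographic parity
            "true_positive_rate": yp[yt == 1].mean(),  # equal opportunity
            "precision": yt[yp == 1].mean(),           # predictive parity
        }
    return out

# Synthetic labels, model predictions, and a protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

for g, rates in fairness_rates(y_true, y_pred, group).items():
    print(f"group {g}: {rates}")
```

An audit, zero-knowledge or otherwise, would then compare each rate across groups and flag any gap above an agreed threshold.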

In both of the latter cases, the ML model does not tip the scales for fairness reasons; it only ensures that no group is discriminated against in any way. That is an eminently reasonable fix.

Regardless, fairness is becoming the standard.

In the past year, the U.S. and other governments have issued statements and directives on AI fairness and on protecting the public from ML bias. Under the new U.S. administration, AI fairness may take a different approach, with the emphasis shifting from equity back toward equal opportunity.

As the political landscape shifts, the definition of fairness in AI shifts with it, moving between equity-focused and opportunity-focused paradigms. We welcome ML models that treat everyone fairly without favoring any group, and zero-knowledge proofs can serve as a tamper-proof way to verify that ML models achieve this without leaking private data.

While ZKP has faced scalability challenges over the years, the technology is finally practical enough for mainstream use cases. We can use ZKP to verify the integrity of training data, protect privacy, and ensure that the models we use are what they claim to be.

As machine learning models become more intertwined with our daily lives, with job opportunities, college admissions, and mortgages increasingly depending on them, we may gain a bit more assurance that AI treats us fairly. Whether we can ever agree on a single definition of fairness, however, is an entirely different question.


Related: The Optimism in Cryptocurrency Is Not Just Hype, but a Structural Feature

This article is for general informational purposes only and should not be considered legal or investment advice. The views, thoughts, and opinions expressed in the article are solely those of the author and do not necessarily reflect or represent the views and opinions of Cointelegraph.

