Those of us paying attention to AI should take a look at the Brevis Vera upgrade: recently, lobsters have become popular.

BITWU.ETH
3 hours ago


Recently, lobster has become famous, and intelligent agents are showing us the future. They make us realize that AI may truly become ubiquitous in our lives: helping us complete workflows and even shaping entertainment and daily experiences, with AI's presence everywhere.

This raises some questions, the most direct being: when we now see a picture or a video, we may be unable to tell whether it is real or AI-generated. It's genuinely hard to distinguish!

This has given rise to a term: deepfake.

What does deepfake mean?

Let me explain: "deepfake" is a portmanteau of "deep learning" and "fake," referring specifically to AI-based techniques for synthesizing human images and video.

Deepfakes have already become commonplace, so in the long run the internet will enter a very awkward state: everyone will start to question all content, and much of it will be doubted. If this continues, things cannot stay normal.

I think this is a major problem of the AI era: the rise of fake content is frightening, but what is even scarier? It's that even real content starts to lose trust.

This is also why I think Brevis Vera, just launched by Brevis @brevis_zk, is worth a closer look. What it accomplishes is turning the question we can't judge by eye, whether something is AI-generated or genuinely real, from a subjective judgment into something verifiable.

In simple terms, it lets real content prove, from the source, that it is genuine. How does it achieve that?

Brevis Vera first uses C2PA to attach a "source signature" to the original content at the moment of capture, proving it indeed comes from a real device.

The problem is that once the content is cropped, color-graded, watermarked, or compressed, the original signature becomes invalid and the trust chain breaks.
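To see why any edit breaks the original signature, here is a minimal sketch (not the actual C2PA mechanism; the key name and content bytes are made up for illustration). A signature binds to the exact bytes, so even a tiny modification invalidates it:

```python
# Illustrative sketch: a signature over the original bytes stops
# verifying after any edit (crop, watermark, re-encode, etc.).
# DEVICE_KEY is a hypothetical stand-in for a camera's signing key.
import hashlib
import hmac

DEVICE_KEY = b"demo-device-key"

def sign(content: bytes) -> bytes:
    # HMAC used here as a simple stand-in for a device signature
    return hmac.new(DEVICE_KEY, content, hashlib.sha256).digest()

def verify(content: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(content), sig)

original = b"raw photo bytes from the camera sensor"
sig = sign(original)

assert verify(original, sig)           # untouched content verifies
edited = original + b" + watermark"    # any edit changes the bytes
assert not verify(edited, sig)         # the signature no longer matches
```

This is exactly the gap Vera targets: the edited version is legitimate, but a naive signature check can no longer tell.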

Vera's approach is to wrap these subsequent editing steps in ZK proofs, proving that the final published version really derives from that signed original content.

At the same time, it can prove that only permitted edits were applied along the way, with nothing else sneaked in. Verifiers don't need to see the original material or the specific edit details; they can still confirm that the content is "authentically sourced, with a verifiable process."
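The idea of an edit chain anchored to a signed original can be sketched with plain hashes (a deliberate simplification: real ZK proofs would also hide the operation details and prove each transformation was applied correctly, which hashes alone cannot do; the allow-list and record format here are hypothetical):

```python
# Simplified edit-chain sketch: each record commits to the previous
# content hash, the operation name, and the resulting content hash.
# A verifier checks the chain from the signed origin hash to the
# published version using only hashes, never the original pixels.
import hashlib

ALLOWED_OPS = {"crop", "color_grade", "watermark", "compress"}

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_record(prev_hash: str, op: str, new_content: bytes) -> dict:
    return {"prev": prev_hash, "op": op, "new": h(new_content)}

def verify_chain(signed_origin_hash: str, records: list, final_hash: str) -> bool:
    cur = signed_origin_hash
    for r in records:
        # reject unknown operations or a broken link in the chain
        if r["op"] not in ALLOWED_OPS or r["prev"] != cur:
            return False
        cur = r["new"]
    return cur == final_hash

original = b"signed original frame"
cropped = b"cropped frame"
final = b"compressed frame"

chain = [make_record(h(original), "crop", cropped),
         make_record(h(cropped), "compress", final)]
assert verify_chain(h(original), chain, h(final))

# a disallowed edit breaks verification
bad = [make_record(h(original), "face_swap", cropped)]
assert not verify_chain(h(original), bad, h(cropped))
```

The design point: the verifier trusts the chain only because every link is bound to the previous one and the ops are constrained. A ZK version replaces the visible records with proofs, keeping the same guarantee while revealing even less.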

Looking ahead, products like this may be the prototype of content-trust infrastructure for the AI era.


Disclaimer: This article represents only the author's personal views and does not reflect the position or views of this platform. It is for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please email proof of rights and identity to support@aicoin.com, and relevant platform staff will review it.
