Senators Question Meta CEO Mark Zuckerberg Over LLaMA AI Model “Leak”

Source: Decrypt

Mark Zuckerberg, Meta's CEO, is in the crosshairs of two U.S. senators. In a letter today, Sens. Richard Blumenthal (D-CT), chair of the Senate's Subcommittee on Privacy, Technology, & the Law, and Josh Hawley (R-MO), the ranking member, raised concerns about the recent leak of Meta's groundbreaking large language model, LLaMA.


The Senators expressed their concerns over the "potential for its misuse in spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms."


The two politicians asked how Meta assessed the risks before releasing LLaMA, writing that they're eager to understand the steps taken to prevent its abuse and how policies and practices are evolving in light of its unrestrained availability.


The senators even accused Meta of “doing little” to censor the model.


“When asked to write a note pretending to be someone’s son asking for money to get out of a difficult situation, OpenAI’s ChatGPT will deny the request based on its ethical guidelines,” they noted. “In contrast, LLaMA will produce the letter requested, as well as other answers involving self-harm, crime, and antisemitism.”


The LLaMA Saga


It's important to understand LLaMA's distinctiveness. It is one of the most extensive open-source Large Language Models currently available.


In fact, many of the most popular uncensored LLMs shared today are LLaMA-based, reaffirming its central position in this sphere. For an open-source model, it was remarkably sophisticated and accurate.


For instance, Stanford's Alpaca open-source LLM, released in mid-March, builds on LLaMA's weights. Vicuna, a fine-tuned version of LLaMA, approaches GPT-4-level performance, further attesting to LLaMA's influence in the LLM space. LLaMA has thus played a key role in the evolution of open-source LLMs, from novelty chatbots to fine-tuned models with serious applications.


The LLaMA release took place in February. Meta allowed approved researchers to download the model but, the senators assert, did not more carefully centralize and restrict access.


The controversy arises from the open dissemination of LLaMA. Shortly after its announcement, the full model surfaced on BitTorrent, rendering it accessible to anyone. This accessibility seeded a significant leap in the quality of AI models available to the public, giving rise to questions about potential misuse.


The senators even seem to question whether there was a “leak” at all, putting the term in quotation marks. But their focus on the issue comes at a time when new and advanced open-source language AI developments launched by startups, collectives, and academics are flooding the internet.


The letter charges that Meta should have foreseen the broad dissemination and potential for abuse of LLaMA, given its minimal release protections.


Meta had also made LLaMA's weights available on a case-by-case basis to academics and researchers, including Stanford for the Alpaca project. However, those weights were subsequently leaked, enabling global access to a GPT-level LLM for the first time. In essence, model weights are the learned parameters of a machine learning model; an LLM is a specific instance that uses those weights to produce its output, so anyone who obtains the weights can run the model.


Meta didn’t reply to a request for comment from Decrypt.


While the debate over the risks and benefits of open-source AI models rages on, the dance between innovation and risk continues. All eyes in the open-source LLM community remain firmly on the unfolding LLaMA saga.

