Senator Lummis's RISE Act is "timely and necessary" but lacks details.

Civil liability law is rarely a hot topic of conversation, but it can have a significant impact on how emerging technologies such as artificial intelligence develop.

If liability rules are poorly designed, they can create barriers to future innovation by exposing entrepreneurs (in this case, AI developers) to unnecessary legal risk. U.S. Senator Cynthia Lummis shares this view; last week she introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025.

The bill would shield AI developers from civil lawsuits so that doctors, lawyers, engineers, and other professionals can "understand the capabilities and limitations of AI before using it."

According to sources contacted by Cointelegraph, initial reactions to the RISE Act have been largely positive, although some critics have pointed to its limited scope and lack of transparency standards, and have questioned whether AI developers should be given liability protection at all.

Most observers view the RISE Act as a work in progress rather than a finished product.

According to Hamid Ekbia, a professor at Syracuse University’s Maxwell School of Citizenship and Public Affairs, Lummis's bill is "timely and necessary." (Lummis calls it the "first liability reform legislation for professional-grade AI" in the U.S.)

However, Ekbia told Cointelegraph that the bill is overly favorable to AI developers. The RISE Act requires developers to publicly disclose model specifications so that professionals can make informed decisions about the AI tools they use, but in Ekbia's view that disclosure requirement, on its own, does not go far enough.
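The bill does not prescribe any particular disclosure format, but to make the idea concrete, the following is a minimal sketch of what a machine-readable model specification could look like. Every field name and value here is hypothetical and illustrative; none of it comes from the bill's text.

```python
# Hypothetical sketch only: the RISE Act does not define a schema for
# "model specifications." All field names and values are illustrative.
import json

model_spec = {
    "model_name": "ExampleMed-LLM",  # hypothetical model
    "version": "1.2.0",
    "intended_use": "Decision support for licensed radiologists",
    "known_limitations": [
        "Not validated on pediatric imaging",
        "Accuracy degrades on low-resolution scans",
    ],
    "evaluation_summary": {
        "test_set": "internal chest X-ray benchmark (hypothetical)",
        "sensitivity": 0.91,
        "specificity": 0.88,
    },
    # The kind of disclosure critics argue should be mandatory: the
    # instructions and values the developer builds into the system.
    "system_instructions_published": True,
}

# A professional could review a document like this before relying on the tool.
print(json.dumps(model_spec, indent=2))
```

In spirit, this is close to the voluntary "model cards" some AI developers already publish; the open question the bill raises is how detailed such disclosures must be to earn the liability safe harbor.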

Unsurprisingly, some were quick to see Lummis's bill as a "gift" to AI companies. Democratic Underground, a self-described "left-leaning political community," wrote in one of its forums: "AI companies do not want to be sued for the failures of their tools, and if this bill passes, that will be achieved."

Not everyone agrees with this view. Felix Shipkevich, head of Shipkevich Attorneys at Law, told Cointelegraph, "I wouldn't call this bill a 'gift' to AI companies."

Shipkevich explained that the exemptions proposed in the RISE Act appear designed to protect developers from strict liability for the unpredictable behavior of large language models, especially where there is no negligence or intent to cause harm. From a legal perspective, he said, that is a reasonable approach.

The proposed legislation is fairly narrow in scope, focusing primarily on scenarios in which professionals use AI tools while working with clients or patients. A financial advisor might use an AI tool to develop an investment strategy for a client, for example, or a radiologist might use AI software to help interpret an X-ray.

The RISE Act does not really address situations where there is no professional intermediary between AI developers and end-users, such as when chatbots are used as digital companions for minors.

A recent civil liability case in Florida involved a teenager who died by suicide after months of interacting with an AI chatbot. The teenager's family alleged that the software was not designed to be reasonably safe for minors. "Who should be held responsible for the loss of life?" Ekbia asked. Cases like this are not addressed in the proposed Senate legislation.

Ryan Abbott, a professor of law and health sciences at the University of Surrey, told Cointelegraph, "There needs to be clear, unified standards so that users, developers, and all stakeholders understand the rules and legal obligations."

However, the complexity, opacity, and autonomy of AI technology can introduce new kinds of potential harm, which makes the rules difficult to write. Abbott, who holds both medical and law degrees, believes civil liability will be especially challenging in the medical field.

For instance, doctors have historically outperformed AI software at medical diagnosis, but recent evidence suggests that in certain areas of medical practice, human-in-the-loop collaboration actually produces "worse outcomes" than letting the AI work on its own. "This raises various interesting liability questions," Abbott said.

If doctors are no longer involved, who will pay compensation when serious medical errors occur? Will malpractice insurance cover it? Probably not.

The AI Futures Project, a nonprofit research organization, has tentatively supported the bill (it was consulted during the drafting process). However, its executive director, Daniel Kokotajlo, said the transparency disclosures required of AI developers are insufficient.

"The public has a right to know the goals, values, agendas, biases, instructions, etc., that companies are trying to instill in powerful AI systems," Kokotajlo said, noting that the bill does not require such transparency, making it inadequate.

Moreover, "companies can always choose to accept liability rather than remain transparent, so whenever a company wants to do something the public or regulators dislike, they can simply opt out," Kokotajlo added.

How does the RISE Act compare with the liability provisions of the EU's AI Act of 2023, the first comprehensive AI regulation from a major regulatory body?

The EU's stance on AI liability has been evolving. The EU AI Liability Directive was initially proposed in 2022 but was reportedly withdrawn in February 2025 due to lobbying from the AI industry.

Nevertheless, EU law typically adopts a human rights-based framework. As noted in a recent article in the UCLA Law Review, a rights-based approach "emphasizes the empowerment of individuals," particularly end-users such as patients, consumers, or clients.

In contrast, the risk-based approach adopted by the Lummis bill relies on processes, documentation, and assessment tools. For example, it focuses more on bias detection and mitigation rather than providing specific rights to affected individuals.

When Cointelegraph asked Kokotajlo whether a "risk-based" or "rights-based" approach is more appropriate for civil liability in the U.S., he responded, "I think the focus should be on risk and on those who create and deploy the technology."

Shipkevich added that the EU generally takes a more proactive approach on these issues. "Their laws require AI developers to demonstrate compliance with safety and transparency rules in advance."

The Lummis bill will likely need some modification before it becomes law, if it passes at all.

Shipkevich said, "As long as the proposed legislation is viewed as a starting point, I have a positive outlook on the RISE Act. After all, it is reasonable to provide some protection for developers who are not negligent and cannot control how their models are used downstream."

Justin Bullock, Vice President of Policy at Americans for Responsible Innovation (ARI), said, "The RISE Act proposes some strong ideas, including federal transparency guidance, a limited-scope safe harbor, and clear rules regarding the liability of professional AI adopters," although ARI has not formally endorsed the legislation.

However, Bullock also expressed concerns about transparency and disclosure, specifically about ensuring that the required transparency assessments are actually effective.

Overall, however, Bullock stated that the Lummis bill "is a constructive first step in the discussion of what federal AI transparency requirements should look like."

If the legislation passes and is signed into law, it will take effect on December 1, 2025.

Related: Stablecoin bill submitted to the House, Senate turns to digital asset market structure discussions

