Anthropic proposes transparency framework for frontier AI models 

Artificial intelligence (AI) startup Anthropic on Monday unveiled a "targeted" framework proposing a series of transparency rules for the development of frontier AI models.

The framework seeks to establish "clear disclosure requirements for safety practices" while remaining "lightweight and flexible," the company explained in a news release.

"AI is advancing quickly," it wrote. "While industry, governments, academia, and others work to develop agreed-upon safety standards and comprehensive evaluation methods, a process that could take months to years, we need interim steps to ensure that very powerful AI is developed securely, responsibly, and transparently."

Anthropic's proposed rules would apply only to the largest developers of the most advanced frontier AI models.

Those developers would be required to create and publicly release a secure development framework detailing how they assess and mitigate unreasonable risks. They would also be required to publish a system card summarizing their testing and evaluation processes.

"Transparency requirements for secure development frameworks and system cards could help provide policymakers the evidence they need to determine whether further regulation is warranted, as well as provide the public with important information about this powerful new technology," the company said.

The AI firm's proposed framework follows the defeat last week of a provision in President Trump's tax and spending bill that would have barred states from regulating AI for 10 years.

Anthropic CEO Dario Amodei came out against the measure last month, calling it "far too blunt an instrument" to mitigate the risks of the rapidly developing technology. The AI moratorium was ultimately stripped from the reconciliation bill before it passed the Senate.

The company's framework drew praise from the AI advocacy group Americans for Responsible Innovation (ARI), which credited Anthropic with moving the debate forward "from whether we should have AI rules to what those rules should be."

"We have heard lots of CEOs say they want regulation, only to shoot down anything specific that gets proposed, so it is good to see a concrete plan coming from industry," ARI Executive Director Eric Gastfriend said in a statement.

"Anthropic's framework lays out some of the basic transparency requirements we need, like disclosing plans for risk mitigation and holding developers accountable to those plans," he continued. "Hopefully it brings other labs to the table in the conversation over what AI regulation should look like."
