
Biden administration takes first step toward writing key AI standards

© Reuters. Words reading "Artificial intelligence AI", a miniature of a robot and a toy hand are pictured in this illustration taken December 14, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

By David Shepardson

WASHINGTON (Reuters) – The Biden administration said on Tuesday it was taking the first step toward writing key standards and guidance for the safe deployment of generative artificial intelligence and how to test and safeguard systems.

The Commerce Department’s National Institute of Standards and Technology (NIST) said it was seeking public input by Feb. 2 on conducting key testing crucial to ensuring the safety of AI systems.

Commerce Secretary Gina Raimondo said the effort was prompted by President Joe Biden’s October executive order on AI and aimed at developing “industry standards around AI safety, security, and trust that will enable America to continue leading the world in the responsible development and use of this rapidly evolving technology.”

The agency is developing guidelines for evaluating AI, facilitating the development of standards and providing testing environments for evaluating AI systems. The request seeks input from AI companies and the public on generative AI risk management and on reducing the risks of AI-generated misinformation.

Generative AI – which can create text, photos and videos in response to open-ended prompts – has in recent months spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.

Biden’s order directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.

NIST is working on guidelines for testing, including where so-called “red-teaming” would be most useful for AI risk assessment and management, and on setting best practices for doing so.

External red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to U.S. Cold War simulations where the adversary was termed the “red team.”

In August, the first-ever U.S. public assessment “red-teaming” event was held during a major cybersecurity conference, organized by AI Village, SeedAI and Humane Intelligence.

Thousands of participants tried to see if they “could make the systems produce undesirable outputs or otherwise fail, with the goal of better understanding the risks that these systems present,” the White House said.

The event “demonstrated how external red-teaming can be an effective tool to identify novel AI risks,” it added.
