Future Tech

Australia weighs mandatory restrictions on high-risk AI use

Tan KW
Publish date: Wed, 17 Jan 2024, 09:43 AM

Australia will consider introducing mandatory “guardrails” on the development of artificial intelligence (AI) as the government attempts to balance the productivity benefits of the new technology with potential fallout including the dissemination of disinformation.

Minister for Industry and Science Ed Husic announced plans on Wednesday to create a panel of experts to weigh options for restrictions on AI use and research. Other measures under consideration include a voluntary safety standard for low-risk applications and watermarks for AI-created content.

“We do need to be able to have those mandatory guardrails that say these are the red lines you cannot cross,” Husic said at a press conference in Canberra. In his interim response to the review, the minister said the government had “heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.”

The government said it was clear from the review that voluntary restrictions on the development of AI were insufficient, with potential inaccuracies, biases and a lack of transparency among the risks flagged during the consultation.

Husic said work will begin on the AI regulations “straight away” but declined to commit to the full suite of legislation being in place by year’s end.

Across the world, the technology is seen as a driver of productivity that can benefit societies while also posing risks, including potentially aiding the spread of disinformation. In Australia, adopting AI and automation has been estimated to add as much as A$600 billion a year to economic output by 2030.

Australia said its goal is to limit the dangers of high-risk applications of AI while allowing the development of useful, low-risk settings to “flourish,” according to the interim response.

While the government didn’t elaborate on what high-risk might include, Husic defined it as “anything that affects the safety of people’s lives, or someone’s future prospects in work or with the law” in an interview with local media published on Sunday.

The Australian government began a review into the “safe and responsible” use of AI in June 2023, and decided to extend the response period after receiving more than 500 submissions from interested parties.

Australia was one of 27 countries that signed the Bletchley Declaration in the UK in November, following the international AI Safety Summit, which committed to a global collaboration on AI testing.

It’s part of a growing push by countries and regions around the world, particularly the EU, US, China and the UK, to move quickly on drawing up AI regulation. But those approaches are only a starting point and will face the test of legal enforcement from this year onwards.

 - Bloomberg
