High-Risk AI Faces Potential Ban in Australia
In response to global concerns surrounding the rapid development of artificial intelligence (AI), the Australian government has launched an urgent eight-week consultation to determine whether any "high-risk" AI applications should be prohibited. This initiative follows similar actions taken in the United States, the European Union, and China to understand and potentially mitigate AI-related risks.
Industry and Science Minister Ed Husic announced on June 1 the release of two documents: a discussion paper titled "Safe and Responsible AI in Australia" and a report on generative AI from the National Science and Technology Council. The government's consultation will run until July 26.
The consultation seeks feedback on how to support the "safe and responsible use of AI," exploring whether voluntary ethical frameworks, specific regulations, or a combination of both approaches should be employed. A key question is whether any high-risk AI applications should be banned outright, and how such tools would be identified.
The discussion paper includes an example risk matrix for AI models, categorising AI in self-driving cars as "high risk," while a generative AI tool used for tasks such as creating medical patient records is deemed "medium risk."
The paper acknowledges both the "positive" applications of AI in medical, engineering, and legal industries and its "harmful" uses, such as deepfake tools, fake news creation, and cases of AI bots encouraging self-harm. Issues such as AI model bias and "hallucinations" — nonsensical or false information produced by AIs — are also addressed.
Citing "low levels of public trust," the discussion paper suggests that AI adoption in Australia is currently "relatively low." It points to AI regulation in other jurisdictions and Italy's temporary ban on ChatGPT as examples.
The National Science and Technology Council report suggests that while Australia possesses some strengths in AI capabilities, specifically in robotics and computer vision, its "core fundamental capacity in [large language models] and related areas is relatively weak." The report also mentions the risk posed by the concentration of generative AI resources within a few large, predominantly US-based tech companies.