According to a recent report by PwC, AI is “the biggest commercial opportunity in today’s fast changing economy”. Last year, AI contributed USD 2 trillion to global GDP; by 2030, that contribution could grow to as much as USD 15.7 trillion.
Given the surge in AI patents over the last five years reported by the World Intellectual Property Organization (WIPO), we can expect a significant number of new AI-based products, applications and techniques that will alter our daily lives and reshape future human-machine interactions.
This is an incredibly complex and rapidly evolving area that needs stewardship not only to effectively navigate and facilitate developments, but also to make sure risks are managed with the appropriate regulations and measures. So how do we harness the potential of AI systems while making sure they do not exacerbate existing inequalities and biases, or even create new ones?
Establishing an ‘Office for AI’ at the federal government level could help assess the depth and breadth of AI, overseeing the transition to, and management of, AI throughout society with a specific focus on leadership, governance, strategy and regulation. There is precedent for creating new government offices as society and the world evolve: Australia didn’t have a Minister for the Environment until 1971, and the Office of National Intelligence was not established until 1978.
Given the size and diversity of the issues, a federal-level portfolio and an associated independent office (like the Office of Future Transport Technology) with complementary state-level support would help address some immediate key safety, human rights and economic issues.
Such an approach to the governance of AI is critical to ensuring these new technologies contribute positively to the nation's prosperity rather than deepening the inequalities and biases described above.
As private and public sectors experiment with AI, they are also wrestling with new ethical and legal questions.
For example, while the famous ‘trolley problem’ refers to a highly artificial and improbable scenario, it is likely that we will need to make utilitarian ethical decisions about rail corridor intrusions. Through advanced analytics, AI systems may be better able to spot people on train tracks, such as smartphone zombies, vandals or even people attempting suicide. These systems will then have to weigh the train’s ability to stop against the injuries passengers would suffer from such sudden braking.
Another type of inequality could emerge from the impact of autonomous vehicles. Since human drivers have slower reaction times, they are likely to rear-end decelerating AI vehicles more often – an effect already observed with Waymo’s vehicles. More low-speed accidents and rear-end collisions may translate into higher insurance premiums, and thereby more uninsured drivers of lower socio-economic status, worsening social inequalities.
An Office for AI would also help identify and manage the key risks in areas such as privacy, security and human rights that emerge because machine learning and big data are enabling companies to make much stronger and more accurate inferences about people. US retailer Target figured out a teenager was pregnant before her family did and sent her advertising for pregnancy products. It is also already possible, in some cases, to use powerful algorithms to reverse engineer (de-anonymise) anonymised data and identify individuals.
Even as AI tools enable organisations to become more intrusive, they may also perpetuate unconscious biases. For example, Amazon discovered that its recruitment engine was not gender neutral and penalised female candidates. An Office for AI could help companies develop AI ethics policies that can stand up to public scrutiny.
Another question is who owns the big data collected by autonomous vehicles while driving. If the right level of proprietary data is not shared with road authorities to enable healthy competition, it could entrench the monopoly positions of autonomous vehicle manufacturers and lead to anti-competitive behaviour.
Progress and adoption of AI governance across the world is varied; some countries have been developing principles for AI while others are already drafting laws and regulations.
The UAE appointed a Minister for AI in 2017 with the aim of becoming a world leader in AI by 2031. The portfolio spans a broad spectrum from education to technology support and responsible use, with initial pilot projects now being funded. AI is expected to contribute USD 26 billion to the UAE’s GDP.
Establishing an Office for AI in Australia would provide much-needed guidance in steering broader government policy affected by AI, and help regulate adoption, transition across industry, ethical use of AI and risk management.
About the author:
Mike Erskine is Executive Advisor at GHD Advisory, with more than 30 years’ experience in supporting businesses to integrate risk management strategies and improve business performance. Passionate about societal risk, future readiness, autonomous vehicles and risk processes for AI, Mike dedicates a significant amount of time researching, testing and presenting on these topics.
Acknowledgement: This article was first published by The Mandarin, November 2019.