Australia: Brave new world or the end of it? Regulating artificial intelligence (AI) begins with understanding the real risks

"AI is a fundamental existential risk for human civilization." Elon Musk (2017)1

When Tesla's CEO, Elon Musk, addressed the US National Governors Association (NGA) summer meeting, it was his comments on AI that drew intense media scrutiny. AI was described as a fundamental risk to the existence of civilisation; the "scariest problem". Musk suggested robots will ultimately be able to do "everything, bar nothing" and do it "better than us".

The concern that he and others have about AI isn't about programs that might, for example, better crunch or compare data or operate machinery or devices. The concern relates to what Musk refers to as deep intelligence. Deep intelligence is a difficult concept to grasp. Essentially, it refers to a point where AI is more intelligent than the smartest person on earth and involves self-learning capability in an unstructured or unsupervised manner. Musk has described it as a situation where humanity is being visited by a super-intelligent alien, only the AI is the super-intelligent alien.

An AI with deep intelligence will have access to enormous amounts of data in real time. It will be able to manipulate that data and possibly even falsify or distort data, potentially at incredible speeds. Most importantly, it will have the capacity to manipulate people and institutions.

Musk illustrated his concern to the governors with a hypothetical Wag The Dog scenario involving an AI tasked with maximising the value of portfolio stocks. One way the AI might achieve this goal is to go long on defence stocks and short on consumer stocks and then, through misinformation, encourage sabre-rattling or even a war.

With deep intelligence, the outcomes may not always be foreseeable and have the potential to be far-reaching, especially where the AI is wholly or partly autonomous, self-learning or otherwise uses an intelligence construct that is capable of dynamic and potentially unpredictable self-development and growth.

Of course, the AI genie can't and shouldn't be put back in the bottle: the potential benefits to humanity are too enormous and, in any event, global research is too widely dispersed. But, while there are potential benefits, there are also risks associated with AI.

To manage the risks, society must appreciate those risks and take proportionate action in advance of developing deep intelligence. As the reasoning goes, once bad things happen as a result of deep intelligence it may already be too late.

Governments are already starting to take action. Earlier this year the European Parliament passed a detailed and relatively comprehensive resolution relating to robotics and AI, with recommendations to progress a framework for robotics regulation.2 Only recently, the UK House of Lords Select Committee on Artificial Intelligence published its call for evidence and submissions in respect of a wide range of matters concerning AI, including 'pragmatic solutions to the issues'.

So, is talk of civilisation-ending AI irresponsible, alarmist and counterproductive? As with any major technological breakthrough, there are two sides to the coin. There are at least two opposing views on the risk of civilisation-ending AI, as personified by the recent exchange between Elon Musk and Mark Zuckerberg on the subject.

However, the simple reality is that it may be impossible to reap the potential benefits without incurring potential risks. Surely we must ask ourselves: what is the best way to maximise the benefits, and the speed at which they might be realised, while also taking a precautionary approach that minimises risks and avoids destructive scenarios?

Regulation may be one tool that assists, but the right balance would need to be struck to obtain the benefits of regulation while minimising any potential disadvantages. In any event, while it may help us better understand and reduce risks, regulation won't magically remove the threat of civilisation-ending AI.


The development of AI is global. Any hypothetical threat in a digital world would not respect national borders. Ultimately, if regulation were seen as a solution to mitigate the risk of civilisation-ending AI, that regulation would need to occur at a global level. Realistically, there's little prospect of global regulation any time soon. The global track record where there isn't an agreed clear and present danger is discouraging.


Even if regulation were implemented, it may not eradicate the particular conduct or outcomes targeted.

After all, AI research essentially involves work in a ubiquitous digital medium, typically without resource or infrastructure requirements that are tied to any particular geography. AI research may therefore be largely unconstrained in terms of where it can be conducted. As a result, there is a risk that some jurisdictions may be more relaxed than others in implementing any regulation, leading to regulatory failure.

Also, realistically, some countries and corporations may not always play by the rules when it suits their purposes. Lines of AI research inconsistent with any regulatory approach might be pursued even in jurisdictions where a regulatory approach is implemented. Regulation cannot remove all risk and it would be irrational to think otherwise.


"They don't have intelligence. They have what I call 'thintelligence'. They see the immediate situation. They think narrowly and they call it 'being focused'. They don't see the surround. They don't see the consequences." Michael Crichton, Author

Faced with the potential for civilisation-ending AI, the most sensible outcome is to avoid developing that AI in the first place. This is not a call to stop AI research. Any such call is, by extension, a call to give up the potential benefits AI has to offer; and society sorely needs those benefits. What is needed, however, is a sensible approach to mitigate the risk while we pursue the benefits.

A prudent approach would involve:

  • ex-ante measures – to understand, guide, and implement an ethics- and values-based framework for AI research to mitigate the risk of civilisation-ending AI; and
  • ex-post measures – to understand, design and implement countermeasures that operate should the ex-ante measures fail to prevent the creation of civilisation-ending AI.

Elon Musk indicated to the NGA that in respect of regulation: "The first order of business would be to try to learn as much as possible, to understand the nature of the issues, to look closely at the progress being made and the remarkable achievements of artificial intelligence." This may involve assessing the state of AI research (potentially around the world) and its immediate, medium and longer term research objectives and trajectory.

To be clear, regulation could potentially be pursued by the private sector (through self-regulation) or by government (through imposed regulation). Neither is likely to prove truly adequate.

Private sector AI researchers and developers may have competing commercial priorities or take an opt in/opt out approach. Government, on the other hand, may have insufficient technical understanding or capabilities to achieve efficient and timely regulatory outcomes in the dynamic and complex environment of AI research.

It appears (especially given the lack of certainty as to precisely what is being regulated and how to do so) that any regulatory approach may need to draw on the regulatory strengths of both government and AI researchers and developers.


Little has been said about what form of regulation would, concretely, be appropriate to address the risk of civilisation-ending AI, or how it would operate.

Perhaps there are parallels that might be drawn with human medical research. The ethical structures that monitor and manage human medical experimentation are commonly understood.

While the risk of civilisation-ending AI and human medical research involve different issues, the ethical process governing human medical research might still be a helpful model to prompt policy discussion, or one that, with modification, might form the basis for a proportionate regulatory approach to understanding and managing the risk of civilisation-ending AI.

Such an approach might also help to subtly guide the path of research. Stephen Hawking, for instance, has called for a shift in research from undirected AI to beneficial AI. Essentially, this means not just developing an AI and letting it run (doing whatever it chooses) because it is, after all, smarter than us, but rather developing an AI directed to beneficial ends and, presumably, so far as possible, without inadvertently imposing our flaws on that AI.

An AI research ethics committee approach might focus more on 'should we?' rather than 'could we?'. It may also assist the AI industry with social and governmental transparency, which may improve public acceptance of AI (especially deep intelligence) and strengthen the responsibility and accountability of AI researchers.

Importantly, it could be a dynamic and flexible form of regulation, based on reason rather than fear. This would be less restrictive and intrusive than mandatory, one-size-fits-all requirements or a governmentally-driven Big Brother approach.

However, as with unsanctioned human medical research, there may need to be consequences for unsanctioned or non-compliant AI research and, in some circumstances, potential for investigations, audits or other accountability mechanisms.

"Everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last." Stephen Hawking

There are always risks. With the advent of nuclear technology, the world has not become a radioactive wasteland. The Large Hadron Collider is yet to suck the planet into a black hole and rock and roll has not (so far) destroyed civilisation as we know it. The key is to objectively identify and acknowledge potential risks (without sensationalising them) and then to take sensible steps to understand and address them.

Our power to develop AI and deep intelligence comes with responsibility. This involves realising the benefits for society, minimising risks and protecting society from disasters. The greater those benefits and risks, the greater that responsibility. As the American journalist and humourist Robert Quillen put it: "Progress always involves risk. You can't steal second and keep one foot on first base."


1 C-Span (2017) Elon Musk at National Governors Association 2017 Summer Meeting, at (viewed on 19 July 2017)

2 For our analysis on that resolution go to our article: Preparing for life with robots: How will they be regulated in Australia? The European Parliament resolution itself can be found at:

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

