Australia: Brave new world or the end of it? Regulating artificial intelligence (AI) begins with understanding the real risks

"AI is a fundamental existential risk for human civilization." Elon Musk (2017)1

When Tesla's CEO, Elon Musk, addressed the US National Governors Association (NGA) summer meeting, it was his comments on AI that drew intense media scrutiny. AI was described as a fundamental risk to the existence of civilisation; the "scariest problem". Musk suggested robots will ultimately be able to do "everything, bar nothing" and do it "better than us".

The concern that he and others have about AI isn't about programs that might, for example, better crunch or compare data, or operate machinery or devices. The concern relates to what Musk refers to as deep intelligence. Deep intelligence is a difficult concept to grasp. Essentially it refers to a point where AI is more intelligent than the smartest person on earth and involves self-learning capability in an unstructured or unsupervised manner. Musk has described it as being like a situation where humanity is visited by a super-intelligent alien – only the AI is the super-intelligent alien.

An AI with deep intelligence will have access to enormous amounts of data in real time. It will be able to manipulate that data and possibly even falsify or distort it – potentially at incredible speeds. Most importantly, it will have the capacity to manipulate people and institutions.

Musk illustrated his concern to the governors with a hypothetical Wag The Dog scenario involving an AI tasked with maximising the value of portfolio stocks. One way the AI might achieve this goal is to go long on defence stocks and short on consumer stocks and then, through misinformation, encourage sabre-rattling or even a war.

With deep intelligence, the outcomes may not always be foreseeable and may be far-reaching, especially where the AI is wholly or partly autonomous, self-learning or otherwise uses an intelligence construct that is capable of dynamic and potentially unpredictable self-development and growth.

Of course, AI can't and shouldn't be put back into the bottle – the potential benefits to humanity are too enormous and, in any event, global research is too widely dispersed. But, while there are potential benefits, there are also risks associated with AI.

To manage the risks, society must appreciate those risks and take proportionate action in advance of developing deep intelligence. As the reasoning goes, once bad things happen as a result of deep intelligence, it may already be too late.

Governments are already starting to take action. Earlier this year the European Parliament passed a detailed and relatively comprehensive resolution relating to robotics and AI, with recommendations to progress a framework for robotics regulation.2 More recently, the UK House of Lords Select Committee on Artificial Intelligence published its call for evidence and submissions in respect of a wide range of matters concerning AI, including 'pragmatic solutions to the issues'.

So, is talk of civilisation-ending AI irresponsible, alarmist and counterproductive? As with any major technological breakthrough, there are two sides to the coin. There are at least two opposing views on the risk of civilisation-ending AI, as personified by the recent exchange between Elon Musk and Mark Zuckerberg on the subject.

However, the simple reality is that it may be impossible to reap the potential benefits without accepting potential risks. Surely we must ask ourselves: what is the best way to maximise the benefits, and the speed at which they might be realised, while also taking a precautionary approach to minimise risks and avoid destructive scenarios?

Regulation may be one tool that assists, but the right balance would need to be struck to obtain the benefits of regulation while minimising any potential disadvantages. In any event, while it may help us better understand and reduce risks, regulation won't magically remove the threat of civilisation-ending AI.

AI IS GLOBAL

The development of AI is global. Any hypothetical threat in a digital world would not respect national borders. Ultimately, if regulation were seen as a solution to mitigate the risk of civilisation-ending AI, that regulation would need to occur at a global level. Realistically, there's little prospect of global regulation any time soon. The global track record where there isn't an agreed clear and present danger is discouraging.

EFFECTIVENESS OF REGULATION

Even if regulation were implemented, it may not eradicate the particular conduct or outcomes targeted.

After all, AI research essentially takes place in a ubiquitous digital medium, typically without geographically limited resource and infrastructure requirements. AI research may therefore be largely unconstrained in terms of where it can be conducted. As a result, there is a risk that some jurisdictions may be more relaxed than others in implementing any regulation, leading to regulatory failure.

Also, realistically, some countries and corporations may not always play by the rules when it suits their purposes. Lines of AI research inconsistent with any regulatory approach might be pursued even in jurisdictions where a regulatory approach is implemented. Regulation cannot remove all risk and it would be irrational to think otherwise.

REDUCING THE RISK OF 'THINTELLIGENCE'

"They don't have intelligence. They have what I call 'thintelligence'. They see the immediate situation. They think narrowly and they call it 'being focused'. They don't see the surround. They don't see the consequences." Michael Crichton, Author

Sensibly, faced with the potential for civilisation-ending AI, the best outcome is to avoid developing that AI in the first place. This is not a call to stop AI research. Any such call is, by extension, a call to give up the potential benefits AI has to offer; and society sorely needs those benefits. What is needed, however, is a sensible approach to mitigate the risk while we pursue the benefits.

A prudent approach would involve:

  • ex-ante measures – to understand, guide, and implement an ethics and values based framework for AI research to mitigate the risk of civilisation-ending AI; and
  • ex-post measures – to understand, design and implement countermeasures that operate should the ex-ante measures fail to prevent the creation of civilisation-ending AI.

Elon Musk indicated to the NGA that, in respect of regulation: "The first order of business would be to try to learn as much as possible, to understand the nature of the issues, to look closely at the progress being made and the remarkable achievements of artificial intelligence." This may involve assessing the state of AI research (potentially around the world) and its immediate, medium and longer term research objectives and trajectory.

To be clear, regulation could potentially be pursued by the private sector (through self-regulation) or by government (through imposed regulation). Neither is likely to prove truly adequate.

Private sector AI researchers and developers may have competing commercial priorities or take an opt in/opt out approach. Government, on the other hand, may have insufficient technical understanding or capabilities to achieve efficient and timely regulatory outcomes in the dynamic and complex environment of AI research.

It appears (especially given the lack of certainty as to precisely what is being regulated and how to do so) that any regulatory approach may need to draw on the regulatory strengths of both government and AI researchers and developers.

A CASE FOR AI ETHICS COMMITTEES?

Little has been said about what concrete form of regulation would be appropriate to address the risk of civilisation-ending AI, or how it would operate.

Perhaps there are parallels that might be drawn with human medical research. The ethical structures governing such research, which monitor and manage human medical experimentation, are well understood.

While the risk of civilisation-ending AI and human medical research involve different issues, the ethical process governing human medical research might still be a helpful model to prompt policy discussion, or one that, with modification, might form the basis for a proportionate regulatory approach to understanding and managing the risk of civilisation-ending AI.

Such an approach might also help subtly guide the path of research. Stephen Hawking, for instance, has called for a shift in research from undirected AI to beneficial AI: not simply developing an AI and letting it run (doing whatever it wants) because it is, after all, smarter than us, but rather developing an AI directed to beneficial ends and, so far as possible, without inadvertently imposing our flaws on that AI.

An AI research ethics committee approach might focus on 'should we?' rather than 'could we?'. It may also assist the AI industry with social and governmental transparency, which may improve public acceptance of AI (especially deep intelligence) and strengthen the responsibility and accountability of AI researchers.

Importantly, it could be a dynamic and flexible form of regulation, based on reason rather than fear. This would be less restrictive and intrusive than mandatory one-size-fits-all requirements or a government-driven Big Brother approach.

However, as with unsanctioned human medical research, there may need to be consequences for unsanctioned or non-compliant AI research and, in some circumstances, potential for investigations, audits or other accountability mechanisms.

"Everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last." Stephen Hawking

There are always risks. With the advent of nuclear technology, the world has not become a radioactive wasteland. The Large Hadron Collider is yet to suck the planet into a black hole and rock and roll has not (so far) destroyed civilisation as we know it. The key is to objectively identify and acknowledge potential risks (without sensationalising them) and then to take sensible steps to understand and address them.

Our power to develop AI and deep intelligence comes with responsibility. This involves realising the benefits for society, minimising risks and protecting society from disasters. The greater those benefits and risks, the greater that responsibility. As the American journalist and humourist Robert Quillen put it: "Progress always involves risk. You can't steal second and keep one foot on first base."

Footnotes

1 C-Span (2017) Elon Musk at National Governors Association 2017 Summer Meeting, at https://www.c-span.org/video/?431119-6/elon-musk-addresses-nga&start=1502 (viewed on 19 July 2017)

2 For our analysis on that resolution go to our article: Preparing for life with robots: How will they be regulated in Australia? The European Parliament resolution itself can be found at:
http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fTEXT%2bTA%2bP8-TA-2017-0051%2b0%2bDOC%2bXML%2bV0%2f%2fEN&language=EN

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
