Canada: Emerging Legal Issues In An AI-Driven World

Introduction

Artificial Intelligence ("AI") is the new marketplace reality. The increase in computing power, improved algorithms and the availability of massive amounts of data are transforming society. According to the International Data Corporation ("IDC"), the AI market is expected to hit $35.8 billion this year, which represents an increase of 44% since 2018.1 IDC has also projected global spending on AI to double by 2022, reaching $79.2 billion.2 In this article, we identify a number of emerging legal issues associated with the use of AI and offer some views on how the law might respond.

What is artificial intelligence?

AI describes the capacity of a computer to perform tasks commonly associated with intelligent beings.3 It includes the ability to review, discern meaning, generalize, learn from past experience, and find patterns and relations in order to respond dynamically to changing situations.4

In 2017, Accenture Research and Frontier Economics conducted research comparing the economic growth rates of 16 industries and projecting the impact of AI on global economic growth. The report concluded that AI has the potential to boost profitability by an average of 38% and to deliver an economic boost of US$14 trillion across 16 industries in 12 economies by 2035.5

The promise of AI is better decision-making and enhanced experiences. In their book Machine, Platform, Crowd, MIT professors Andrew McAfee and Erik Brynjolfsson write "[t]he evidence is overwhelming that, whenever the option is available, relying on data and algorithms alone usually leads to better decisions and forecasts than relying on the judgment of even experienced and "expert" humans."6 The fear is that AI in an unregulated environment will lead to a loss of human supervisory control and unfortunate outcomes.

The legal aspects of AI

Commentators have recognized that the proliferation of AI will raise new and important legal and ethical questions. Some have identified the need for AI ethicists to help navigate where this technological advance might take us.7

In October 2016, the British House of Commons published a report on Robotics and Artificial Intelligence, which highlighted certain ethical and legal issues including transparent decision-making, minimising bias, privacy and accountability.8 On December 18, 2018, the European Commission's High-Level Expert Group on Artificial Intelligence ("AI HLEG") released the first draft of the Draft Ethics Guidelines for Trustworthy AI.9 Pursuant to the guidelines, Trustworthy AI requires an ethical purpose and technical robustness:10

  1. Ethical purpose: Its development, deployment and use should respect fundamental rights and applicable regulation, as well as core principles and values, ensuring "an ethical purpose", and
  2. Technical robustness: It should be technically robust and reliable given that, even with good intentions, the use of AI can cause unintentional harm.11

In Canada, the Treasury Board Secretariat of Canada (the "Board") is looking at issues around the responsible use of AI in government programs and services.12 On March 2, 2019, the Board released a Directive on Automated Decision-Making, which takes effect on April 1, 2019, to ensure that AI-driven decision-making is compatible with core administrative law principles such as transparency, accountability, legality, and procedural fairness.13

Central to understanding the legal aspects of AI is the question of how the law will evolve in response to it. Will it be through the imposition of new laws and regulations, or through the time-honoured tradition of having our courts develop the law by applying existing principles to new scenarios precipitated by technological change?

AI has already been used and accepted in a number of US decisions. In Washington v Emanuel Fair, the defence in a criminal proceeding sought to exclude the results of an AI-based genotyping software program that analysed complex DNA mixtures, while at the same time asking that its source code be disclosed.14 The Court accepted the use of the software, noting that a number of other states had validated the use of the program without having access to its source code.15 In State v Loomis, the Wisconsin Supreme Court held that a trial judge's use of an algorithmic risk assessment software in sentencing did not violate the accused's due process rights, even though the methodology used to produce the assessment was disclosed neither to the accused nor to the court.16

In Canada, litigation involving AI is in its early stages. In 2018, the Globe and Mail reported that a lawsuit involving an AI system had been commenced in Quebec.17 Adam Basanta created a computer system that operates on its own and produces a series of randomly generated abstract pictures.18 Mr. Basanta was subsequently sued in Quebec Superior Court for trademark infringement over an image created by the system.19 Amel Chamandy, owner of Montreal's Galerie NuEdge, claimed that a single image from Mr. Basanta's project All We'd Ever Need Is One Another violated the copyright on her photographic work Your World Without Paper (2009) and the trademark she owns associated with her name.20

AI is also being utilized to render judicial decisions. In Argentina, AI is being used to assist district attorneys in writing decisions in less complex cases such as taxi licence disputes that presiding judges can either approve, reject or rewrite.21 Using the district attorneys' digital library of 2,000 rulings from 2016 to 2017, the AI program matches cases to the most relevant decisions in the database, which enables it to guess how the court will rule.22 Thus far, judges have approved all of the suggested rulings—33 in total.23

Privacy

The volume and variety of data collection will keep privacy at the forefront of the legal issues that AI users will face going forward. AI systems consume vast amounts of data, and the more data is used, the more questions arise. Who owns the data shared between AI developers and users? Can the data be sold? Should shared data be de-identified to address privacy concerns? Is the intended use of the data appropriately disclosed and compliant with legislation such as the Personal Information Protection and Electronic Documents Act ("PIPEDA")?

Governments are now updating their privacy legislation to respond to privacy concerns fueled by public outcry against massive data breaches and the unfettered use of data by large companies. Consumers have become increasingly concerned with the potential misuse of their personal information. In 2015, the European Commission conducted a survey across the 28 member states of the European Union which found that roughly seven out of 10 people were concerned about their information being used for a purpose different from the one for which it was collected.24

The EU and international regulators have taken an active interest in AI, not only recognizing its benefits but also being mindful of potential risks and unintended consequences.25 The European Parliament enacted the General Data Protection Regulation ("GDPR"), a comprehensive set of rules designed to keep the personal data of all EU citizens collected by any organization safe from unauthorized access or use.26 Under the GDPR, companies must be clear and concise about their collection and use of personal data, and indicate why the data is being collected and whether it will be used to create profiles of people's actions and habits. In other words, organizations must be transparent about the type of information they collect about consumers and how this information will be used.27 Critics contend that the GDPR could present an obstacle to developers looking to design more complex and sophisticated algorithms.28

Unlike the EU, US federal lawmakers have yet to establish regulations governing the use of personal information in the AI world.29 Sensing the inevitability of data regulation, some large American companies, like Apple, are encouraging the introduction of regulation in the United States.30 On January 18, 2019, Accenture released a report outlining a framework to assist US federal agencies in evaluating, deploying and monitoring AI systems.31

Canada has yet to adopt regulations comparable to the GDPR. However, the new federal mandatory data breach notification regulations that came into force on November 1, 2018, were drafted with a view to harmonizing with the requirements of the GDPR to the extent possible.32 The Breach of Security Safeguards Regulations under PIPEDA set out certain mandatory requirements applicable to organizations in the event of a data breach.33 PIPEDA defines a breach of security safeguards as "the loss of, unauthorized access to or unauthorized disclosure of personal information resulting from a breach of an organization's security safeguards."34 Should a breach of security safeguards occur, organizations are required to report the breach to the Office of the Privacy Commissioner of Canada, keep and maintain a record of every breach of safeguards involving personal information under their control, and provide those records to the Commissioner upon request.35 Organizations will need not only to evaluate their compliance with privacy legislation, but also to ensure that their data handling practices are sufficiently secure to prevent cybersecurity breaches.

Contracts

The inherent nature of AI may require individuals or entities contracting for AI services to seek out specific contractual protections. In the past, software performed exactly as it was programmed; machine learning, however, is not static but constantly evolving. As noted by McAfee and Brynjolfsson, "[m]achine learning systems get better as they get bigger, run on faster and more specialized hardware, gain access to more data, and contain improved algorithms."36 The more data algorithms consume, the better they become at spotting patterns.37

Parties might consider contractual provisions covenanting that the technology will operate as intended and that, if unwanted outcomes result, contractual remedies will follow. These additional provisions might include audit rights with respect to the algorithms, appropriate service levels, a determination of the ownership of improvements created by the AI, and indemnity provisions in the case of malfunction. AI will dictate a more creative approach to contracts, with drafters forced to anticipate where machine learning might lead.

Torts

Machine learning constantly evolves, making more complex decisions based on the data it operates on. While most outcomes are anticipated, there is the distinct possibility of an unanticipated or adverse outcome given the absence of human supervision. The automated and artificial nature of AI raises new considerations around the determination of liability. Tort law has traditionally been the mechanism used in the law to address changes in society, including technological advances. In the past, the courts have applied the established analytical framework of tort law and have applied those legal principles to the facts as they are presented before the court.

We start the tort analysis with the following questions: Who is responsible? Who should bear liability? In the case of AI, is it the programmer or developer? Is it the user? Or is it the technology itself? What changes might we see to the standard of care or the principles of negligent design? As the AI evolves and makes its own decision, should it be considered an agent of the developer and if so, is the developer vicariously liable for the decisions made by the AI that result in negligence?

The most common tort, negligence, focuses on whether a party owed a duty of care to another, whether the party breached the standard of care, and whether damages were caused by that breach. Reasonable foreseeability is a central concept in negligence. Specifically, the test is whether a reasonable person could predict or expect the general consequences of his or her conduct, without the benefit of hindsight. The further AI systems move away from classical algorithms and coding, the more they can display behaviours that are not just unforeseen by their creators but wholly unforeseeable. Where foreseeability is lacking, are we placed in a position where no one is liable for a result that may have a damaging effect on others? One would anticipate that our courts would respond to prevent such a result.

In a scenario where foreseeability is lacking, the law might replace a negligence analysis with one based on strict liability. The doctrine of strict liability, also known as the rule in Rylands v Fletcher, provides that a defendant will be held legally responsible even where neither an intentional nor a negligent act has been found and it is proven only that the defendant's act resulted in injury to the plaintiff.38

Should a negligence analysis remain, the standard of care requirements will need to be redefined in an AI context. Some of the following questions will be central to the court's consideration:

  1. Is the decision-making transparent so that the court can determine how the "black box" reached the outcome it did?
  2. What steps were taken to monitor outcomes arising from machine learning?
  3. Was the integrity and quality of the data appropriate for the purpose for which it was intended?
  4. Was the data used representative or does it promote bias and/or discrimination?
  5. Was the algorithm appropriately designed to guard against unintended outcomes?

One can envisage a growth industry in negligence actions against software development companies and programmers.

Product liability is another arm of tort law that may take on greater significance should an AI system prove defective. Under the common law, product liability focuses on negligent design, negligent manufacture and breach of the duty to warn. It generally addresses the liability of one or more parties involved in the manufacture, sale or distribution of a product.39 For this doctrine to apply, the AI system in question must qualify as a product, not a service.40 Ascertaining where a defect arose in the supply chain of an AI product may be difficult given the autonomous and evolving nature of machine learning and algorithms. Commentators have noted that product liability will become relevant with respect to issues arising from the use of autonomous vehicles, robots and other mobile AI-enabled systems.41

Bias/discrimination

Companies like Microsoft and Google have recognized that offering AI solutions that raise ethical, technological and legal challenges may expose them to reputational harm.42 The issues of bias and discrimination have become more prevalent as more companies and governmental entities turn to AI systems in their decision-making processes. For example, a 2016 investigation by ProPublica revealed that a number of US cities and states used an algorithm to assist with bail decisions that was twice as likely to falsely label black prisoners as being at high risk of re-offending as it was white prisoners.43

To mitigate built-in biases in collected data and in the decision-making process, a number of companies have developed bias-detection tools. Accenture developed a tool that enables companies to identify and eliminate gender, racial and ethnic bias in their AI software.44 IBM's OpenScale is an AI platform that provides the ability to explain how AI decisions are made in real time to ensure transparency and compliance, which may also have relevance to the definition of the standard of care.45 However, these solutions may not necessarily solve the problem. A senior researcher at Microsoft acknowledged that "[i]f we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases."46

Setting ethical parameters within which AI systems will operate is paramount to addressing the issue of bias. Regulating AI will not be an easy feat. Given that AI is constantly evolving, any ethical regulation concerning the use of AI must also continually evolve to remain relevant to the technology.

Conclusion

AI will continue to develop and to disrupt society in ways that we cannot yet imagine. It is challenging to keep pace with the speed at which AI systems are being deployed. One developer recently described it as "a sort of peanut butter you can spread" across multiple disciplines and industries.47 As the peanut butter is spread, organizations must prepare not only for the positive consequences but also for the unintended, and likely unfortunate, negative consequences such technology will bring. It is largely unknown how the law will react to this new reality, but anticipating what those impacts might be is a timely first step.

Footnotes

1. International Data Corporation, "Worldwide Spending on Artificial Intelligence Systems Will Grow to Nearly $35.8 Billion in 2019, According to New IDC Spending Guide" (11 March 2019), online: https://www.idc.com/getdoc.jsp?containerId=prUS44911419

2. Ibid.

3. B.J. Copeland, "Artificial intelligence" (17 August 2018), Encyclopedia Britannica, online: https://www.britannica.com/technology/artificial-intelligence

4. Ibid.

5. Mark Purdy & Paul Daugherty, "How AI boosts Industry Profits and Innovation" (2017), Accenture, online: https://www.accenture.com/ca-en/insight-ai-industry-growth

6. Andrew McAfee & Erik Brynjolfsson, Machine, Platform, Crowd: Harnessing Our Digital Future (New York: W.W. Norton & Company, 2017) at 34 [McAfee & Brynjolfsson].

7. John Murawski, "Need for AI Ethicists Becomes Clearer as Companies Admit Tech's Flaws" (1 March 2019), The Wall Street Journal, online: https://www.wsj.com/articles/need-for-ai-ethicists-becomes-clearer-as-companies-admit-techs-flaws-11551436200 [Murawski].

8. House of Commons Science and Technology Committee, "Robotics and artificial intelligence" (12 October 2016), online: https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf

9. European Commission, "Have your say: European expert group seeks feedback on draft ethics guidelines for trustworthy artificial intelligence" (18 December 2018), online: https://ec.europa.eu/digital-single-market/en/news/have-your-say-european-expert-group-seeks-feedback-draft-ethics-guidelines-trustworthy

10. High-Level Expert Group on Artificial Intelligence, "Draft Ethics Guidelines for Trustworthy AI: Working Document for Stakeholders' Consultation" (18 December 2018), European Commission, online: https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai. At page 17, robustness is defined as follows: "Trustworthy AI requires that algorithms are secure, reliable as well as robust enough to deal with errors or inconsistencies during the design, development, execution, deployment and use phase of the AI system, and to adequately cope with erroneous outcomes."

11. Ibid.

12. Government of Canada, "Responsible use of artificial Intelligence (AI)" (5 March 2019), online: https://www.canada.ca/en/government/system/digital-government/responsible-use-ai.html

13. Government of Canada, "Directive on Automated Decision-Making" (

14. Cybergenetics, "Seattle judge rules on TrueAllele admissibility and source code" (12 January 2017), online: https://www.cybgen.com/information/newsroom/2017/jan/Seattle-judge-rules-on-TrueAllele-admissibility-and-source-code.shtml

15. Ibid.

16. Harvard Law Review, "State v Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing" (2017) 130 Harv L Rev 1530, online: https://harvardlawreview.org/2017/03/state-v-loomis/

17. Chris Hannay, "Artist faces lawsuit over computer system that creates randomly generated images" (4 October 2018), The Globe and Mail, online: https://www.theglobeandmail.com/arts/art-and-architecture/article-artist-faces-lawsuit-over-computer-system-that-creates-randomly/

18. Ibid.

19. Ibid.

20. Ibid.

21. Patrick Gillespie, "When AI writes the Court Ruling" (29 October 2018), Bloomberg Businessweek.

22. Ibid.

23. Ibid.

24. Patrick Mäder, Dr. Christian B. Westermann & Dr. Karin Tremp, "Analytics in Insurance: Balancing Innovation and Customers' Trust" (February 2018), PWC at 13, online: https://www.pwc.ch/de/press-room/expert-articles/pwc_press_20180709_hsgtrendmonitor_maeder_westermann_tremp.pdf

25. Deloitte, "AI and risk management" (2018), at 1, online: https://www2.deloitte.com/content/dam/Deloitte/global/Documents/Financial-Services/deloitte-gx-ai-and-risk-management.pdf

26. Tech Pro Research, "EU General Data Protection Regulation (GDPR) policy" (February 2018), online: http://www.techproresearch.com/downloads/eu-general-data-protection-regulation-gdpr-policy/

27. Nitasha Tiku, "Europe's New Privacy Law will change the Web, and More" (19 March 2018), Wired, online: https://www.wired.com/story/europes-new-privacy-law-will-change-the-web-and-more/

28. Silla Brush, "EU's Data Privacy Law Places AI Use in Insurance Under Closer Scrutiny" (22 May 2018), The Insurance Journal, online: https://www.insurancejournal.com/news/international/2018/05/22/489995.htm

29. John Murawski, "U.S. Push for AI Supremacy Will Drive Demand for Accountability, Trust" (20 March 2019), Wall Street Journal, online: https://www.wsj.com/articles/u-s-push-for-ai-supremacy-will-drive-demand-for-accountability-trust-11553074200

30. Mike Allen and Ina Fried, "Apple CEO Tim Cook calls new regulations 'inevitable'" (18 November 2018), Axios, online: https://www.axios.com/axios-on-hbo-tim-cook-interview-apple-regulation-6a35ff64-75a3-4e91-986c-f281c0615ac2.html

31. Accenture, "Responsible AI for federal agencies" (18 January 2018), online: https://www.accenture.com/us-en/insights/us-federal-government/responsible-ai-federal-agencies

32. Government of Canada, "Breach of Security Safeguards Regulations: SOR/2018-64" (27 March 2018), online: http://gazette.gc.ca/rp-pr/p2/2018/2018-04-18/html/sor-dors64-eng.html

33. Josh O'Kane, "Federal government debuts data-breach reporting rules" (18 April 2018), The Globe and Mail, online: https://www.theglobeandmail.com/business/article-federal-government-debuts-data-breach-reporting-rules/

34. Personal Information Protection and Electronic Documents Act, SC 2000, c 5, s 2(1).

35. Government of Canada, "What you need to know about mandatory reporting of breaches of security safeguards" (29 October 2018), online: https://www.priv.gc.ca/en/privacy-topics/privacy-breaches/respond-to-a-privacy-breach-at-your-business/gd_pb_201810/

36. McAfee & Brynjolfsson, supra note 6 at 85.

37. David Meyer, "A strict regulatory regime may promote public confidence in the use of technology but it might also be seen as an impediment to innovation and progress" (25 May 2018), Fortune, online: http://fortune.com/2018/05/25/ai-machine-learning-privacy-gdpr/

38. CED 4th (online), Torts, "Principles of Liability: Standard of Liability: Strict Liability" (II.1.(c)) at §18.

39. Woodrow Barfield, "Liability for autonomous and artificially intelligent robots" (2018) 9 De Gruyter 193 at 196, online: https://www.degruyter.com/downloadpdf/j/pjbr.2018.9.issue-1/pjbr-2018-0018/pjbr-2018-0018.pdf

40. Ibid at 197.

41. Richard Kemp, "Legal Aspects of Artificial Intelligence (v2.0)" (September 2018), Kemp It Law at 31, online: http://www.kempitlaw.com/wp-content/uploads/2018/09/Legal-Aspects-of-AI-Kemp-IT-Law-v2.0-Sep-2018.pdf

42. Murawski, supra note 7.

43. Jeremy Kahn, "Accenture Unveils Tool to Help Companies Insure Their AI Is Fair" (13 June 2018), Bloomberg, online: https://www.bloomberg.com/news/articles/2018-06-13/accenture-unveils-tool-to-help-companies-insure-their-ai-is-fair

44. Ibid.

45. IBM, "IBM Watson Now Available Anywhere" (12 February 2019), online: https://newsroom.ibm.com/2019-02-12-IBM-Watson-Now-Available-Anywhere

46. Dave Gershgorn, "Microsoft warned investors that biased or flawed AI could hurt the company's image" (5 February 2019), Quartz, online: https://qz.com/1542377/microsoft-warned-investors-that-biased-or-flawed-ai-could-hurt-the-companys-image/

47. Spencer Bailey, "Designed by A.I.: Your Next Couch, Sweater, and Set of Golf Clubs" (15 February 2019), Fortune, online: http://fortune.com/2019/02/15/

These Terms shall be governed by and construed in accordance with the laws of England and Wales and you irrevocably submit to the exclusive jurisdiction of the courts of England and Wales to settle any dispute which may arise out of or in connection with these Terms. If you live outside the United Kingdom, English law shall apply only to the extent that English law shall not deprive you of any legal protection accorded in accordance with the law of the place where you are habitually resident ("Local Law"). In the event English law deprives you of any legal protection which is accorded to you under Local Law, then these terms shall be governed by Local Law and any dispute or claim arising out of or in connection with these Terms shall be subject to the non-exclusive jurisdiction of the courts where you are habitually resident.

You may print and keep a copy of these Terms, which form the entire agreement between you and Mondaq and supersede any other communications or advertising in respect of the Service and/or the Website.

No delay in exercising or non-exercise by you and/or Mondaq of any of its rights under or in connection with these Terms shall operate as a waiver or release of each of your or Mondaq’s right. Rather, any such waiver or release must be specifically granted in writing signed by the party granting it.

If any part of these Terms is held unenforceable, that part shall be enforced to the maximum extent permissible so as to give effect to the intent of the parties, and the Terms shall continue in full force and effect.

Mondaq shall not incur any liability to you on account of any loss or damage resulting from any delay or failure to perform all or any part of these Terms if such delay or failure is caused, in whole or in part, by events, occurrences, or causes beyond the control of Mondaq. Such events, occurrences or causes will include, without limitation, acts of God, strikes, lockouts, server and network failure, riots, acts of war, earthquakes, fire and explosions.

By clicking Register you state you have read and agree to our Terms and Conditions