Earlier this month the Executive Office of the President's National Science and Technology Council (the "NSTC") released a report entitled Preparing for the Future of Artificial Intelligence. The report surveys the current state of artificial intelligence ("AI").

The NSTC envisions a future in which AI technologies play a growing role in society, opening up new economic opportunities and markets and spurring innovation in the health, education, justice, energy, and environment sectors. The report cautions that the development of AI poses a number of challenges to the conventional public policy, regulatory structures, and data management protocols in place to protect privacy.

AI presents a unique policy challenge compared to prior innovations

AI enthusiasts are optimistic about the promise of machine learning to improve people's lives by solving hard problems and eliminating inefficiencies. And rightly so: driverless cars, unmanned aircraft, and machines assisting with medical treatment are just some of the AI technologies that have made headlines in recent months. The NSTC points, by analogy, to the transformative impact advances in mobile computing have had on industry, government, and how individuals carry out their day-to-day lives.

While prior conceptions of AI were largely exercises in advanced computer programming, the current wave of AI is concerned with machines that can 'think' and 'learn'. Coupled with new sources of big data and the capabilities of more powerful computers, improved machine learning algorithms are ushering in a new era of technology that "exhibits behaviour commonly thought of as requiring intelligence."
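
To make that distinction concrete, the sketch below shows 'learning' in the machine-learning sense. It is written in Python using the scikit-learn library, and the device log data, feature names, and labels are hypothetical, invented purely for illustration:

    # Minimal sketch: the model infers a decision rule from examples
    # rather than being programmed with one. All data is hypothetical.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [hours_of_daily_use, error_count] from a fictional device log.
    X = [[1, 0], [2, 1], [8, 5], [9, 6], [3, 0], [10, 7]]
    y = [0, 0, 1, 1, 0, 1]  # 0 = healthy, 1 = likely to fail

    model = DecisionTreeClassifier(max_depth=2)
    model.fit(X, y)  # the rule separating the classes is learned, not written

    print(model.predict([[7, 4]]))  # classifies an unseen device, e.g. [1]

Nothing in that code states the rule separating healthy devices from failing ones; the model derives it from the examples it is given. That derived, data-dependent logic is what distinguishes the current wave of AI from conventional programming, and it is also what makes the resulting decisions harder to trace.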

For these reasons, AI tools are a different beast from prior tech innovations, and may not be governable by the same type of regulatory frameworks. The report indicates that any approach to regulating AI-enabled products should be informed by the aspects of risk that the addition of AI may reduce or augment. Where a risk falls within the existing policy regime, the report says that the policy exercise should begin by considering whether the existing regulatory framework adequately addresses the risks in question, or whether it needs to be adapted to better account for AI.

But the red tape must be reasonable, too. The NSTC is conscious of the need to show leadership in AI innovation, and of the risk that cumbersome regulation could make the U.S. a laggard in this emerging field. The report indicates that where the regulatory responses to the addition of AI technologies threaten to increase compliance costs or slow the development of beneficial innovations, policymakers should consider how those policies could be adapted to lower costs and other barriers to innovation without compromising safety or market fairness.

AI Privacy Risks

It is little surprise that the focus on "intelligent" goods like driverless cars has caused much of the AI discussion to revolve around risks to physical safety. Yet privacy is another significant area of public concern implicated by AI developments. The NSTC report highlights these challenges.

AI may trigger privacy issues including the mismanagement of personal information, the unfair allocation of public resources, and inappropriate technological responses to new information. This is because AI technologies, by their nature, teach themselves and adapt their own analytical processes, which necessarily complicates compliance and adherence to best practices. As with any transformative force, there are no agreed methods for assessing the effects of AI in commercial and public service applications on human populations.

The report identifies a number of privacy risks arising from AI, including the following:

  • Because AI algorithms can be (or can become) opaque, it is difficult to trace or explain AI-based decision-making. The use of AI to make consequential decisions about people, often replacing decisions made by human actors and institutions, leads to concerns about how to ensure justice, fairness, and accountability—the same concerns voiced previously in the "Big Data" context.
  • The proliferation of AI may have the capacity to compromise fairness, particularly in the public service context. This is because AI may perpetuate bias and disadvantage for historically marginalized groups if the technology trains itself on data that reflects past, biased decisions or statistics (a minimal illustration of this dynamic follows this list).
  • While AI may make for interesting applications in controlled or laboratory environments, the inherent unpredictability and built-in adaptability of AI mean that real-world applications may be much riskier. The use of AI to control physical-world equipment leads to concerns about safety, especially as systems are exposed to the full complexity of the human environment.
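
The bias risk in particular can be demonstrated in a few lines. The sketch below, again in Python with scikit-learn, trains a model on fabricated historical decisions in which approvals tracked membership in a hypothetical group rather than merit; the model then reproduces that pattern for new, equally qualified applicants:

    # Illustrative sketch of bias inherited from historical training data.
    # All figures are fabricated; "group" stands in for a protected attribute.
    from sklearn.linear_model import LogisticRegression

    # Features: [qualification_score, group]. Past approvals (y) favoured
    # group 0 regardless of qualifications.
    X = [[0.9, 0], [0.8, 0], [0.4, 0], [0.9, 1], [0.8, 1], [0.4, 1]]
    y = [1, 1, 1, 0, 0, 0]

    model = LogisticRegression()
    model.fit(X, y)

    # Two applicants with identical qualifications, differing only in group:
    print(model.predict([[0.9, 0], [0.9, 1]]))  # likely [1, 0]: bias learned

No one programmed the discrimination; the model simply found that group membership best explained the historical outcomes it was given.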

Cybersecurity Applications and Risks of AI

The report points out that AI has important applications in cybersecurity, and is expected to play an increasing role in both defensive (reactive) measures and offensive (proactive) measures. For instance, automating what is now expert work, either partially or entirely, may enable strong security across a much broader range of systems and applications at dramatically lower cost, and may increase the agility of cyber defenses. Using AI may help maintain the rapid response required to detect and react to the landscape of ever-evolving cyber threats. Future AI systems could perform predictive analytics to anticipate cyberattacks by generating dynamic threat models from available data sources that are voluminous, ever-changing, and often incomplete. AI may be the most effective approach to interpreting these data, proactively identifying vulnerabilities, and taking action to prevent or mitigate future attacks.
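
As a simplified illustration of the defensive use case, the sketch below (Python with scikit-learn; the traffic figures are hypothetical) fits an unsupervised anomaly detector to a baseline of normal network activity and flags an outlying burst. It is a stand-in for the kind of automated monitoring the report contemplates, not a production defence:

    # Sketch: flagging anomalous network events against a learned baseline.
    # Feature values and thresholds are hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Rows: [requests_per_minute, bytes_transferred] from fictional logs.
    normal_traffic = rng.normal(loc=[50, 500], scale=[5, 50], size=(200, 2))

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)

    suspect = [[300, 9000]]  # a burst far outside the learned baseline
    print(detector.predict(suspect))  # -1 flags an anomaly, 1 means normal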

However, AI systems also have their own cybersecurity needs. The report calls for AI-driven applications to implement sound cybersecurity controls to ensure integrity of data and functionality, protect privacy and confidentiality, and maintain availability.

Meanwhile, in Canada

The same week that President Obama kicked off a "national conversation" about AI, it remained unclear whether any Canadian government was taking a concerted approach to AI and its policy needs. A September 2016 Bloomberg report indicated that Canada lags behind its peers in AI investment.

Federal ministries with the most relevant portfolios (such as Transport Canada, Innovation, Science and Economic Development, Global Affairs, and National Defence) may each be pressed to consider policy and regulations. Private sector development of Canadian AI has already begun, with investors backing ambitious machine-learning and AI projects.

What is clear is that AI is coming, and it poses critical challenges to the existing state of privacy and personal information policies and practice. AI practitioners should keep abreast of these developments and advance their AI applications without losing sight of data management principles like transparency and accountability.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.