Transcript
ROCH RIPLEY: So my name is Roch Ripley. I am a partner in our IP group here. My background before law school was electrical engineering, which has been quite useful the last several years doing a lot of AI-related IP work. And I'll be talking about that for about half an hour.
AI and IP: how to protect those innovations, and also some liability concerns associated with using AI in the workplace. And then another of our partners, Paul Armitage over there-- hi, Paul-- will be spending the second half of the presentation talking about AI regulation, which tends to be quite sector-specific right now.
So with that said, I'm just going to jump in, and we'll get going. Generally, we'll be talking about three things. And of course, anytime you have a question or want to discuss, I'm happy to do that. We'll talk about some liability concerns around the use of a lot of the AI products available today, ownership of AI outputs if you're using these things to create other things, and then how to protect your own AI-based inventions if you're in the market of actually taking AI models and customizing them, improving them or whatnot.
So we'll start with some of the liability concerns. The biggest liability concern is that of IP infringement. In a few slides, I will talk about some of the litigation that's happening right now, particularly some recent decisions in the US. But suffice it to say, if you just google it or watch the news, you will see lots of headlines, and there are lots of court cases, at last rough count about 50, dealing with IP infringement claims against the companies creating these AI models.
One of the big issues is that they are training on other people's copyrighted data. If you're training on that data, the output of the model may be someone else's copyrighted image, and your users could attract liability.
And you can see here the output of one of the AI models in question in a piece of litigation. At first glance, it looks fine: it's a whole bunch of people in a crowd. But there are a couple of things you'll see. It was generated by an AI, and if you look closely at those faces, they are strangely disturbing.
And you can also see the Getty Images watermark, which is a pretty good indication of where they got the training data from. So this is one area of concern. If a user of a product generates and outputs something like this, that could be IP infringement. The user could be liable, and it could flow up to the actual vendor.
And some of the litigation, even apart from the user, asks whether liability accrues just by virtue of shipping products that are able to do this. Spoiler: the answer is potentially yes.
Other liability concerns relate to training. If you want to protect the training data you have online so that, if someone does train on it, you have some kind of recourse, what are some ways you can do that? Your terms of service are relevant. They can exclude scraping, and many of them probably already do.
The newer ones will also expressly exclude AI training. You can also take some practical measures. You can have a little robots.txt file on your websites that will tell bots scraping the internet not to use your site for training. Compliance may be questionable, but it is better than nothing.
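[For illustration only -- this example is not from the presentation. A robots.txt along the following lines asks crawlers to stay off a site; the user-agent names shown (GPTBot, CCBot, Google-Extended) are commonly published AI-crawler identifiers, and compliance is voluntary on the crawler's part.]

```
# Illustrative robots.txt: ask known AI-training crawlers to stay off the whole site.
# Compliance is voluntary; pair this with terms of service and technical limits.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```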
And you can see how something like this has actually been applied in a case in BC started about a year ago. The lawyers in the room will recognize CanLII; it's basically a free online database of all Canadian law. They got their data scraped by Caseway AI and did not appreciate it, so they started a lawsuit.
So the defendant took 120 gigabytes of data comprising over 3.5 million CanLII works -- a ton, just a ton of data. Two things to note here. Number one, it does, on the face of it, contravene CanLII's terms of use. I've got an excerpt there; it's probably difficult to read, but on the face of it, it does prevent scraping for AI training.
I guess more to the point as well, I don't think it should have been permitted technically. If you have a third party come to your website and they're suddenly downloading 120 gigabytes of data in one shot, that should be a big red flag to stop that, because possession is 9/10 of the law. And if you can stop the problem from arising, that is the best way to prevent it.
A bit of a sidebar. I don't think we have too much time to get into it today. But in the US in particular, a lot of these lawsuits center around circumventing what are called technological protection measures-- so technical ways to prevent copying. And we have similar laws in Canada that would, at least in theory, permit a similar cause of action.
So if you had something in there -- you know, password protection, some kind of rate limit or what have you -- and someone who wanted to train on your data circumvented it, the very act of circumventing it arguably gives you another way to seek recourse after the fact.
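[Again for illustration only, and not part of the presentation: a minimal sketch, in Python, of the kind of technical measure being described -- a per-client download cap that cuts off bulk scraping. All names and thresholds here are hypothetical.]

```python
import time
from collections import defaultdict

# Hypothetical per-client download cap -- a crude technological protection
# measure that cuts off clients pulling unusually large volumes of data.
WINDOW_SECONDS = 3600               # look at the last hour of activity
MAX_BYTES_PER_WINDOW = 500_000_000  # e.g. 500 MB per client per hour

_usage = defaultdict(list)          # client_id -> list of (timestamp, bytes_served)

def allow_download(client_id: str, response_bytes: int) -> bool:
    """Return True if serving this response keeps the client under the cap."""
    now = time.time()
    # Drop records that have aged out of the window.
    _usage[client_id] = [(t, b) for t, b in _usage[client_id]
                         if now - t < WINDOW_SECONDS]
    already_served = sum(b for _, b in _usage[client_id])
    if already_served + response_bytes > MAX_BYTES_PER_WINDOW:
        return False                # block or throttle, and log for follow-up
    _usage[client_id].append((now, response_bytes))
    return True

if __name__ == "__main__":
    # A client requesting 200 MB chunks gets cut off on its third request.
    for i in range(4):
        print(i, allow_download("203.0.113.7", 200_000_000))
```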
Of course, with the potential liability for IP infringement from use, companies have identified that as a way they can distinguish themselves in the marketplace. So if you are looking to retain or license third-party generative AI products, some companies now, those with deeper pockets in particular, are saying: if you use our products the way we intend and you do get sued for IP infringement because we made a mistake with the training or what have you, we will indemnify you for that.
A couple of very prominent ones are Microsoft's Copilot Copyright Commitment and Adobe's Firefly IP indemnification. If you Google these, they will have terms online basically saying, use our products, and if you're using Photoshop, for example, and you inadvertently create a work that results in infringement, we've got your back.
You have to look for that indemnification language and read it carefully. There are a bunch of exceptions, unsurprisingly. The unpredictable nature of a lot of the models available now means that these companies are careful when they draft these indemnities. You can't try to create an infringing work and then claim an indemnity. They say you have to use the product as instructed, in the normal course of business.
Basically, don't get too cute, but if you use it as intended and there's an issue, they say they'll cover it. And of course, like with any indemnity, make sure the organization standing behind the language is reputable. Microsoft and Adobe, probably no issue.
If it's one of the 14 emails you probably get every day asking you to license a new generative AI product that guarantees everything, maybe Google that company first before assuming their indemnity is solid. But these indemnities are out there now, and they weren't a couple of years ago.
And to mitigate liability in the workplace, what can you do? Well, a usage policy is a good idea because people are using these generative AI tools, regardless of whether you have a policy. They will go to ChatGPT or whatnot and enter queries.
And I think the best way to control use is not to prohibit all use but to say there are certain permitted uses. That will depend on your business, the products you're using, and the kinds of data you have. What should you think about in a policy? First, which generative AI tools are your employees allowed to use?
There are the public ones, of course, ChatGPT being the most famous but not the only one. And there are proprietary ones. Again, if you're a lawyer, you're probably approached by Thomson Reuters for CoCounsel, Harvey for professional-grade AI services, things like that.
And what are they allowed to use them for? You can permit both. If you're using a public service, you can say: non-confidential data, publicly available data, general inquiries. If you're using a proprietary service -- where, hopefully, that proprietary service says, we're going to sandbox your data; anything you give us, we're not going to make public, we're not going to use it to train anyone else's product, and no one's going to inadvertently see it -- then you can use your more sensitive information, restricted to only those proprietary tools.
There may be some data you don't want to put into a generative AI at all. Accidents can happen. Mistakes can happen. So even if there's a company saying, look, we're going to handle this perfectly, everything's going to be onside, you might have some data that's very sensitive, that might be regulated in a particular way, as Paul may be talking about, where you just don't want to take the risk. And that's something to think about based on who you're licensing from and what your risk profile is.
In respect of liability, I'll touch on a few decisions that have come down in the US. Like I said, there is a ton of litigation going on right now, even by American standards, and we're starting to see a few of the actual decisions.
The first one, this case here, is actually about legal service providers and AI. This company, Ross Intelligence, was basically taking headnotes, which are summaries of judicial decisions, using them to train a non-generative AI product, and then basically undercutting Thomson Reuters' business, Thomson Reuters being a very prominent legal publisher.
So Thomson Reuters sued Ross Intelligence and said, hey, you're taking our headnotes. They're copyrighted. You can't just use them to create a competing product. And in this case, in the US, the court agreed, really on two factors. Number one, what defendants in these cases typically do is try to say, look, we fall under what are called fair use exceptions.
There are certain tests for that, one of the factors being: we've taken your data, but we're not just regurgitating it; we're transforming it into something else to create a whole new product. Well, here, if you're just taking a whole bunch of headnotes someone else has prepared and summarizing them, that is not really transforming anything, particularly since it's a non-generative AI product, where you might be producing summaries but not creating something new based on what was input.
And number two, there was evidence of market harm here to a directly competing product. So in this case, you put those two things together, and that company was found liable for damages. There have been a couple of other, more recent cases where no liability was found.
One is Bartz v. Anthropic, Anthropic being another large generative AI company. Here, Anthropic was training on books, and there were two classes of books. They bought a whole bunch of books legally and trained on those. And then they pirated a whole bunch more books, expressly pirated them, to train on and for use by engineers, metadata analysis, product improvements, things like that.
So here the court made a division between those two categories. For the pirated books, they said, look, that's beyond the pale. You pirated a bunch of books. You're using them to train this. That's not kosher. But for the legally purchased books, the court said, look, that is a fair use for two reasons. Number one, it's a generative AI product, which means there is sufficient transformation once you've done that training.
When you type a query into a generative AI product like ChatGPT, the output usually doesn't directly match the input. There's something being created and transformed there. And two, there was no evidence of market harm. You may be a book publisher, but Anthropic's products aren't just used for writing books.
They're general-purpose tools. So here, transformation of the training data plus the lack of evidence of market harm led to a finding of no liability for the legally purchased books. For the pirated books, you're out of luck.
Then the next day, there was this case involving Meta. Meta has an AI model called Llama, and here, similarly, Meta was training Llama on books. Again, there was sufficient transformative use of that input data and also a lack of evidence of market harm.
And it's a very interesting judgment if you go through it. The judge really wanted to find market harm. They were basically saying, I would like to rule for the copyright holders, but there's literally no evidence of market harm here; I can't do it. So that is something to keep in mind going forward. If there's some kind of training being done and there is no evidence of market harm to the person who owns the copyrighted work, at least in the US, that's been found very relevant in a couple of cases.
I'll move on now to ownership of AI outputs. So this is where, for example, you may be using a generative AI like Anthropic's Claude tool for coding, ChatGPT for image generation, what have you. And you're creating stuff where you're wondering, are there IP rights that subsist in that stuff that you own?
The general answer to that is no. If you are putting a very bare-bones prompt into an AI tool and it creates something very, very different from that prompt, generating a bunch of stuff over and above what you've input, there is, for example, no copyright in what has been created -- unless you've mistakenly copied what the AI was trained on, which hearkens back to what I just talked about.
There could be liability there, but there's no ownership over the fact that you said, draw me a picture of a cat, and you got this wonderful picture of a cat as output. You don't own that picture of the cat. There are a bunch of decisions worldwide where this one apparently very well-funded person, Thaler, has started litigation saying, look, I've got an AI. It's created this artwork. It has created these inventions.
I want my AI itself to be listed as an author or inventor. It has basically lost everywhere. There is a question, though: how much must the human contribute for IP protection to attach? That is still, I would say, generally an open question.
I'll talk about that more in a minute, but it follows along the lines of what recent US Copyright Office guidance says, which is: if you have human-authored expressive elements that find their way into the final work, that's protectable; if it's purely AI-created, that is not protectable. That's not really new; it kind of follows from first principles. But it's interesting that they've put that in writing and put it out there.
In terms of where that line is -- how much do you have to contribute before copyright subsists in the final work? -- this is actually the first copyright registration that the US Copyright Office has granted for an AI-generated work. It's called A Single Piece of American Cheese. It's on her head, if you're looking for it.
And that picture on the right is an AI-created work generated by someone who works for this company called PetaPixel. In order to get it registered, this guy from PetaPixel recorded himself creating the work with the tool and said, look at all the human expressive instruction I am providing to the computer that's reflected in the end output; I am deserving of a copyright registration.
I've got a YouTube link there. The guy created a time lapse of what he submitted. It's pretty neat. I was watching him create the eyeball in the middle of the head yesterday -- a little disconcerting, but pretty neat. If you watch that video, you can get an idea of what he had to submit in order to get his registration.
Moving on to protecting your own AI innovation. This would be if, for example, you're in a business that maybe historically hasn't used AI but you've got a ton of data -- insurance, health, image processing, what have you -- and you've recognized that AI is something very useful for your business, but an off-the-shelf product isn't good enough.
You want to take it and improve upon it, incorporate it into your own products, sell it, what have you. So I think the first thing is identifying for yourself: where is the innovation? What am I doing here? Of course, across the board, it's a technology of, I would say, general application. So I'm talking about things like image processing.
Photoshop is your quintessential example-- but it's not just them-- video processing, fraud detection, if you're working in a financial institution, advertising to generate ads for people in real time, dynamically, things like that.
Usually when you're creating something like this, you've got something -- primarily software, potentially also hardware depending on the business -- that you want to license and commercialize in some way. And you've got to ask yourself, what is the stuff I want to protect, and how do I want to commercialize it, when you're trying to figure out what your IP strategy is going to be and how you're going to protect it.
For software, which all this stuff is going to include, you've of course got copyright in the code itself, in both source and object formats -- source being the stuff that someone types, object being the stuff that actually executes and that, if you're distributing anything, is probably what you'll be distributing.
Copyright attaches to that, and it can prevent third parties from just literally copying and pasting it somewhere else. Patenting can protect functionality; copyright doesn't. So with just copyright, if someone sees what your product does and says, that's a really cool product, I'm going to go create that product myself -- it'll take me six months, but that's what I'm going to do -- that's totally permissible under copyright law.
If you want to stop that, you're looking at protecting functionality, which is what patents are for. Patents are not necessarily a good fit for all kinds of AI innovations; it will depend. What are some of the common questions you ask yourself when you're going to patent? Number one, discoverability. That is, even if I get a patent for this and someone copies it, am I ever going to know they are copying it, and am I ever going to sue them?
So if, for example, you're in a business where everyone is commercializing using SaaS, all the code and functionality is behind the scenes; you're just outputting data, and that output doesn't really reveal how the data is being calculated. You've got to ask yourself, what is the actual value here even if I get this patent?
There are numerous ways to create this data, and lots of competitors. I can't go sue everybody blindly; I can't go on a fishing expedition. In a situation like that, the patent would be less valuable. Then, of course, there's technical and commercial merit: the more technically impressive your invention is, the more work you've put into it, the more technical problems you're solving, and the more commercially valuable the market is, the greater the need or desire to protect it well.
And there's a third category called eligibility, which I'll talk about a little more in a couple of slides. This is basically something that is giving patent offices around the world a headache. The basic concept is you can't patent math, but all computers do is math. So if you can't patent math but computers are just doing math, what is the line between not being able to patent math and being able to patent what a computer actually does? No one's got a great answer, is my Coles Notes version. But there is some guidance, which I'll get to in a bit.
Another way to protect functionality apart from patents is to use trade secret. That is, you actually take steps to keep your technology and your innovation a secret. The primary consideration here is: is that even possible? Can you commercialize while keeping that technology a secret?
If you've got a product that you're shipping to everybody, maybe it's a heart rate monitor that's using some kind of AI to figure out a heart rate. Well, if someone can buy that, if they can take it apart, they can experiment on it. They can have a pretty good idea of what you're doing and reverse engineer it. You can't keep it a secret. I mean, people are going to buy it. They're going to be able to do that, to figure it out. And if they do that, that's fair game under the law.
If it can't be reverse engineered -- going back to my SaaS example of a couple of minutes ago -- then trade secret may absolutely be a great idea. It may be the preferred way.
Keep in mind, though, that even if you can commercialize the product itself while keeping it secret, there are cultural factors in the AI industry that may make you want to patent. There's a very strong culture of disclosure. Lots of people want to publish, want to burnish their reputations, and philosophically just think a lot of this stuff should be out in the open.
So if you want to attract the best talent, you may have to agree to allow publication, which means, again, you can't keep it a secret because you've published it. So that, again, will, of course, affect your strategy. And keep in mind as well that it's not a one and done when you're thinking about what IP to apply to a product.
Different aspects of an AI-based innovation can be protected in different ways, and you can stack these different rights on each other to protect the same product in multiple ways. So, for example, if I go back to my heart rate monitor example, you can have copyright in the actual code that's shipping.
So I ship my heart rate monitor. I've got object code on the monitor. If someone literally is going to copy that, that's a copyright violation. I could patent the technology that's in the heart rate monitor. So if someone independently replicates that functionality and infringes my patent, I could sue them for patent infringement, even if they haven't copied the actual code.
And behind the scenes on my server somewhere, I've got all the source code kept as a secret. So if I've got an employee who says, I hate this place. I'm going to go across the street and they take the code, that is both a copyright violation and also trade secret misappropriation.
So you've protected different aspects or facets of the product in different ways. And you can use trademarks as well. That doesn't protect the actual product or technology itself, but it protects the brand surrounding the product. If I advertise to you, I've got an Apple Watch heart rate monitor versus a Roch heart rate monitor, you will probably go for the Apple Watch heart rate monitor based on brand alone.
Do keep in mind open source issues. This is relevant for copyright. You have developers working on this stuff, and there's a ton of open source in AI; they're going to download all of it. You're taking all these open source tools pursuant to a license. When you go for a round of funding, or if you go to try to sell the company, you're going to have to establish that you're using these open source tools, and your product, in accordance with the terms of the licenses and that those terms are not prejudicial.
So it is something to keep in mind. It's much easier to do while you're building the product than two years after you've built it, when you have to go back and try to fix it -- and it needs to be fixed before closing.
And for a certain class of AI invention, what's called supervised learning, training data is super important. Training data, hearkening back to my first few slides, is what you're using to teach a certain kind of model to do a new thing. That, generally speaking, can be a massive competitive advantage, and you want to keep it confidential if you can, because the amount of data you need to train these models to get great functionality is just mind-boggling.
We're talking about every book on the planet, the whole internet-- massive amounts of information. So if you're in a business where you've got a license from your customers to use the data they're creating in a specific sector, your sector, you can launch an initial product. It might be OK, not the best. Get some uptake. Get some data. You're allowed to use it. You refine the product. You make it better and better.
After some years of iteration, you can have a product that is actually very good, built on years of sector-specific data that other people don't have and that you want to keep secret. It is something to keep in mind with patenting. Right now, you don't have to provide a copy of your training data when you submit a patent application.
You do have to describe what kind of data you're using, how you're training. But if you've got like a billion images you're using to train some kind of CT scan analyzer, you don't have to actually provide on a USB key a billion images to the Patent Office. You can say, you need a billion of these images. Good luck. And you can still proceed on that basis.
So you might even be able to have your cake and eat it too, in that your patent will, of course, disclose what your technology is; but if competitors don't actually have that training data, there's still a practical impediment to them catching up.
I did mention eligibility a minute ago about, can you patent math? There is a bunch of recent guidance from the US. You may have noticed that things are changing very rapidly in America, including at the Patent Office.
So I think lesson one here is: if you're just taking an existing AI and saying, I'm going to download that AI and use it in my product, and now I've got, for example, a heart rate monitor based on AI, so give me a patent -- it's not going to work.
There was a recent decision, just in April of this year, where the court, an appellate court in the US, basically said: if you're just saying, apply it -- take this machine learning model, and now we're doing this old thing but using AI or machine learning to do it -- without providing a lot of detail, that's just not going to cut it.
And I have this conversation a lot. A lot of people will call and say, yeah, but I'm doing it with AI. And I'm like, and? There's got to be something more to it. What is the something more? You want to be able to establish you've got a real technical improvement. And the lower-level the improvement, the better.
If your model is actually using less memory, if it's operating faster on a given compute constraint, that's great. It doesn't have to be that low level, but you've got to be able to point to some kind of technical improvement. And you also don't want to go too broad. You don't want to say, oh, I've improved this model. It could be used for everything.
No. What are you going to practically and commercially use it for? Let's focus on that. Then you're paying for the protection that's actually going to be meaningful to you, and you're probably going to pay a lot less in the aggregate because unsurprisingly, the Patent Office will get a little more resistant when you're saying, oh, you can use this for everything under the sun. That's going to raise some issues. So be reasonable and I would say, conservative.
In the last couple of months, there have been updates at the US Patent Office in respect of guidance. Nothing that, strictly speaking, changes the law, but the implication does seem to be that patent examiners have been a little too liberal with these eligibility rejections.
One was a memo just this summer, August of 2025, which is a reminder that, listen, examiners, if you're going to reject a claim, be at least more certain than not that it's ineligible, that there's an actual problem. Because, speaking as someone who does this day in, day out, you get a lot of examiners who will just issue a rejection because it's easy, without giving it a ton of thought and asking whether they actually have a reasonable case.
Another was a September 2025 case at the Patent Appeal Board, where they basically said, listen -- the implication being, we're rejecting too many of these inventions. If we just categorically exclude all these inventions, it is not good for America. It's not what the patent system is for.
So again, not technically a reshaping of the law, which the Patent Office isn't allowed to do, but you've got people at the Patent Office now in positions of leadership. This was actually issued by the director of the USPTO, who was confirmed shortly before this date, and they're saying, look, practically, I think we've gone too far in one direction. So I think things are changing now at the US Patent Office in respect of examination of these applications. It's gotten a little easier even this year to get stuff through.
I've got an example here of the kind of detail you want if you are going to patent an AI-based invention. For example, you've got a diagram showing a high-level example of the architecture. You've got a low-level example of the architecture. You've got an example of the training data, without actually providing the gigabytes or more of training data you're actually using.
You've got some evidence showing how your invention actually works, the fact that it does something. You've got a very detailed description of what you're actually doing, and then you've got what you're actually seeking protection for at the end.
So it is more than just applying AI. You need to actually say: look, I'm doing something; this is what I'm doing, in text; this is an example of what I'm doing, showing that it works; this is how I train the model to do it; and this is what the model is. So there's actually meat there. It's more than just taking what you did last year and adding the words, now I'm doing it with AI.
So to close off the first half hour: if you're using third-party generative AI -- which, I mean, certainly you are -- and you're using it with important, potentially confidential data, read the terms and conditions. Make sure your data is sandboxed. Make sure you trust the vendor. Consider, even if the terms and conditions say the right thing, whether you actually want to put all your data in there. See if there's some kind of indemnity, and make sure they can stand behind it.
If you're creating AI-based innovations, absolutely protect that training data. Stack the various IP rights that are available to you in different ways to protect your end product. If you've got a business with a lot of data online, consider contractually or technically restricting the ability of third parties to just come and use that data to train their own models.
It's a huge industry: businesses are being built on making training data available and generating synthetic data, and it's quite valuable. And again, internally, I think your employees are going to be using generative AI whether they tell you or not, whether you have a policy or not. So you should have some kind of training and policy that says, listen, this is what you're allowed to do, this is what you're not allowed to do, and if you're going to do it, this is how to use these tools safely.
And I will hand off to Paul. I guess, Paul, before you take over, are there any questions on that? We can talk later too, but is anything pressing now? All right.
PAUL ARMITAGE: Great. Well, thanks, Roch. So now we're going to largely change gears and look at the state of AI regulation in Canada and also some considerations for developing compliance frameworks that organizations may be looking at.
So those are the topics we're going to look at. We'll deal first with federal law and policy. As a lot of people will be aware, what would have been the federal Artificial Intelligence and Data Act, or AIDA, was allowed to die on the order paper during the prorogation of Parliament before the last election.
And it's not expected that that law will be reintroduced. If it had been enacted, it would have created a broad-based, sort of European-style AI system regulation in Canada, dividing AI systems into two groups -- general-purpose AI and what are called high-risk AI -- and attaching penalties for non-compliance.
So the government has backed away from that regulation. Essentially, the concern was that it would potentially stifle innovation in the AI space. Instead, the government embarked on a round of consultation through what's called the AI Strategy Task Force, and it's expected that a new AI strategy will be released by the federal government later this year.
Now, at the federal level, in terms of the federal public service itself, there do exist directives around the use of artificial intelligence within the federal public service. The Treasury Board Secretariat has released two directives -- one dealing with the use of generative AI, the other dealing with the use of automated decision-making. And as we will see, automated decision-making is a bit of a theme in AI regulation in Canada.
Moving next to provincial laws. So there are a number of provincial laws that regulate aspects of AI. So starting first with Quebec, in what's called the Quebec Act-- so that's the Quebec law that regulates privacy-- there's now a requirement that organizations must inform the individual of the use of automated decision-making when personal information is used, and the automated decision-making exclusively makes the decision.
Then, on top of that, organizations, upon request, are required to provide individuals with the information that was used to make the decision and the reasons and principal factors that led to the particular decision -- in other words, how the automated decision-making works. And finally, individuals have the right to have their information corrected.
Turning to Ontario, starting January 1, 2026 -- so that's just around the corner -- every employer in Ontario that makes a publicly advertised job posting and will use artificial intelligence to screen applications will be required to disclose that fact as part of the posting.
Now, some people may be familiar with the converse phenomenon, which is applicants, when they submit applications, embedding hidden text or even computer code designed to trick AI screening systems into selecting their application. That sort of applicant behavior will remain unregulated, but employers will be required to disclose their use of AI in the hiring process.
Also in Ontario, dealing with its public sector, Ontario passed a law called the Strengthening Cyber Security and Building Trust in the Public Sector Act, actually all the way back in November 2024. Part of this law is a general regulation of artificial intelligence in the Ontario public sector.
However, the act at the moment is really just a shell. What it does is list broad topics that may be regulated under the act, such as requirements around transparency, accountability frameworks for the use of AI, and risk management in the use of AI. But all the details for this law will be supplied by regulations, and the regulations have not yet been created. So at the moment, it's just kind of a black box, an empty shell that will be populated in the near future, presumably.
Turning next to Alberta -- Alberta also regulates the use of AI in its public sector. In June of this year, what's called the Protection of Privacy Act came into effect. That's a law that replaced what was formerly the Freedom of Information and Protection of Privacy Act in Alberta, which governed the use of personal information by the public sector.
So under the new law, public bodies that collect personal information directly from the individual must, at the time of collection, give a notice if the public body intends to put that information into an automated decision-making system. And then if the public body uses an automated decision-making system, they have to ensure that the information that's used is accurate and complete and also retain that information for a year.
At the federal level, there is also a pending law dealing with automated decision-making. I think a lot of people will be aware that the companion piece of legislation to AIDA was what's called the Consumer Privacy Protection Act, which would have replaced PIPEDA as the federal privacy statute.
This statute, the Consumer Privacy Protection Act, unlike AIDA, is expected to be reintroduced and become law in the future. And this law contains a requirement that organizations must keep, and make available, what's called a general account of their use of automated decision-making that uses personal information.
And then, upon request by individuals, organizations will be required to disclose the type of information used, the source of the information, and the reasons or principal factors that led to the prediction or decision. So, in other words, they're going to have to disclose how the ADM works.
The federal government has also issued a voluntary code of conduct around the use of generative AI. Although this code of conduct is directed at generative AI, it can actually be applied more broadly to other types of AI as well.
Unlike the Treasury Board Secretariat directives we touched on earlier, which dealt only with the public sector, this voluntary code is for the private sector. But it's entirely voluntary, and it only applies to companies that sign up to it. Currently, there are 46 signatories to the code, including some of the very largest organizations in the country and also a lot of smaller companies.
So we're going to spend a few moments on this code because it essentially illustrates some of the key considerations around governance and use of AI within organizations and ethical use of AI. So the code is organized around six principles. The first principle is safety.
Safety refers to essentially an assessment of the AI system before it is deployed within the organization -- for risks such as bias, sharing of proprietary information, breach of privacy, infringement of individual rights, and effects on particular groups such as the elderly, children, or marginalized groups. So the risk assessment should occur first.
The second principle is called accountability. Accountability refers to when the AI system is actually being used in operation within the organization. Organizations are supposed to do two things: one is essentially monitor the risks associated with use of the AI system, and the second is to address incidents that occur from its use. An incident is essentially any occurrence where the AI system creates a negative or unexpected outcome for an individual.
The third principle, which is a broad-based principle, is fairness and equity. Essentially, AI systems should be deployed and developed so that they operate fairly and equitably. The fourth principle is transparency. And transparency breaks down into two subcategories.
The first category is transparency around the use of artificial intelligence. So in circumstances where it's not obvious that a person is interacting with an AI, it's a requirement, or a best practice, that the organization indicate that the person is interacting with an artificial intelligence system.
The second aspect of transparency is about publicizing what the AI systems you use actually do. In other words, in the first instance, you have to give notice that you're using AI; in the second, organizations are expected to publish explanations of what AI systems they use, how they function, and what risks are associated with those systems.
The fifth principle is what's called human oversight and monitoring. This is sometimes referred to as human in the loop. Essentially, it means that AI systems should not be left to run fully autonomously. Instead, there should be human involvement, either the human inserting themselves into the decision-making or the process the AI system is performing, or human involvement in monitoring and overseeing how the system is functioning.
The final principle is called validity and robustness. Validity refers to the AI system actually performing the way it's supposed to. In other words, does it do what it's supposed to do? Robustness refers to whether the AI system does what it's supposed to do in all the circumstances where it may be applied, including unexpected circumstances, different scenarios, or unusual use cases.
Moving on from the federal government's voluntary code, there are also a number of industry or sector-wide AI guidelines that have been released by various bodies in the economy. It's actually quite interesting how these guidelines have proliferated; there's a wide variety of them out there at the moment.
So, for example, if you are a federally regulated financial institution, such as a bank or an insurance company, OSFI has guidelines for the use of AI. If you're a pension plan, the pension supervisory authorities have issued guidelines.
If you're in the healthcare space, Canada Health Infoway has released a very detailed toolkit around use of artificial intelligence. For medical devices, Health Canada has released two guidelines dealing with AI in medical devices. For drug development, the FDA in the United States has released guidelines around the use of artificial intelligence in the context of regulatory submissions made to the FDA.
In capital markets, the Canadian Securities Administrators have guidelines around the use of AI. In the nuclear space, the Canadian Nuclear Safety Commission, in tandem with its counterparts in the UK and the United States, has released what are called trilateral guidelines around the use of AI.
And of course, for the lawyers in the room, both the courts themselves and the law societies of many provinces have released guidelines dealing with the use of AI.
In addition to the guidelines that may be issued by various bodies, standards organizations have also created standards around the use of artificial intelligence -- for example, NIST. A lot of people will be familiar with NIST as the organization that created what is the de facto standard for cybersecurity.
NIST has also created a standard for artificial intelligence, called the Artificial Intelligence Risk Management Framework, or AI RMF. What that does is create baseline standards for AI implementations. In other words, organizations are expected to understand how the algorithms work, provide transparency, and audit their use of the system.
I'll pause here to note that the Trump administration has directed NIST to revise its standards to, as they say, remove references to misinformation, DEI, and climate change. And I'm going to come back to that near the end of the presentation, when I talk a little bit about what's happening in the United States in terms of AI regulation.
Canada also has standards. There's a National Standard of Canada called ethical design and use of AI for small and medium organizations. Although it's directed at SMEs, on its face it can also be used by larger organizations. ISO has also released two standards, so organizations can now get certified and be audited for compliance with the ISO standards.
So there are other bodies of law that also regulate the use of artificial intelligence. So one of those bodies would be Intellectual Property Law. And Roch has already spoken to that in the first part of the presentation. Another major area of law is privacy law. So this really refers to the use of personal information, either in the development of an AI system or in its deployment.
Now, the topic of privacy law and AI is a very large topic, and we don't have time today to delve into it. However, I will flag that in the new year, our Privacy Law Group at Gowling will be doing a symposium on privacy law, and that will involve a deep dive into these topics around AI and privacy. So I'd encourage people to register for that if you're interested.
Another area of law that has focused on artificial intelligence is competition law and our Competition Bureau. The Competition Bureau has released a couple of discussion papers around the use of artificial intelligence and its potential anti-competitive effects.
The Competition Bureau focuses on three things-- first is essentially barriers to entry and monopolization in the AI space. I think, as Roch mentioned, in order to create AI models, it requires access to vast amounts of data. It also requires access to vast amounts of compute resources.
So there's a concern that, in the marketplace, a small number of large corporations will effectively monopolize access to either compute resources or training data. Another area the Competition Bureau has flagged is how AI can be used in an anti-competitive manner, either purposefully or in effect. And essentially what they're looking at here is collusive behavior that may be enabled through AI.
An example here is what's called a hub and spoke conspiracy. What that would involve is: the center of the hub, for example, could be an AI algorithm or an AI service, and the spokes could be a variety of different companies, all using the same AI algorithm or AI service.
The danger is that the hub effectively would treat all the companies as a group, rather than looking at them individually. And that could have the effect of creating collusive behavior among the group, for example, by setting prices within a group of companies that otherwise would be competitors.
The last area that the Competition Bureau has looked at, which is maybe of more interest to most of the people in the room, is around misleading advertising-- so misleading advertising that falls within the ambit of the Competition Bureau's mandate. And so there the concern is around things such as deep fakes or generation of fake reviews and endorsements using AI to do those things.
Now, the Competition Bureau has not released any formal guidance on these topics yet. What they have released are these discussion papers, which discuss the issues and could portend future guidance.
So, sort of wrapping things up in terms of compliance frameworks for organizations: when you're developing a compliance framework within your own organization, there are a number of things to look at. First, identify any specific laws that may apply to your company. Second, identify whether there are any specific guidelines for the industry or sector you operate in. Next, consider the guidance of the Privacy Commissioner and also the Competition Bureau.
The next step would be to adopt AI governance and policies within your organization, around your use of AI, and particularly the ethical use of AI. And those types of policies and governance would be animated by the types of principles that we discussed when looking at the voluntary code.
Your organization may also consider adopting one of the standards that are out there, either formally adopting it or following it, such as NIST, the Canadian standard, or the ISO standards. The other thing organizations should consider is their exposure to foreign laws, because AI is being regulated in different ways around the world. And we're going to look at a couple of examples of that.
The first example we'll look at is the European Union AI Act. The AI Act is a broad-based regulation of artificial intelligence, and the underlying philosophy of the act is to regulate AI in a way that protects fundamental human rights and freedoms, safeguards the health and safety of individuals, ensures the reliability of AI systems, and also protects against adverse effects of AI.
And the way the law functions is it creates a risk classification for different types of AI, and then it attaches different regulatory frameworks or approaches to each classification. So the top level of classification is what's called unacceptable risk. And if you fall into the unacceptable risk category, that means the AI system is strictly prohibited. Illegal.
This covers various things which are considered to be fundamentally contrary to human values or rights. That includes uses of certain types of biometrics. It also includes certain types of untargeted scraping of facial images from CCTV footage or the internet to build facial recognition databases.
It also refers to what's called social scoring. Social scoring means, for example, a public body creating scores of individuals based on whether they attended a protest or what they published on their social media, and then using those scores for different purposes, such as deciding whether or not individuals are entitled to benefits. That type of behavior -- using AI for that type of activity -- is strictly outlawed.
The middle category is what's called high risk. The thing about the AI Act is that it's very detailed and very prescriptive. Its description of what a high-risk AI system is goes on for multiple pages and is actually quite difficult to work through.
But at a high level, it potentially captures a wide variety of things. That includes certain products. It includes certain safety components of products. It includes use of AI for education, use of AI in the workplace, use of AI by public bodies in order to assess people for benefits.
And so if you fall within any of these high-risk categories, you then are under the highly regulated AI regime. You're required to have a risk management system. You're required to have data governance within your organization. You're required to have technical documentation and keep records around the use of the AI system. You're required to have human oversight of the AI system that you're using, et cetera, et cetera, et cetera.
The third category is what's called limited risk. An example of a limited-risk use of AI would be a general chatbot or general use of deepfakes. Now, that doesn't mean that all deepfakes and chatbots are limited risk, because of the way the act is structured: you have to flow through all the higher categories and not be caught by them before you fall under limited risk.
So in other words, if your AI system or your chatbot is doing something that's unacceptable, it's prohibited. If it's doing something in the high-risk category, you have to comply with the high-risk regulation. If you fall outside those two categories, then you fall into this general category of regulation, and there the only requirement is one of transparency -- in other words, making people aware that you're using an AI system.
At the very bottom is what's called minimal or no risk. And this essentially covers the use of artificial intelligence as part of back office systems or behind the scenes. In other words, it's not directly affecting an individual or directly dealing with an individual. An example of this could be an AI module that's used to power a video game or AI that's used in a spam filter. Those are considered minimal risk. And that type of use of AI remains unregulated.
The other sort of case example we'll look at is AI regulation in the United States. In the US, it's a little bit similar to Canada, in the sense that at the federal level, there is no federal law governing the use of artificial intelligence. However, earlier this year, the Trump administration released what's called its AI action plan.
And the whole focus of the AI action plan is deregulation of use of AI. And the reason for deregulation is to incentivize innovation in the AI space and more specifically, to ensure US dominance and superiority in the AI space.
Having said that, the Trump administration has also issued various executive orders, including executive order number 3, which deals with the procurement of large language models by federal public bodies. Essentially, what this executive order says is that if a federal body in the US is going to procure artificial intelligence, it can only procure a model that is unbiased and maintains factual accuracy and ideological neutrality.
And it goes on to say that examples of things that are not considered accurate or neutral include DEI, references to climate change, and things of that sort. This obviously serves a sort of rhetorical or political purpose for the administration. However, it also represents a requirement: if your organization is developing an LLM and wants to sell it to the federal government in the US, this is a standard you're going to have to demonstrate your LLM can meet.
At the state level in the United States, it's a bit of a different story. There are, in fact, many state laws that regulate AI, and more of them are being passed all the time. Most of the state-level laws are specific use case laws.
For example, there are a lot of laws dealing with chatbots, either as a general consumer matter, in other words, requirements for people to be transparent around the use of artificial intelligence and chatbots, or in specific circumstances, for example, chatbots that interact with children or chatbots that interact in a mental health context, or what are called AI companions, in other words, chatbots that simulate a human relationship.
There are also laws dealing with things such as bias -- bias in consumer or educational applications -- as well as laws around the use of AI in consumer health, deepfakes, algorithmic pricing, and so on. Now, there are also a couple of broad-based US state laws dealing with artificial intelligence.
Colorado passed a comprehensive statute regulating artificial intelligence, somewhat similar to the AI Act in the European Union. However, Colorado has not brought this law into force. Part of the reason for that relates back to what I said earlier about the administration's goal of deregulation in the United States.
In other words, the federal government does not want the states, quote unquote, "over-regulating" the use of artificial intelligence, and there's a concern that the Colorado act may offend that principle. So it's parked. It's on hold at the moment.
California, however, has passed a sort of general artificial intelligence act. It has a unique regulatory approach: it only applies to what you might consider global-scale tech companies. The way it does that is by creating thresholds for its jurisdiction based on the size of the models being deployed by the companies and the revenue of the companies.
So if you're deploying a very, very large model, measured in terms of certain types of compute, and you have a certain revenue, then you fall under the ambit of this California law. Everything beneath those thresholds remains unregulated, at least under this particular law. There are other, more specific California laws dealing with AI as well. I was actually just reading in The New York Times this morning that New York State has passed a very similar law to California's, and they're considering whether to enact it.
So for your organization, if you are subject to foreign laws, the consideration in terms of your compliance approach is: will your organization create separate playbooks for each of the jurisdictions you're exposed to, or will you endeavor to create a similar, consistent approach across all the jurisdictions where you operate?
Now, one of the things we noticed around the GDPR when it was introduced was what people referred to as the Brussels effect, or the Brussels-ification of privacy law. What that referred to is organizations, whether or not they were subject to the GDPR, essentially building their entire privacy program based on compliance with the GDPR.
Whether we see a similar thing in the AI space I think largely remains to be seen. And that's everything.
In this on-demand webinar, two of our senior partners take a practical, business-first look at how to protect AI-driven innovation as well as how to comply with rapidly evolving legal frameworks for AI.
Key topics include:
- Intellectual property protection strategies for AI technologies
- Recent developments at the U.S. Patent and Trademark Office and in generative AI copyright litigation
- Update on existing and proposed federal and provincial AI regulations in Canada
- AI risk management best practices and standards for industry
- Building an organization's compliance frameworks
CPD information
- Law Society of British Columbia: This program contains 1.0 hour of Practice Management credit.
- Law Society of Ontario: This program contains 1.0 hour of Professionalism Content.
- Barreau du Québec: Participer à ce programme vous permet d'obtenir 1 h pour vos heures de formation continue du Barreau du Québec.