In this episode of R&G Insights Lab's podcast series, Culture & Compliance Chronicles, litigation & enforcement attorney Tina Yu speaks with four Ropes & Gray partners and counsel who focus on data, privacy and security across the firm's enforcement and transactional practices. In the first part of a two-part discussion, Ed Black, Rohan Massey, Rosemarie Paul and Clare Sellars examine the legal issues raised as more companies utilize data for both commercial and compliance purposes. In a wide-ranging conversation, they discuss: transparency around the use of data, especially in light of data protection laws; how organizations can make the most of their data while still fulfilling their legal obligations; the international transfer of data after the CJEU's recent decision in Schrems II; and the interplay between AI and data protection law.


Transcript:

Tina Yu: Welcome, and thank you for joining us on our latest installment of Culture & Compliance Chronicles, a podcast series focused on data analytics and the behavioral sciences approach to risk management, brought to you by the R&G Insights Lab. I'm Tina Yu, a litigation & enforcement associate at Ropes & Gray. On this episode, which is the first of a two-part discussion, I am joined by my colleagues, Rosemarie Paul, Rohan Massey, Ed Black and Clare Sellars, to discuss data and behavioral sciences, as well as navigating data privacy. Rosemarie, would you like to introduce yourself to the audience?

Rosemarie Paul: I'm Rosemarie Paul. I'm a partner in the litigation group at Ropes & Gray and I specialize in financial services enforcement in the UK.

Rohan Massey: I'm Rohan Massey. I'm a co-lead in the firm's data, privacy & cybersecurity practice, based in the London office.

Ed Black: I'm Ed Black, a partner based in Ropes & Gray's Boston office. I'm global lead of the technology & data practice. I have a transactional focus - I work on deals in the technology field, including data deals.

Clare Sellars: I'm Clare Sellars and I'm counsel in the data, privacy & cybersecurity group at Ropes & Gray, also based in the London office. I also focus on transactions and deals a lot of the time.

Tina Yu: Thanks, everyone. Rosemarie, turning to you first, just to frame our conversation - why are we talking about data and behavioral sciences right now? Why are we linking behavioral science and data? Why is that becoming an important part of what corporations are doing nowadays? And how are we using data and behavioral science?

Rosemarie Paul: It's a really good question, Tina, and I think it's something that we can sometimes take for granted. The appeal and effectiveness of behavioral science are primarily based on looking at the way people actually behave, rather than a hypothetical idea of how one might expect them to behave. And to understand how people actually behave, data is key. We're in a universe now where there is more data available about people and organizations than ever before. Data basically provides feedback - whether through surveys or monitoring - and that gives us the building blocks to develop behavioral science methods that can engage with real behavior and achieve effective outcomes. Data also informs organizations' priorities, their resource allocation, their monitoring, and even who to check in with in terms of developing lines of reporting.

Tina Yu: Thanks, Rosemarie. What I find really interesting is that corporations are investing more and more in data analytics technology, and those platforms are growing by leaps and bounds in terms of their capabilities. So I think what it boils down to is that we're able to use data in more ways than ever before. We're able to play with the data. We're able to use it to spot trends. We're able to use it to identify risk areas that we might not even have been aware of in the past, just because of this increasing development in AI capabilities. And now, there is a lot of talk, obviously, coming out of all these technologies and platforms about big data and how it can be used. What do you think that means for behavioral sciences and compliance?

Rosemarie Paul: I think it's a really exciting time to be looking at it. We are, as you described, seeing massive growth in compliance-focused firms that are utilizing data, and I'm sure many listeners will have heard pitches from such platforms about the new techniques they can apply. The thing about big data analytics is that, again, because we have more data available than ever before, this vast amount of data can be used to gain insight from a variety of data sources. So, you can link source A to source B and help identify patterns that an individual on their own may not be able to, and that can be relevant for compliance as well as a host of other purposes. It's generally perceived as the way in which we can improve how we manage risk. The other aspect of these sorts of analytics is that they can respond really quickly - they're very nimble compared to traditional ways of approaching compliance analysis.

A lot of institutions and organizations are utilizing data analytics in their market monitoring programs to spot trends, and to identify spikes and unusual transactions. We're also seeing it with the regulators, who receive huge amounts of data through regulatory reporting requirements and are very openly saying, "We are going to synthesize this data with the information we already have, and we're going to supervise much more effectively because we're just going to know more. We're going to be able to put all of this data together to get a much more accurate picture of what's going on." We're already seeing it with the banks, who are using data analytics to achieve deeper regulatory compliance analysis. But of course, this has to be balanced against issues around transparency, how that data is used, and what sort of ethical and legal considerations we need to take into account, whether in the context of automated decision-making, the use of personal data, or the requirements of the GDPR and other data protection laws. There's also the question of where data is located, how it's processed, who's responsible for it, what we tell people about it, and transferring data between jurisdictions - I think Rohan's going to address that a bit later in light of the latest Schrems II decision. So, it's an exciting time, but it's a complex time, as everyone wants to take the benefits but also manage the risks around using this data.

Tina Yu: Thanks, Rosemarie. I agree completely. We will be turning back to the risk management side of data in a bit, but, Ed, right now I'm quite curious to get your opinion on the other side of data - namely, the commercialization of data more generally. We now see and hear about organizations using big data to determine and predict customer behavior, and putting that to use. How is that playing out in what you're seeing in your work?

Ed Black: In a way, it's just a commercial take on some of the trends that Rosemarie was addressing. First, against the baseline of trends over the past, say, five to 10 years, there really hasn't been a revolution of any kind in the math and statistical science. The places we've seen the revolution that has really brought data forward are, first, in the level of available computational capacity. So now, the computational capacity necessary to do high-end statistics is available to most people on their smartphones. In other words, it's a pervasive capability - anyone who has access to the data and the skill can start doing data analytics.

The second place we've seen a sea change is in the availability of the data itself. Ten to 15 years ago, if you wanted to do a large statistical analysis, a big portion of your challenge was collecting the data. Now, most human activity - at least most commercial activity - is carried out through automated digital platforms: day-to-day purchases, checking the weather, communicating with friends through social media. All of these digital platforms are automatically monitoring behavior. And so, over five to 10 years, we've gone from a world where gathering data was a challenge to a world where we are swimming in a sea of data, and regulating the use of the data is the challenge.

And then finally, between computational capacity and vastly available data, we've now seen an academic focus and a market focus on applying these tools. What's happened from a commercial perspective is that data, which for quite some time sat on what a company might call the liability side of its balance sheet - that is, collecting, storing and using data was always an issue where you needed to monitor compliance with regulatory regimes of one type or another to avoid liability - has now also migrated to the asset side of the balance sheet. Data is an opportunity to generate income. Data is an opportunity to generate equity value. We even have clients now who are collateralizing data and borrowing against data assets - all things happening on the asset side of a company's balance sheet. And so, suddenly, there's a huge uptick in commercial activity of all types, because there are now robust markets and applications for data that just didn't previously exist.

Tina Yu: Now that we're getting these vast amounts of data and we understand it's being commoditized, and organizations are presumably applying similar behavioral science theories to this data to understand how consumers think and to influence their decisions, their marketing, and so on - what are some of the changes or impacts that you see coming out of that transition?

Ed Black: Well, a few things. First, as I mentioned a second ago, the development of independent markets for data has created a situation where data can have its own value inside a company. Five or 10 years ago, if a shoe company were collecting data to help it sell shoes, people would say, "Well, that's important data, but the value is fully reflected in the revenue generated from the sale of the shoes. That data has no other value. We're already accounting for the value of the data." Now, we look at that very same data in that very same company and we say that data can be liquidated, sold, licensed and used in many other ways. Suddenly, it has a much more dramatic value on its own. You can sell it, borrow against it, do a series of other things, and all of this has migrated data into a whole new regulatory framework. So, now that data has its own value and is in its own transactions, you see a lot of government interest in making sure that the collection, storage and use of data is appropriately regulated in the particular commercial or industrial context. Health care data and financial services data are, of course, the widely known, widely observed examples, but we're also seeing the evolution of law around data in more commercial settings - day-to-day consumer transactions and company-level transactions. We have tax authorities around the world looking at data transfers as a potential independent stream of taxation revenue. And so, as data takes on independent value and moves into independent data transactions, the regulatory framework responds, and in each and every one of those contexts the regulatory environment is increasing and growing. In the places that have always been focus points - health care, financial services, government contracting and government activity, and so on - it's expanding quite dramatically, but it's expanding across the board as well.

Tina Yu: Thanks, Ed. That's really interesting to hear. We will be delving into the regulatory perspectives a bit more, but I want to stay in this area for just one more question. We're talking about regulation, and we're talking about this new world where we're swimming in all sorts of data with so many different uses and applications, but where do we draw the line? Where do we say, "Well, here's the information that I originally intended to give, and now here's the information that you, a company, have been able to glean through analytics and technology"? Where do we draw the line and say, "Well, this is what I consented to and this is okay, but when you cross this line, that becomes abuse of data"?

Ed Black: That's a great question. Of course, it's at the heart of the regulatory framework, and it's also at the heart of everyone's personal interest in the question. The fact of the matter is, if you have a smartphone in your pocket and you use it the way most people do - when I download an app and get the option to read the terms and conditions, I certainly don't read them in detail, and I don't think anyone does - then the world is already collecting information about all of your movements, all of your communications and all of your transactions. And you've probably consented to most of it in ways you may not even be fully aware of. So, there's a fundamental fairness question here that I think the political, economic, social and cultural communities have not yet fully answered.

Thinking, though, about the specific question of how we see this from a legal point of view, there are some bright lines that you cannot cross, and we already see recognition that if data is gathered illegitimately, in violation of one of these bright lines, there'll be an immediate reaction. For example, in the States, we're starting to see people looking at the way information is gathered for use in trading securities. We've seen a small string of cases, sometimes referred to as outsider-trading cases, where someone gathers information wrongfully - not in the old-school way, by approaching an insider and convincing them to give you secret information about a company (that's classic insider trading), but by hacking through a firewall and gathering information about a company without corrupting a human. You still have information you shouldn't have, and it's still harmful to the market for you to be trading on it, yet classic insider trading doesn't cover it because there's no human in breach of their duty. We've seen a couple of U.S. cases where outsider trading has been a basis for securities liability because of an illegitimate data source, and I think you're seeing similar trends in commercial uses of data. We've had a couple of cases involving web scraping, one now headed for the Supreme Court in the U.S., where the question is whether gathering data by scraping a website without the publisher's authorization crosses a bright line. At the end of the day, the larger question of how the world is actually going to create some basic rights around the avatar of each of us that exists in the data world is way down the road - it's a really interesting question, but the law is nowhere near it. Right now, we're just looking at a few of these bright lines in specific settings.

Tina Yu: Thanks, Ed. A lot of food for thought, it sounds like. I think that's actually a good segue into the risk management side of things with Rohan and Clare. Rohan and Clare, we've been discussing how data is, understandably, playing a very large role in our lives. How are we going to balance that with privacy and data gathering in this murky new world that we're all living in right now?

Clare Sellars: Thanks, Tina. I think that there is a tension between data protection law and the use of big data analytics, particularly for commercial purposes, and this brings up a whole host of questions. The biggest is probably: How do organizations make sure that they're fulfilling their obligations under data privacy laws while still making sure they have meaningful data for useful analysis? Who's the gatekeeper in determining whether data collection and analysis is proper, but still effective? What are the guidelines? What safeguards should there be? Rohan, what are your thoughts on that?

Rohan Massey: Thanks, Clare. It's an interesting one, listening to what Ed was saying, too - the fact that we have this sea of data being generated and we've got the ability to analyze it. The fact that our shopping habits, our checking the weather, our checking in with friends are all generating data that looks anonymous and merely statistical brings up an interesting question. Research has shown that if you have four pieces of anonymized credit card metadata, you've got a 90% chance of identifying the individual from those four pieces of data. So, if we're thinking about these massive data sets people like to use, I think we really do have to be thinking about them as personal data as well, because the size of these data sets makes it so easy to link them, to analyze them, to run computational analysis on them, and to identify people, that we must treat them as personal data. And we must be giving individuals the right information about how we're collecting their data, what we're doing with it, what the requirements are for keeping it secure, and so on. Obviously, in Europe, we have very strong data protection regulation under the General Data Protection Regulation, and I think that protection will need to carry on and become more robust in the future.

Clare Sellars: So, in terms of transparency and letting people know what you're doing with their data, what sort of things do organizations need to tell people?

Rohan Massey: I think people need to be informed about what data is being collected and what purposes it's being used for - basically, if it's going to be used for profiling or automated decision-making, people have to be aware of who's got their data and what they're doing with it. If you put that information out there - even if, as Ed said, some people won't read it when they're downloading apps, they still have the opportunity to read it - then in certain circumstances you may also need to seek their consent to do these things. Now, if you are seeking consent, there may be limitations on it: you may only be able to rely on that consent until it's withdrawn, and it must be capable of being withdrawn in order to be valid. So, there are certain limitations around it, but again, it's something that will need to be kept in mind by those looking to put the data to a particular purpose.

Clare Sellars: That's a very good point. So, in terms of what organizations can actually do with data they've collected, once they have this data, are the purposes that it can be used for limitless? How widely can data be used? Are there any limitations on what can be done?

Rohan Massey: There most certainly are. Under the privacy regimes in most jurisdictions, there is a purpose limitation. One of the attractive features of these huge data sets is the ability to use the data in a number of different ways, and certainly in novel ways - engineers will always look for novel ways to use the data. But where it's personal data, it is very clear there is a purpose limitation: it can only be used for the explicit and legitimate purposes for which it was collected, and it can't be repurposed unless further notice or transparency has been given to the individuals so that they understand it's going to be used for a new purpose. Any scope creep - using data collected for one purpose for another purpose - should, I think, be avoided at all costs.

Clare Sellars: Yes, I think it's very tempting for organizations to let scope creep occur with data sets if they're not very careful, because data sets are obviously such a powerful resource and can provide so much useful information and insight into the behavior of individuals. What other things do you think organizations should keep in mind to make sure they're compliant when using these huge data sets?

Rohan Massey: I mean, there are a number of things. To name three or four of the key ones: one, keep all the data you hold secure - that means taking both technical and organizational measures to ensure its security. Two, don't keep data for longer than is necessary. There's often a desire to keep data and keep accumulating it in the hope that you can use it for something in the future - that links back to my last answer about purpose limitation, so don't keep it for longer than you need it. Three, make sure it's accurate. There's nothing worse for analytics than inaccurate data, so make sure the data is accurate and reviewed for those purposes. Then there's the data protection impact assessment: if you are collecting data that you think is non-personal but that could be personal, carry out an impact assessment to make sure you know how you're using that data and that it's being used lawfully. And finally, if you are using personal data, think of the individuals - they have rights in that data. How will you respect those rights, whether it's providing access and information to individuals, deleting the information, or even transferring it to a third party? You have to consider those points as well.

Clare Sellars: I think that's a very interesting point, particularly as people, especially since the advent of the GDPR, are becoming increasingly aware of those rights and they're not afraid to use them. So, that's definitely something that organizations are coming across more and more, I think. Moving on to international transfers - no discussion about data protection would be complete without touching on this. It's been an increasingly hot topic, and even hotter since the CJEU's decision in Schrems II. So, what about transmitting data around the world? What should organizations bear in mind now in light of that decision when transferring personal data outside of Europe, do you think?

Rohan Massey: Well, if it's personal data, nothing has really changed. The position has always been that, in Europe, you can use personal data and transfer it around Europe as long as it's protected, and you can transfer it outside of Europe as long as it has adequate protection. Now, the Schrems decision has challenged a couple of the mechanisms we use to ensure that protection is given to data when it is transferred out of Europe. One was the Privacy Shield between the EU and U.S. - that's now been found invalid and can't be relied on. The other was the use of standard contractual clauses, where two organizations, one in the EU and one outside, contract with each other to say they'll protect the data adequately. These were found to be valid, with an asterisk or qualification: those in Europe need to check that there are appropriate safeguards in the jurisdiction to which the data will be transferred, so that it will have the protection it would have were it in Europe. That's going to be a challenge, especially with the U.S., because we know that Europe finds the U.S. not to be adequate, and the threat of government surveillance operations in the U.S. makes this a real privacy challenge. So, there will be a hard tussle there, I think, for organizations. If you can't use the Privacy Shield, there are other derogations, but I think they'll be challenging to use in a big data analytics space, mainly because the use of the derogations is limited to infrequent, low-volume transfers. There may be the ability to get individuals' consent to particular transfers, but we'll have to keep an eye on this space going forward.

Clare Sellars: Yes, that's certainly going to give organizations a lot of food for thought about how best to make sure data is protected when it's transferred internationally. But increasingly, organizations are using artificial intelligence to collate data, analyze it, and make decisions based on it. What do you think the impact of artificial intelligence is, and how does it affect the interplay with data protection law?

Rohan Massey: I think one of the big challenges coming out of the use of artificial intelligence is being able to give individuals accurate, transparent notice of what is happening with their data, so that they can be confident they know how their data is being used and processed. Artificial intelligence and machine learning, and the algorithms behind them, are incredibly complex and often commercially confidential because they confer market-leading positions, and that becomes a real challenge when you have to balance it against open notice and transparency. So, to me, that's one of the big areas of concern going forward.

Clare Sellars: Yes, I agree. And as we touched on earlier, there's also the issue of automated decisions and profiling, which can often be carried out through the use of artificial intelligence. So, if an organization is using AI, how important do you think it is to explain to individuals how those decisions are made?

Rohan Massey: I think it's critical. And the level of granularity at which you have to do it is also very important. We've seen that the European regulators really focus on granularity, so that does become a challenge, because you have to balance commercial sensitivity with openness, and there's always going to be a tension there.

Clare Sellars: Yes, definitely, I agree. So, finally, what are some practical things organizations should bear in mind to help them comply with data protection regulations when they're collecting and analyzing big data for commercial purposes in particular?

Rohan Massey: From the outset, you've got to be clear and tell people what you're doing. So, have a clear, transparent, well-drafted notice so that people are aware of what's going on. Where consent is used, make that clear and simple as well. Where you've collected a large amount of data, think about exactly what you need it for, bearing in mind the purpose limitation, and streamline your processes so you're only collecting what you need. And looking at trends going forward, also think about where you're hosting the data. We've heard a lot of talk recently about data localization and limiting international transfers of data - that may be a problem going forward. So again, something that I think all organizations should bear in mind is where they host their data.

Tina Yu: Thank you all so much. And thanks to our listeners. Please stay tuned for part two, where we'll discuss using data ethically. For more information, please visit our website at www.ropesgray.com. And of course, if we can help you navigate any of the issues we discussed, please don't hesitate to get in touch with us. You can also subscribe to this series wherever you regularly listen to podcasts, including on Apple, Google and Spotify. Thanks again for listening.
