Training is an important part of any compliance program. Rarely, however, do we stop to meaningfully assess its effectiveness. What are people actually getting out of these sessions? What are the goals of the training, and have those goals been achieved? And how can social science research inform how we design and evaluate training? On this episode of the Better Way? podcast, co-hosts Hui Chen and Zach Coseglia, along with the Lab's behavioral scientist Nitish Upadhyaya, explore ways to reimagine training in order to achieve holistic outcomes for the organization. Along the way, they tackle the challenges of remote training, consider how emerging technology could revolutionize the training landscape, and reflect on how training offers organizations a unique opportunity to listen to employees.
Transcript:
Zachary Coseglia: Welcome back to the Better Way? podcast, brought to you by R&G Insights Lab. This is a curiosity podcast, where we ask, "There has to be a better way, right?" There just has to be. I'm Zach Coseglia, the co-founder of R&G Insights Lab, and I am joined, as always, by my friend, Hui Chen. Hi, Hui.
Hui Chen: Hi, Zach. This is pretty exciting because, first of all, it's the first time we're actually in the same room recording this. And secondly, we're talking to our wonderful colleague, Nitish.
Zachary Coseglia: Nitish, welcome back.
Nitish Upadhyaya: Hello. Thanks for having me back.
Zachary Coseglia: We are so happy to have you. We're coming back to one of the topics that we touched on, at least, a little bit in your last discussion with us, and that is training/education. And, Nitish, I actually just want to say at the outset, we say the word, "training," and I almost cringe, a little bit, at the word, "training," because it feels like a very generic concept, in comparison to what we actually want to see folks doing, when it comes to corporate education. Definitions are important, so let's define those terms, Nitish. What is training? And, frankly, is "training" even the right word to use here?
Nitish Upadhyaya: It's such an interesting perspective, because, I think, training does get lost in 20-30 years of the same sorts of things happening and people having the same ideas in their toolkits. When you think of training, what do you think of? You think of online, five-minute, 10-minute seminars that you do. You think of click-through, multiple-choice questions. And sometimes, you think of these boring sessions that you have to go to on a mandatory basis to, again, get the tick in the box. That's what's ended up being our vision of training. What training should be is this amazing opportunity to help with learning, to help with development of an individual, a team, an organization, and to really improve their effectiveness. It's not just about gaining knowledge—it's also about thinking about skills, attitudes, and behaviors. I think, often, that aspect gets lost in this hunt for standard education, rote learning, "We tell you this—you'll remember this." And no one really thinks about the next step, which is that we get told things, but as human beings, we don't necessarily do them.
Zachary Coseglia: It's almost like the word, "survey," which I also sometimes cringe at, although at the end of the day, I feel like using surveys or questionnaires to collect data is incredibly important. But as with surveys, it's almost as though the cringe factor, or some of that initial negative feeling, is actually driven by the fact that surveys and trainings just sometimes aren't done very well, and that's then colored our perception of the entire concept.
Hui Chen: Exactly. And like you said, Nitish, it can mean so many things based on people's own experience. Unfortunately, certainly when it comes to trainings related to ethics and compliance in a corporate setting, they have not been pleasant experiences for most people. I know I personally have gone through trainings that really just felt like torture.
Nitish Upadhyaya: It's so sad, isn't it, from my perspective, when you think about that. Those experiences that you folks have had, and lots of listeners will have had, as well; training is meant to produce learning—that's the outcome—and so often, it does the absolute opposite. It turns people off at an organization. It gets people not listening when they should be. I think there's also a lot of missed opportunities with the narrow framing of training that people use (i.e., always a teacher at the front of a classroom lecturing to a set of participants), and there are more things you can get out of training than people might believe.
Hui Chen: What are those other things that people can get out of training?
Nitish Upadhyaya: It's one aspect in a very complex environment, especially when it comes to organizational culture. And so, just as much as there's a chance for the organization itself to profess its values, how it behaves, what it expects of individuals, it's also a chance to listen to people and to really hear what employees are saying, especially when it comes to in-person training. How often is it that people are all together in a room, talking about some of the gray areas in their business, things that are difficult for them, or things that they're trying to grow and develop in? That's as much a chance to really engage with that population, to create a community, to answer challenging questions, and to have a conversation rather than just have someone set forth a set of principles.
Zachary Coseglia: Nitish, let's take a little bit of a step back and talk about your approach to training or some of the Better Ways that you've observed in your work and in your research in this space.
Nitish Upadhyaya: I think, at heart, I'm a facilitator. And so, whenever I am thinking about training, that's what I'm thinking about: a facilitated experience, a workshop, a way of engaging individuals, getting to understand what they actually want out of a session, what their objectives are, and then taking them on a journey to get there, building and layering content in a way that, again, isn't just giving them everything or hundreds of words on a slide, but through experiences, through practice, allowing them to fail and to learn in a safe environment so that when it comes to the real thing, they're able to make better decisions (their behaviors might have changed, their attitudes might have changed). Ultimately, training is such an intensely human experience that, I think, we reduce it and we end up giving it very short shrift, if all we're doing is a one-way dialogue with individuals.
Zachary Coseglia: We talk a lot in the Lab about human-centered compliance and putting the person, the human being at the center of a lot of our analysis. That sounds pretty core to the way that you think about training, that is, "Let's not forget that there's actually a human being that's sitting in that room," or, "There's a human being sitting on the other side of that computer screen, taking this training."
Nitish Upadhyaya: Absolutely. We're not robots. We're not sitting there with a chip, disk, or a USB flash drive, and it's not, sadly, like The Matrix, where you plug in and suddenly you've learned everything. It's not just about knowledge. There are all of these other nuances that you are trying to influence. And don't forget, as a trainer, you are also combating all sorts of other influences and levers that are going on with your participant. It might be on the day—their phone is dinging or their brain is somewhere else because they've got a personal crisis going on or something is happening at work. But it's also all the other levers that have been pulled on them prior to the training, from the way in which their boss behaves and the way in which the organization sets out its communications, to the things that are done after the training itself, where suddenly, they go straight back into their day job, they forget, and they don't have a chance to practice what they've learned. The context in which that training was meant to be used doesn't pop up for six months, and by that time, all of that information that they could have used to solve a problem is no longer relevant or no longer within their immediate recall.
Hui Chen: I've had opportunities to talk to employees about their compliance training experience, and I'm going to tell you what I have heard and ask you to react to them. One is, "Compliance training is never about the gray areas. It's just them telling us what we can't do. It's never about what we can do, things that we probably can do, things that we can do differently—it's about none of those." As one person recently said to me, "It's never a discussion. It's a one-way prohibition. It's a list of, 'No, no, no, no, nos.'" Another related comment is, "The training scenarios that the compliance people provide have nothing to do with my business realities." When I heard this, I asked, "Do you say, 'But this scenario is not going to work in my daily life?'" They said, "No." I said, "Well, why not?" [Their response was], "It's not a discussion because I'm going to say, 'It doesn't fit my realities.' And the compliance people would not know what to do, in response to that. That would be the end of that, or they would just continue to reiterate the nos." And so, that's one set of common complaints that I hear.
Secondly, a lot of the compliance topics are very U.S.-centric. So, take a very popular compliance topic: anti-corruption. Most countries in the world have laws against corruption. Most countries in the world do some form of enforcement in their own countries. Many employees of multinational companies outside of the U.S. tell me that they never hear about how those laws are enforced in their own country. They only hear about how those laws are enforced in the U.S. and, frankly, that really doesn't matter to them. That's another instance where I feel like people are not thinking about those individuals that are listening and taking the training—they are really just coming from their own perspective. Related to that are multiple employees saying, "These trainings make me feel the company doesn't care about me. It just wants to protect the company. All they want to do is be able to say that they've done this, and if anything happens, they would throw me under the bus." I want to see, given the work you've done in this area, how you react to some of those comments.
Nitish Upadhyaya: I react with horror because this is exactly what training should not be. It should not be that feeling of being pressed upon or being targeted. It should not be a tick-box exercise, but something genuine that people care about and want to learn, that helps and supports them to do something well in their jobs. Taking all of those points in turn, let's start with the idea that training is not relevant to the business realities that people face on the ground. Hui, you and I have done quite a lot of training, where we've spoken to business people, sales people in particular, and gone through some really difficult gray-area scenarios with them, where they have started to ask questions about materiality. "Does this matter? This is a pattern. This is how I generally treat my clients. Is it okay if I have a dinner in this way? These are the cultural norms in this jurisdiction—how does that matter?" All of those questions, whether or not the training gives space for them to be dealt with, are being asked by your employees, especially the ones in the key risk areas, so sales, maybe procurement because they can spot things, maybe finance because they're looking at receipts and doing some accounting or audit processes. All of those individuals have those questions, so why not, in a training session, provide a forum to have those conversations? I think a lot of it stems from compliance teams sometimes being quite scared of having those frank conversations, or not knowing their subject matter well enough to say, "Here are the bright lines, here are the gray areas, and here is where we're going to be able to support you in the business."
Zachary Coseglia: This idea of having a discussion is really compelling. I think we've seen in practice how having a discussion gives people a forum to live in that gray area in a safe way, so that they're able to address it in the places where it may be less safe, when they're actually confronted with a potential challenge. That's wonderful—but that only really works in a live training session. So, how do you create that kind of dialogue when you're not able to do live training for all of the people who need it?
Nitish Upadhyaya: It's one of our big questions, especially since the pandemic, with hybrid working and people really being able to work from anywhere in the world. The aim is to engage your people before a training session or a delivery of content or material. The aim is to layer this so that it's not just live sessions, but live sessions interspersed with other material, all of it combined serving to reinforce, to inform, and to raise awareness of a particular situation. Let's think about how this actually works, say in the anti-bribery and corruption context, where we haven't necessarily got time for live sessions or to bring everyone there in person. First, you want to find out what stories your employees are telling at the water cooler about corruption and bribery, what they're facing on a day-to-day basis, and what their questions are. And so, you seek to capture those stories and to really understand what needs to be trained on in the first place, what your people are even asking, before you start designing training. Often, people start designing the training around the rules. They will look at the tenets of the FCPA and they'll work through a PowerPoint presentation, rather than thinking about what their specific teams need and the specific contexts of their roles. So, that's the first bit: it's a risk assessment and a needs assessment, but it's done with your people in mind.
Zachary Coseglia: What I'm hearing from you, Nitish, which is really consistent with other Better Ways that we've talked about with you and with Caitlin Handron, the Lab's cultural psychologist, is that in the absence of being able to have that dialogue in the context of a training session, we essentially need to be having that dialogue around the clock, and we need to find ways to collect information from our employees so that we understand what they need when it comes to training and can design around that. But we also need to be collecting those stories so that as we're developing training, we're not training around "the rules," as you say—we're actually training on the real-life scenarios that define how business is done and how the teams operate.
Hui Chen: I think that requires something that we've talked about a lot, which is listening. Recently, I talked to employees in a company, and they do a lot of these scenarios in their training exercises. So, I asked the sales folks, "How relevant or realistic do you think these scenarios are?" And they said, "About 70% of those are totally unrealistic and irrelevant to me." The problem is, these are sales scenarios—they're dreamed up by compliance people imagining what a salesperson is like. They didn't come from talking and listening, in a condition of trust, to the salespeople about what they're really facing. That's something they also brought up: "I would raise questions if I trust that you're going to be able to work with me through this and not hold what I say against me." And that condition of trust is often missing.
Zachary Coseglia: Nitish, why don't you address that, both the condition of trust and the other point that Hui made earlier, which I think is reflective of distrust, this idea that the training is really there to protect the company, not to protect me. I think those two concepts very much go hand in hand, so, how do we address those things?
Nitish Upadhyaya: I think part of it is how compliance bills itself to the business. Often, in their words, they can say, "We're here to support you. We don't want to stop your business being done. We just want to make sure it's being done in an ethical way." But the way in which they conduct themselves, whether it's through investigations, or communications more generally, or the stories that people tell about compliance, can often end up painting them as a blocker to doing jobs, something to get around, rather than people to be worked with to do things in the right way. The best compliance officers that I have worked with and spoken to are people who are embedded on the desks, people who genuinely understand the business that salespeople are doing—they understand the subject matter and they understand the pressures. Coming back to a point, Hui, that you raised before, they also understand the cultural norms in which that salesperson is working in a particular jurisdiction. We often have this conversation, Hui: What is a bribe? What does it mean? A type of bribe accepted as commonplace in one jurisdiction might not be so in another. Those are the nuances that a compliance officer has to navigate, but also that a businessperson has to navigate on a day-to-day basis. Don't forget, on top of dealing with those clients and those norms, they are also being assaulted by incentives, targets, all of the things that they're hearing from their management. And so, even though a training exercise might be saying one thing, everything else, all of the other voices in the organization, may be telling them to push in a different direction, to bend the rules, or to do something else. Whether they have a friendly compliance officer they trust, one who has spent the time building that trust and has been consistent in their messaging and their support, will change, I think, how someone makes a decision.
Zachary Coseglia: Do you think that we can create effective training that is of the e-learning sort, that is not live, that is not facilitated, that doesn't allow for interaction between the facilitator or facilitators and those being trained, but where it is computer based? Can it be done?
Nitish Upadhyaya: I think it can, as part of a wider program. This is the point here: training is not a one-shot activity. It is not a magic bullet, but it is supported by everything else, the ecosystem that you put around it. And so, if everything is moving towards the same goal, if all of the messaging is working in that direction, and you're also able to sense and respond to pockets of disruption or issues that are coming up, which are pushing against that messaging, then you're in the right place. E-learning, interactive, engaging, choose-your-own-adventure style e-learning, can definitely make a difference in helping people think through some of those scenarios, as long as what they're seeing, as Hui said, is realistic for their business. The second thing that e-learning allows us to do is to personalize the training to specific job roles, and that is one of its great strengths. If you are logging on as a front-of-house person in a company of 5,000 or 50,000, different training is applicable to you compared to a front-line salesperson, someone who works in catering, or someone a little more junior who runs the finance program. You have the ability to turn these levers and use the data that you've fairly collected about someone to shape what they learn and how they learn, to change their learning pathways, and then to know precisely what they need to build on and work on for the next stage. For some people, maybe 50% of the organization, once a year is all they need, as long as it's context-specific and personalized. For others, however, it's a wonderful chance to gather data, and to then start targeting behaviors and areas that are not necessarily being thought about properly, or to get to a position where you're repeatedly touching people and layering those knowledge aspects, skills, attitudes, and behaviors.
Zachary Coseglia: Nitish, that answer very much focuses on the content of the training. You have this wonderfully unique perspective in that your expertise sits at the intersection of law, behavioral science, and technology. And so, I'm interested in whether, separate and apart from the content, you actually see opportunities to use technology itself to create a better, more effective training experience?
Nitish Upadhyaya: Absolutely. The possibilities here are endless. I've heard a lot of people talking about the power of generative AI, and there is a lot there. People are jumping to the idea of generating content. We don't necessarily need more content. Mass-produced PowerPoints are genuinely the stuff of nightmares. We shouldn't be using new technology to take well-trodden paths and just scale them to an extreme—that's not going to work. It is genuinely horrifying, if that's how people are thinking about using technology. We've got to think about this differently. We want to create thoughtful experiences that create or support skills, attitudes, and behaviors. What might that look like? Well, we can go really far into the future and start thinking about VR and AR, and how virtual reality and augmented reality can help put us in some of those difficult situations—there are some ethics considerations here—which we could otherwise only hypothesize about. There was a really interesting study done with generals in the U.S. Army—these are the people that have their finger on the nuclear trigger or are involved with that big red button. And so, they created a scenario in virtual reality, where they sent these generals through all of the procedures that they needed to go through. "You've just heard this piece of information. You need to make a decision. What does it look like?" and the cascade of factors that come after it. As people came out of that, for the first time, they really understood. Hopefully, it never happens, but before that situation occurs, they were able to step through all of these problems. Some people froze. Some people didn't really understand the procedures. Some people were too quick to rely on their biases, rather than thinking about all of the information in front of them. That's a really powerful tool to help immerse us in a potential situation, and, I think, that is going to be a game-changer, again, if we use it in the right way.
I think the other thing is the use of data and our ability now, with increasingly complex data models, to start to link up some of these siloed pieces of information. We talked a bit about measuring training, and we can come back to that in a second, but what are all of the qualitative things that we talked about? Stories that people are telling, the fears, the issues that they have in the organization—but also, the quantitative side. What's the data telling us? What are the whistleblowing cases about? What are the reports to compliance about? What are compliance risk assessments saying? If we can feed all of that into the correct sort of model, we can start to raise some of those red flags that allow us to intercept issues before they actually arise. And so, this is the needs assessment portion of the training. You can actually target training where it's needed. We talked a little bit about the delivery side, we talked a lot about the content in the middle, but how do you even know what that content should be? Well, technology has a really important part to play in that aspect, as well.
Zachary Coseglia: Nitish, I want to pull on a thread that you just introduced around measuring your training and its effectiveness. I know that both you and Hui have strong views on this, as do I. I've been doing this work, obviously, for a very long time. And yet, to this day, in 2024, we still see folks measuring their training in fairly rudimentary ways—measuring its effectiveness by something as simple as whether people took the training: completion rates and a variety of other metrics around completion.
Hui Chen: Honestly, what you've been talking about, Nitish, is a lot of work. It's a whole lot more work than just buying some training off the shelf, rolling it out to everybody, and in 15, 20, 30 minutes you're done. This is listening. This is trying to figure out the different cultural contexts, the different job roles. This is gathering data. This is an awful lot of work. The only way I think a lot of people would be persuaded to go this route is if they can be convinced that this actually yields better results. And it's not just better results in terms of, "Everybody liked our training. They think it's cool." It's better results in terms of whatever outcome they have defined. So, that really puts the pressure on the ability to evaluate effectiveness, because why would somebody go through all this work otherwise?
Zachary Coseglia: It does. But the one thing that I want to add is a little bit of a caveat, or to push back on that just a little bit. While there is work here, for sure, what we also know is that everyone is doing training—and that doesn't just apply to ethics, compliance, and risk management. We're talking about HR-related training. We're talking about diversity, equity, and inclusion training. We're talking about other operational training, job training. I tend to imagine a world that looks a little bit different from the one a lot of people imagine, and I can't imagine a world where there isn't training. While, yes, there's work here, we are all spending tens of thousands, hundreds of thousands, probably millions of dollars on corporate training. And so, the question is: Are we spending our money in the right way and in the right places? It's not so much about, "Here's all of this additional stuff that we should be doing." It's, "Here's all this other stuff that we could do instead of some of the investments that we're already making." And so, I think, at the end of the day, the argument here for a more disruptive, more modern approach to training isn't, "Let's spend more and do more." It's, "Let's spend smarter." And I think that that's what the data ultimately helps us do. So, Nitish, we've had a conversation here. What do you think it means to measure the effectiveness of training, and how do we actually do that?
Nitish Upadhyaya: It's a question that lots of people have tried to tackle over the past several decades. And you're right, a lot of them do end up with attendance records and attestations—that's what people wave in front of everyone at the end of the year. So, I went back to the literature, and I teased out two important concepts. One takes the more macro perspective, which is assessing the effectiveness of the training system. And one is much more micro, to Hui's point: Did the specific training, or suite of training programs, actually have the desired impact on those skills, attitudes, behaviors, or whatever you were training for? I think people often get these things mixed up, and they also don't necessarily allow for the complexity in both.
First of all, training system effectiveness. Is training reinforced by the organization? What is the culture around training? What opportunities do people have to practice the situations or the learning that they've had in the actual session itself? All of that is incredibly important. We're not really evaluating what that ecosystem actually looks like, so there's work to be done there, but it actually creates a feedback loop because it tells you how you support the people that have been trained and what else they need from you. There's no point in training someone on a policy and waxing lyrical about it if it's then buried seven pages deep in the intranet, but you never know that—the person tries to go and find it, and they can't. So, is the effect of the training there? Yes, maybe, until the person actually tries to find the policy when they get to a situation. But is it effective for the business? No, because they can't make the decision based on the policy, because they can't find it in the first place. So, I think that's really important.
The second bit, I think, is the more traditional piece that we're talking about, which is, "How do I know whether or not my content and my delivery actually landed, and whether people moved from their baseline to the knowledge I want them to have?" I think people often think that this needs to be really scientific—especially behavioral scientists get very excited about setting a baseline and a benchmark, having a control group, and then measuring how individuals move up a level. It doesn't have to be that complicated, and there are much cheaper and easier ways of measuring this outcome. First of all, you need to think about whether or not it even matters where people started, compared to where you want them to get to. As long as people get to that next stage, where everyone understands, then we're probably in a better place for the organization, and you can do this in two ways. First of all, again, and I come back to this a lot, there are the stories that people are telling about the issues. Are they now able to articulate how they would solve them, how they would make a decision, how they would think about things in a different way, and are they spotting some of those red flags? You can observe that over the course of their next six months in the organization. The other bit is the quantitative data. Again, I talked about whistleblowing—what are the compliance reports, and what are people talking to compliance about? All of those are indicators of what happened during the training session, because you can create hooks in the training session about what people could be doing and what resources they could be using, and then try to understand whether or not they are, in fact, using them. If you're moving people in that right direction and you're changing the stories that people tell, you're getting there. If you're then changing the system, not just targeting the individual, then you're giving the training the best chance of success, both from an individual's perspective and from an organizational perspective.
Zachary Coseglia: I think about this from a real practical perspective, knowing what a lot of our clients and a lot of our listeners probably have, in terms of data. Part of the reason why folks are using the completion rate is because that's the data they have. But what you often also hear is that folks are doing some amount of knowledge check during the training. I by no means want to suggest that that, on its own and without more, is how you measure the effectiveness of your training, but the truth is that a lot of folks are collecting data from those knowledge checks and not doing anything with it. And so, that, to me, is a great place to start: Let's just start using the data that you have, beyond completion rates, to start looking at what people are learning. Then, what you hear is, "We have a knowledge check at the end, and people have to get 100% to pass. And everyone got 100%." To which I would say, "How many times did it take them to get to 100%? Those are usually multiple-choice questions, where, by process of elimination, you're eventually going to get there." So, let's start looking at how many times they got each question wrong before they got it right. When you start breaking up that data, I think you might see really interesting things.
I remember working with a client not too long ago, where they started breaking it up by question and by how many attempts it took to get the question right, and then they started breaking it up by geography and business unit. What they ultimately saw was that the people who were getting the questions wrong the most were actually in the legal department, because they thought that they knew all the answers and were just powering their way through the quiz. That tells you a couple of things. One, people may not be paying attention, because people who know better and who probably would have gotten the answers right got them wrong. Or it may tell you something else: that your training was confusing, or that the questions you were asking were confusing, because these folks who are trained and should be expected to know this stuff without the quiz were getting things wrong. And so, at a very practical level, I just encourage folks to start using what you have, as an initial matter, and then we can start talking about all of the Better Ways that you can explore to do this even more effectively.
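For readers who want to try this kind of breakdown themselves, here is a minimal sketch in Python with pandas. It is not from the episode: the attempts log and its column names are hypothetical stand-ins for whatever your learning-management system exports. The point is simply that attempts-before-correct, sliced by question and by segment, is a much richer signal than a completion rate.

```python
import pandas as pd

# Hypothetical LMS export: one row per attempt at a quiz question, with an
# employee's rows for a given question ending at their first correct answer.
attempts = pd.DataFrame({
    "employee_id": [1, 1, 2, 3, 3, 3, 4, 4],
    "department":  ["Legal", "Legal", "Sales", "Sales", "Sales", "Sales",
                    "Finance", "Finance"],
    "question_id": ["Q1", "Q1", "Q1", "Q1", "Q1", "Q1", "Q2", "Q2"],
    "correct":     [False, True, True, False, False, True, False, True],
})

# Attempts each employee needed on each question: a count of rows up to and
# including the first correct answer.
attempts_needed = (
    attempts.groupby(["employee_id", "department", "question_id"])
    .size()
    .rename("attempts")
    .reset_index()
)

# Average attempts per question, broken down by department. A persistently
# high average flags either an inattentive population or a confusing question.
summary = (
    attempts_needed.groupby(["question_id", "department"])["attempts"]
    .mean()
    .unstack()
)
print(summary)
```

The same groupby extends to geography or business unit once those columns exist in the export; the analysis itself stays a few lines long.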
Hui Chen: I'm going to run a couple of thoughts by you. One is testing out. If what you're evaluating is my knowledge at the end, why can't I test out? Give me the test in the beginning—if I pass your test, then I don't have to sit through this painful experience. So, that's one thought. The other thought is integration of training. I've always said that in my ideal world, there would be no separate compliance training—however you get trained for your job, compliance is part of it. So, I would love to hear your ideas on testing out and training integration.
Nitish Upadhyaya: Testing out really works well where something is knowledge-based: Do I have the requisite knowledge? In some cases, if a company is making widgets or you need to get to a certain threshold, it's really not a problem. Can you really test out on attitudes and behaviors? That's a different story. So, we talked a bit about anti-bribery and corruption training, and you could have a whole range of objectives. You could be raising awareness. You could be talking about ethical dilemma situations: How do you act responsibly under pressure? What's the right thing? I know there are lots of conversations about that. What is it that you're trying to achieve? Then, you think about what those exercises actually could look like. They might not be knowledge exercises in the first place. You might be measuring people's attitudes towards corruption—that's a different way of thinking about that topic. How do people react when they play games that involve elements of bribery, and what do they think is the right thing to do in this situation? Those sorts of methods don't necessarily, I would say, lend themselves to testing out, because knowledge is only one component of the overall picture. Where knowledge is the objective and people can test out, don't waste their time—give them a good experience that'll make sure they listen and turn up to the next event, and the one after that. And that's the other thing with training: if you lose someone early on, they're not going to be paying attention the next time they come to an event, and you've already lost your chance to change those behaviors.
Zachary Coseglia: I think that we've seen some momentum toward testing out over the course of the past five-plus years. I feel like testing out is often a really attractive solution to not-so-effective training. It's, "We're going to do this very rote, rules-based, knowledge-focused training, absent any behavioral or social norms, and we're going to let you test out of it if you can get these questions right." But what we've also seen over the course of the last several years, kind of ironically, is a shift from rules-based to values-and-principles-based compliance. And now, I think we're very much pushing cultural, contextual, and precision compliance, and testing out doesn't work so well when you're operating in that space, as opposed to a very rules-based one—just to reiterate and very much agree with what you shared, Nitish. So, integration/integrated training?
Nitish Upadhyaya: It's a simple answer there, because as much as I know it's difficult to collaborate and to bring some of this compliance work in, what better time to flag the issues and the risks that might be specific to a jurisdiction or the type of product that you're selling than the exact point when you're learning about a new product, let's say, or a new market that you're going into? What better way to game out the risks for your business, including the risk of not meeting its targets in that area, than when you're actually training people on a specific pharma product that they're selling, or something that they're going in to do on the MedTech front, or financial products that they might be pushing? At all of those times, people are turned on because they're incentivized to be turned on. They are excited about the opportunities that are coming forth. And so, it's the best time, within the right context, to give them a sense of the risks, as well as the benefits and the exciting things that are coming their way.
Zachary Coseglia: It's interesting, Nitish, because on this point of integration, we were just having a conversation recently with a client in the context of a compliance program review. One of the things that we do in those reviews is we always try to not just talk to risk professionals—compliance, legal, audit, HR—but to also talk to the business, to talk to those who are closest to the risk. We were recently talking to a senior executive who has responsibility for South America for their company, and the topic of training came up. One of the concerns that this individual raised was that the training was good, but it didn't have enough cultural context. It felt like a global training. It felt like something that was intended to be broadly applicable, when it might actually be more effective if it were something really more real for their people in those markets. Then, we pitched the idea: "What about integrated training? What if the training wasn't a separate compliance training, but when you train your people to do their job, we just also train them on compliance? Then you'd have this singular message, and it could be tailored to the context, culture, country, and the business that it's being delivered for." And it was just this really wonderful response of, "You can do that?" It's just that that's not the way it's typically done, or the way it's always been done. Of course we can do that.
Nitish, we just have a few minutes left. To sum things up, what key things do you think people should bear in mind when they're next looking at their training program and looking to, maybe, take it to the next level?
Nitish Upadhyaya: When I'm thinking about training, I do start with a quote from Heidi Grant and Tal Goldhamer. They wrote in HBR a few years ago about learning programs in the hybrid workplace, and they said, "Great learning is about great design, design that considers how human brains actually encode and embed information." That sums up, for me, this point about making sure that your training is human-centered and really well designed—I don't necessarily mean pretty pictures, but thoughtfully designed, and that's incredibly important. It's not one-size-fits-all, and it's certainly not just one shot, especially when it comes to your key risk-takers: use multiple methods and multiple channels to reinforce the outcomes. Use the power of storytelling. Sometimes, those funny stories, the stories from the CEOs where they're caught in a tough situation, make for the best learning experiences, because they are honest, they are sincere, and they stick. Focus on systems, as well as your people. I've got a two-year-old daughter. I didn't teach her language. I didn't sit down in a classroom and teach her how to speak or how to behave, necessarily. She models the behavior that she sees me doing and what she sees my wife doing. I can tell her to do one thing, but if we are doing something totally different, multiple times, that's what she's going to do. And that's the difference between telling someone something in training and their observing something in the business—what they observe is what they're going to end up doing. My final point, which I think people are generally uncomfortable with, is just to give people the chance to fail during training. Allow them to make those mistakes. I don't mean multiple-choice mistakes, but in a gray-area situation, saying, "I would have done this," and someone responding, "Actually, that's not right." That's okay—they're making the mistakes in a psychologically safe environment, where we can help them to plumb in new ways of decision-making in their brains, and they feel like they've learned something along the way and it's been beneficial.
Zachary Coseglia: I just want to add on that last point about failure, about making mistakes. I think there's also an opportunity for us to take some risks and do some experimentation when it comes to how we can do our training more effectively. You don't have to roll out a disruptive or different take on training to 500,000 of your employees. Let's find a subset. Let's do some experimentation. Let's do some data collection. Let's see if it works, how it works, or where it works, and then go and scale from there. This is such a recurring theme on the Better Way? podcast: "Let's lean into and have an open mind when it comes to experimentation."
Nitish, thank you so much for joining us and sharing all your Better Ways. And thank you all for tuning in to the Better Way? podcast and exploring all of these Better Ways with us. For more information about this or anything else that's happening with R&G Insights Lab, please visit our website at www.ropesgray.com/rginsightslab. You can also subscribe to this series wherever you regularly listen to podcasts, including on Apple and Spotify. And, if you have thoughts about what we talked about today, the work the Lab does, or just have ideas for Better Ways we should explore, please don't hesitate to reach out—we'd love to hear from you. Thanks again for listening.