With powerful language models and a wide range of AI tools at our disposal, and the legal requirements largely manageable, a key challenge remains: how can we ensure users fully leverage the opportunities AI offers? To address this, we have published an internal use case guide. This guide helps us improve the quality and efficiency of our daily work, through both small and large steps. This is part 29 of our blog series on the responsible use of AI in business.
While the initial hype around generative AI has subsided, many companies are still keen to adopt AI tools. Some have successfully enhanced customer service with website chatbots, while others have launched internal projects only to discontinue many of them. Many businesses, eager to provide their employees with AI capabilities, readily adopt aggressively marketed solutions such as Microsoft's Copilot for M365, partly just to be able to claim they are using AI. User feedback, however, is frequently disappointing. This creates a risk of users either disengaging from AI altogether or turning to private, unmonitored alternatives, as many perceive tools like ChatGPT to be superior to, for instance, Copilot.
We have decided not to use most of these tools because we find them too expensive for widespread use and generally unsatisfactory in their functionality, integration, and legal compliance. A few exceptions confirm this rule. As is well known, we have developed our own solution for our needs – and this is also gaining increasing acceptance outside the firm (it can be used by anyone). We have also addressed professional secrecy concerns.
Challenge No. 1: We are creatures of habit
While we have addressed the technical, legal, and cost issues, two further challenges remain. The first is our inherent resistance to change. We are creatures of habit, accustomed to established workflows, and often reluctant to adapt, even for clear benefits. This traditional approach typically excludes AI. Consequently, while we are fascinated by AI's potential and eager not to be left behind, we struggle to integrate it effectively into our processes or restructure them accordingly. Often, we simply overlook the possibility of delegating specific tasks to AI in our daily work. This causes us to miss opportunities to enhance the quality and efficiency of our work.
Here are a few examples:
- I need to fill out an Excel form with the details of a case. Although I programmed our AI tool myself, it did not occur to me at first that it could handle this task. Yet, as I found out, it achieves in two minutes what might otherwise take me an hour. I still have to check the result, but I am clearly faster and often better (a sketch of what such a delegation can look like follows this list).
- I'm writing a memorandum and I know the subject matter well. But with AI, I suddenly get a different perspective: an "advocatus diaboli" who can pick apart my arguments in seconds. While AI-generated content is not always coherent, it enhances my work. And it can also draft an executive summary for me in a minute. However, I must remember to ask it to do so and learn how best to do that. The effective use of AI is also a skill, and even I, who know the technology well, am constantly learning.
- I check a contract, jump back and forth, look for a passage that I think I saw somewhere earlier – and end up wasting a lot of time. While AI cannot review a contract with my level of thoroughness, it assists me as a chatbot, handling minor but helpful tasks. I am, however, still learning to adapt to working with a constantly available "AI buddy", a novel experience for me. I now also regularly let the AI give me the "big picture" first, which makes my review much easier. Our solution delivers this directly within Word in a few seconds – a decisive factor, as I have noticed.
- I have always been used to searching for information on the internet myself. I know very well how to do this, and Google is well known for its extensive "knowledge". So why not use an AI research assistant that reads 50 websites in a minute and delivers a report? This would require me to learn how to delegate (for example, how to frame the research task) and how to review the result, rather than doing everything myself.
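To make the first example above more concrete, here is a minimal sketch, in Python, of what delegating the Excel task to a language model can look like. It is not our own tool and not a recipe: it assumes access to an OpenAI-compatible API and the openpyxl library, and the field names, model name, and file names are purely illustrative.

```python
import json

from openai import OpenAI      # assumes an OpenAI-compatible API and key are available
from openpyxl import Workbook  # used to write the Excel form

# Hypothetical field list -- in practice this would mirror your own form.
FIELDS = ["client", "counterparty", "matter", "deadline", "responsible_lawyer"]

def extract_case_details(case_text: str) -> dict:
    """Ask the model to pull the requested fields out of free-form case notes."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": "Extract the requested fields from the case notes. "
                           "Answer with JSON only; use null for anything not stated.",
            },
            {
                "role": "user",
                "content": f"Fields: {', '.join(FIELDS)}\n\nCase notes:\n{case_text}",
            },
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def write_form(details: dict, path: str = "case_form.xlsx") -> None:
    """Write one row per field so a human can check the result line by line."""
    wb = Workbook()
    ws = wb.active
    ws.append(["Field", "Value"])
    for field in FIELDS:
        ws.append([field, details.get(field)])
    wb.save(path)

if __name__ == "__main__":
    with open("case_notes.txt", encoding="utf-8") as f:
        write_form(extract_case_details(f.read()))
```

The point is not the code but the pattern: describe the fields you need, let the model extract them, and keep the human check at the end.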
Therefore, every employee should consider how AI can enhance their work processes. This reflection is not automatic; often, AI's contributions will seem minor, such as performing tasks just a bit better or faster. However, these small improvements accumulate. For instance, after contract negotiations, why not ask an AI to verify that all terms used in the contract are defined, or to check whether redacted documents are truly free of personal data?
To provide impetus, we have compiled use cases, like the examples above, to show our employees where and how AI can support their daily work. The result is a handbook with over 70 use cases, which we are constantly expanding. All of them have been implemented with our own tool, which means you can put them to use in your own company at almost no cost.
We supplement the handbook with training courses and other measures to motivate our team to engage with this technology and use it daily. This is an ongoing process. No special technical expertise is required: one colleague, who describes herself as not tech-savvy, now uses AI every day and would not want to be without it. Another mentioned that AI helped her understand an instruction she had previously found incomprehensible but had never dared to ask about. Often, small use cases make the difference, such as using AI to copy text out of other documentation with just two clicks, even where copy and paste is disabled. AI can do this if you know how.
The handbook is available free of charge at http://vischerlnk.com/redink-uc.
Challenge No. 2: Loss of our skills
Once measures like these begin to drive user adoption, a second challenge typically arises: How do we ensure our employees do not over-rely on AI and, as a result, lose the ability or desire to devise solutions, be creative, acquire knowledge, or make important decisions themselves? Initial studies already indicate that the use of generative AI negatively affects our cognitive abilities (this has also been observed in Switzerland; see, for example, "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking").
While AI can provide a university graduate with a solid starting point for a draft contract, it does not teach them the essential skills of structuring an agreement, of including the provisions needed to prevent future problems, or of spotting the pitfalls of drafting. This knowledge comes from personal experience, which is why starting with a blank page is often the only right route to acquiring the necessary skills. The challenge itself is not new: large law firms have always had templates, a temptation for any novice. However, like AI-generated drafts, templates can be dangerous because they regularly discourage inexperienced professionals from thinking critically about what they read, simply because it sounds perfect.
By contrast, using AI as a sparring partner to critique a first draft for completeness and appropriateness offers a significant learning opportunity and ensures the technology is put to profitable use. This approach has the added advantage that authors can ask questions they might hesitate to put to a person. With the right programming, the AI can also be relentlessly direct. We should also consider entirely new applications: for example, we have a program that allows young lawyers to practice negotiating contract clauses against an AI. The AI then provides feedback on their performance, and this feedback is not shared with their superiors.
We must not only demonstrate use cases and provide training but also convey the importance of remaining in control and exercising our own judgment. We need to maintain our core competencies and cultivate our thinking skills and creativity independently of AI. Most people quickly realize that even advanced AI tends to produce only "average" content, since it is designed to learn from its training data and find the common denominator. While this may suffice in many situations, some problems can only be solved with the human impetus to think "outside the box". AI can then help execute, justify, or test the resulting idea. The decision thus remains with the human, who can achieve better results or be more productive with AI. This does not mean we will work less or become unemployed; rather, the requirements are simply evolving.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.