What's the big deal with ChatGPT?

One of the big themes at the annual meeting of the World Economic Forum in Davos last week was the impact of the Fourth Industrial Revolution, set against the broader theme of 'cooperation in a fragmented world'.

The fireside chat between Satya Nadella, the CEO of Microsoft, and Klaus Schwab, the founder and Chairman of the WEF, will be remembered for its predictions of the transformative potential of AI to empower humans, with the generative AI chatbot ChatGPT catching the attention of the assembled great and good and garnering news headlines across the globe. Released by OpenAI, the independent research and deployment company, ChatGPT is the next generation of AI automation, able to generate unique and creative content in a purportedly human-like way.

What is generative AI?

Generative AI uses a combination of technologies, such as natural language processing and deep learning, to generate unique content in response to prompts. The model behind ChatGPT was trained on millions of pages of web text, together with books, articles and other text sources.
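As a loose illustration (and not a description of OpenAI's actual architecture), a generative language model can be thought of as a function that repeatedly samples the next word given everything written so far. The minimal Python sketch below uses a tiny hand-built probability table standing in for the billions of parameters a real model learns from its training text:

```python
import random

# Toy next-word distributions standing in for a trained deep learning model.
# A real model learns these probabilities from millions of pages of text.
NEXT_WORD = {
    "generative": {"ai": 0.9, "models": 0.1},
    "ai": {"creates": 0.5, "predicts": 0.5},
    "creates": {"content": 1.0},
    "predicts": {"words": 1.0},
}

def generate(prompt_word: str, max_words: int = 4) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [prompt_word]
    for _ in range(max_words):
        dist = NEXT_WORD.get(words[-1])
        if dist is None:  # no known continuation; stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("generative"))  # e.g. "generative ai creates content"
```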

What makes the tool compelling is the ability of the AI model to guess the next word (or words) in a sentence in a human-like way. On the face of it, the tool is surprisingly versatile: it can purportedly write an essay, draft a contract, or answer a question concisely, in a way that appears to be accurate. It uses a technique called Reinforcement Learning from Human Feedback (RLHF). In layman's terms, RLHF is a machine learning technique in which human feedback on the model's answers is turned into a reward signal, which the model then optimises against so that it learns to predict the next best word or words.
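The sketch below is a heavily simplified, hypothetical illustration of the RLHF idea, not OpenAI's implementation: real RLHF trains a separate reward model on human preference data and then optimises the language model with reinforcement learning. Here, human ratings of candidate answers are simply folded into a running reward score that biases which answer is preferred:

```python
import random
from collections import defaultdict

# Candidate answers the model might produce for one prompt.
CANDIDATES = [
    "Paris is the capital of France.",
    "France capital Paris is the.",
]

# Running reward estimate per answer, nudged by human feedback.
reward = defaultdict(float)

def human_rating(answer: str) -> float:
    """Stand-in for a human rater: prefers the fluent answer."""
    return 1.0 if answer.startswith("Paris is") else -1.0

def train_step(learning_rate: float = 0.5) -> None:
    """Sample an answer, collect feedback, update its reward estimate."""
    answer = random.choice(CANDIDATES)
    reward[answer] += learning_rate * (human_rating(answer) - reward[answer])

def best_answer() -> str:
    """After training, prefer the answer with the highest learned reward."""
    return max(CANDIDATES, key=lambda a: reward[a])

for _ in range(50):
    train_step()
print(best_answer())  # -> "Paris is the capital of France."
```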

Is it really a big deal?

Judging by the news headlines from Davos last week, generative AI would appear to be a big deal. This is backed by statements made by Big Tech companies on the longer-term value of such tools. And if user interest is a worthy metric, try to log onto ChatGPT today and you will likely be met by a notice that the system is at capacity and that you should try again later!

However, it's important to understand that ChatGPT is still in testing and has limitations. A key limitation of the technology is that these AI chat programs work only at the level of language syntax, matching words against the text in their training corpus of web text and documents. The AI has no understanding of what the words it generates mean, i.e., the semantics of the text. For many applications (e.g., answering factual questions) this may not matter. However, for other applications (e.g., giving legal or medical advice), the meaning of the text, and its consequences if used to guide decisions, may be crucially important.
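To illustrate the syntax-versus-semantics point (again, a contrived illustration, not how ChatGPT works internally), consider a toy responder that picks an answer purely by word overlap. Its answer can look plausible while being useless or misleading, because nothing in the mechanism grasps what the question means:

```python
import string

# Toy responder that matches on word overlap alone; it has no notion
# of meaning, so a fluent-looking answer can still be the wrong one.
PASSAGES = [
    "Aspirin is commonly used to relieve minor aches and pains.",
    "Aspirin was first synthesised by Bayer in 1897.",
]

def tokens(text: str) -> set:
    """Lower-case words with punctuation stripped."""
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

def respond(question: str) -> str:
    """Return the stored passage sharing the most words with the question."""
    q = tokens(question)
    return max(PASSAGES, key=lambda p: len(q & tokens(p)))

# A dosage question gets a history fact back, because "was", "in" and
# "1897" overlap more than anything medically relevant:
print(respond("What dose of aspirin was recommended in 1897?"))
```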

Nonetheless, the longer-term opportunities for generative AI tools like this are significant. They include next-generation servicing of customers at scale, by augmenting chatbot tools (traditionally considered a lower-order automation risk) with believable, human-like qualities powered by AI, and augmenting multiple domains with unique content creation, including the arts, education, professional services and many more.

Assessing the risks

One should not assume that generative AI brings us closer to artificial general intelligence. AI still falls far short of human intelligence and remains heavily task-centred. As explained above, a machine is unable to apply any human-like judgment or context to its inputs or outputs. It simply uses patterns identified or learned from the training data to predict or decide (and those outputs may be wrong or may evidence bias) when deployed against real-world inputs. Even if bias is not prevalent in the training data, bias can still be inherent in the model, or be prevalent in the overall system in which the AI tool is used.
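A contrived sketch of how bias flows from training data into predictions: a trivial "model" that learns the majority historical outcome per postcode will faithfully reproduce any skew in its data, even though the learned rule itself looks neutral. The data and postcodes here are invented for illustration:

```python
from collections import Counter, defaultdict

# Contrived historical decisions; outcomes are skewed by postcode.
TRAINING_DATA = [
    ("AB1", "approve"), ("AB1", "approve"), ("AB1", "reject"),
    ("CD2", "reject"), ("CD2", "reject"), ("CD2", "approve"),
]

def train(data):
    """'Learn' the majority historical outcome for each postcode."""
    by_postcode = defaultdict(Counter)
    for postcode, outcome in data:
        by_postcode[postcode][outcome] += 1
    return {pc: counts.most_common(1)[0][0] for pc, counts in by_postcode.items()}

model = train(TRAINING_DATA)
# The learned rule reproduces the skew in the data, not the merits
# of any individual applicant:
print(model)  # {'AB1': 'approve', 'CD2': 'reject'}
```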

Traditional pillars for assessing AI risk, such as fairness (e.g., is there bias?), transparency (e.g., can we explain it?) and robustness (e.g., how accurate is it?), alongside privacy and security evaluations, are all relevant. Context also matters. Organisations must understand the potential harm or harms of the tool when deployed for a particular use and must assess these against principles of responsible AI governance. They must also develop mitigations to reduce the potential harm, assess the residual risk against their legal and regulatory obligations, and make trade-offs (where these are legally possible). These considerations will vary with factors such as the relevant domain and where the AI tool will be deployed.

Organisations are well advised to think through their approach to the responsible use of AI. As AI applications become increasingly ubiquitous, having done the foundational work to assess organisational appetite and strategy for on-boarding AI, as well as designing processes for assessing and mitigating AI risk, will place them in good stead to take advantage of the opportunities AI brings to empower humans to reach their potential, as articulated at Davos last week.

With thanks to Professor Peter McBurney for collaborating on this article

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.