ARTICLE
26 June 2023

Being smart about AI

KordaMentha

Contributor

KordaMentha, an independent firm in Asia-Pacific, specialises in cybersecurity, financial crime, forensic, performance improvement, real estate, and restructuring services. With a diverse team of almost 400 specialists, the firm provides customised solutions to help clients grow, protect themselves against financial loss, and recover value. Trusted since 2002, the firm delivers bold, impactful solutions for its clients.
The potential of generative AI systems brings opportunities for organisations, and for those with malicious intent.

We are standing on the cusp of an AI-driven efficiency revolution.

The immense potential of generative AI systems has created a new frontier in almost every aspect of our lives. This brings opportunities for organisations, as well as for those with malicious intent.

AI technology, generative or otherwise, has potential for enormous benefits. For example, a machine learning model was recently used to identify a new antibiotic effective against a hospital-borne, drug-resistant bacterium, processing vast amounts of data in mere hours rather than the years that traditional scientific research would require.1

On the flipside, flaws in facial recognition technology have led to wrongful arrest2 and racially biased outcomes.3 Threat actors can leverage generative AI to augment their attack methods in the same way as any code developer. They are enhancing social engineering and phishing attacks with improved and more targeted narratives. Deepfake technology generates realistic, but fake, content from voice or video samples. Your 'CEO' can now leave you a video or voice message instead of just sending an email.

Concern around AI's runaway growth has led technology executives, including Tesla chief executive Elon Musk, to call for development of the most advanced systems to be paused until the associated risks are identified. Sam Altman, CEO of OpenAI, has appeared before the US Congress to request increased government regulation and oversight of AI development.4

This is not to say that AI technology is to be feared and avoided. We are in the early stages of the revolution and just as email and instant messaging have become tools embedded in all organisations, so too will AI. These technologies also brought new and increased risks - risks that continue today, given they remain the favoured avenues for hackers to obtain their initial foothold. But the benefits easily outweigh the potential downsides.

When adopting generative AI, a key risk mitigation is understanding how these systems are developed and operate, especially the training data the model relies on for content generation. This data, together with the prompt information entered by users, is the key factor behind the content generated by the platform. Once operational, the systems can be configured to also draw on information from live sources, such as the Internet. Live sources improve the system's ability to generate current and relevant responses, but they also increase certain risks, notably a threat actor's ability to create an authoritative-sounding but false response. Even more concerning is the risk of 'data poisoning', where threat actors gain access to the AI's training data and introduce deliberate bias and incorrect outcomes into the core behaviours of the platform.
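To make the data poisoning risk concrete, consider a deliberately simplified sketch in Python. The toy 'model' below is nothing like a real generative AI platform; it is a hypothetical word-count classifier trained on invented data. But it demonstrates the mechanism: an attacker who can write to the training set can quietly teach the system to misclassify a chosen trigger.

```python
# Illustrative only: a toy word-count classifier showing how poisoned
# training data can change a model's behaviour. All data is invented.
from collections import Counter

def train(examples):
    """Count word occurrences per label ('phish' or 'safe')."""
    counts = {"phish": Counter(), "safe": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score a message against each label's vocabulary; pick the best match."""
    words = text.lower().split()
    scores = {label: sum(counter[w] for w in words)
              for label, counter in counts.items()}
    return max(scores, key=scores.get)

clean_data = [
    ("urgent verify your account password now", "phish"),
    ("click this link to claim your prize", "phish"),
    ("meeting moved to 3pm see agenda attached", "safe"),
    ("quarterly report draft for your review", "safe"),
]

# A threat actor with write access to the training set plants examples
# that teach the model to treat 'urgent ... password' messages as safe.
poison = [("urgent verify your password now", "safe")] * 10

message = "urgent please verify your password"
print(classify(train(clean_data), message))           # -> phish
print(classify(train(clean_data + poison), message))  # -> safe
```

Real platforms are vastly more complex, but the principle holds: whoever can tamper with the training data can shape the model's core behaviour.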

Of course, AI systems don't need malicious intervention to cause havoc. Current generative AI platforms are prone to producing highly convincing but inaccurate information. Blind reliance on content generated by these systems, which should be treated as fallible assistants, has resulted in embarrassment and poor outcomes. One recent example in the US is an attorney who faced possible disbarment after asking ChatGPT to provide him with case law histories.5 He had one enormous problem: the chatbot had made up the cases in their entirety.

Employee data privacy education is essential. Surveys reveal that companies are leaking confidential information following AI's rapid adoption: one analysis found 4 per cent of employees have pasted confidential data into ChatGPT.6 Several corporations, including Apple, JP Morgan and Verizon, were so concerned about losing confidential information that they banned the use of third-party generative AI tools altogether.

Central to preventing both accidental and malicious AI misuse is education. Organisations and their employees must understand how generative AI creates content, who has access to the prompt information entered and where response content is drawn from. Employees should also receive guidance on appropriate AI use.
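Education can be reinforced with simple technical guardrails. The Python sketch below is a hypothetical illustration, not a complete data loss prevention solution: a minimal pre-submission check that flags obviously sensitive text before an employee pastes it into a third-party AI tool. The patterns and the example prompt are invented; a real organisation would tailor them to its own data classification policy.

```python
# Illustrative only: a minimal pre-submission confidentiality check.
# The patterns below are hypothetical examples, not a vetted policy.
import re

CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b(confidential|internal only|do not distribute)\b", re.I),
    re.compile(r"\b\d{16}\b"),             # possible payment card number
    re.compile(r"\b[A-Z]{2,5}-\d{3,}\b"),  # internal ticket/project codes
]

def check_prompt(text: str) -> list[str]:
    """Return the patterns the text matches; empty means it looks clean."""
    return [p.pattern for p in CONFIDENTIAL_PATTERNS if p.search(text)]

prompt = "Summarise this INTERNAL ONLY memo about project ACME-4711"
hits = check_prompt(prompt)
if hits:
    print("Blocked: prompt appears to contain confidential material:", hits)
else:
    print("OK to send")
```

A filter like this catches only the obvious cases; the deeper control remains the employee's understanding of where prompt data goes once it leaves the organisation.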

In embracing AI, we must keep a firm grip on the reins. Human oversight at every step of the process is essential if we are to mitigate AI's risks and benefit from the momentous efficiencies it promises to deliver.

Footnotes

1. 'AI Battles Superbugs: Helps Find New Antibiotic Drug To Combat Drug-Resistant Infections', scitechdaily.com.

2. 'Robert was wrongly arrested because of a racist algorithm. Are these the hidden dangers of AI?', ABC News.

3. 'Robots trained on AI exhibited racist and sexist behavior', The Washington Post.

4. 'OpenAI CEO Sam Altman Asks Congress to Regulate AI', Time Magazine.

5. 'AI ushers in the Misinformation Age', The Australian.

6. '11% of data employees paste into ChatGPT is confidential', Cyberhaven.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

