AI in the spotlight

intelligENS – just a feeling, or is it?

Biometrics is the measurement and statistical analysis of people's unique physical and behavioural characteristics. Fingerprint and facial recognition have become mainstream biometric authentication methods for devices such as mobile phones. Sentiment analysis uses biometrics alongside other AI tools (natural language processing, text analysis and computational linguistics) to understand an individual's emotions.

General business use

Businesses are using sentiment analysis in many different ways, including:

  • biometrics and facial expression recognition have been incorporated into workplace surveillance and monitoring software;
  • sales teams and management estimate customer engagement and the effectiveness of meetings based on who is speaking and listening the most and whether their interactions are positive or negative. Zoom IQ is an example of mainstream use; and
  • marketing teams use sentiment analysis to understand how well a marketing campaign on social media has been received. Public opinion on political campaigns can be gauged through similar analysis.

Whilst emotional analysis has great potential for boosting business performance, the Information Commissioner's Office ("ICO") has issued a strong warning to businesses on the importance of acting responsibly when using immature biometric technologies. By its very nature, biometric data is personal and unique to a person. It is also public: unlike a password, a fingerprint is left on everything we touch. Fingerprints can be "hacked" with a piece of gelatine (the "gummi bear hack") or with wood glue applied to a photoshopped photograph of a fingerprint. And unlike a password, you cannot simply change your fingerprint once it has been compromised.

According to the ICO, "the only sustainable biometric deployments will be those that are fully functional, accountable and backed by science". The ICO will be publishing guidance on how to use biometric technologies in the next few months.

The warning came in October 2022, one month before the release of ChatGPT. That release is a prime example of AI being made available well before it is fully functional and accountable.

Sentiment analysis in eDiscovery

eDiscovery helps legal and compliance teams to work more efficiently and effectively, thereby improving outcomes. In-house teams can focus on analysing relevant information early, assessing its potential implications and value to help determine the strategy of the matter. Sentiment analysis is used to identify the overall "feeling" of the communicator: it automatically scores each sentence, no matter how large the data set, as desirable, positive, angry, negative or simply neutral.
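
To make the idea concrete, here is a minimal sketch of sentence-level scoring using the open-source VADER analyser. This is illustrative only: the models built into eDiscovery platforms are proprietary, and the sentences and score thresholds below are hypothetical examples, not taken from any product.

```python
# Illustrative sketch of sentence-level sentiment scoring.
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

sentences = [
    "Thanks so much for turning this around quickly.",
    "I am furious about how this account has been handled.",
    "The meeting is scheduled for 3pm on Thursday.",
]

for sentence in sentences:
    # polarity_scores returns neg/neu/pos proportions and a "compound"
    # score in [-1, 1]; the cut-offs below are VADER's conventional ones.
    compound = analyzer.polarity_scores(sentence)["compound"]
    if compound >= 0.05:
        label = "positive"
    elif compound <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:>8}  {compound:+.2f}  {sentence}")
```

Applied across an entire data set, scores like these let a review team surface the angriest or most negative communications first rather than reading documents in arbitrary order.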

AI has been used to find relevant information in eDiscovery for more than a decade. In February 2012, the use of machine learning to find information for the purposes of discovery was judicially approved in a gender discrimination case, Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012). Here, the computer finds likely relevant documents in vast volumes of business communication using predictive coding and, more recently, active learning. Generally speaking, the computer scores each document on how closely its topics match those of a subset of documents that a human reviewer has flagged as relevant.
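
A simplified sketch of that scoring idea appears below: a classifier is trained on a small, human-coded "seed set" and then ranks unreviewed documents by predicted relevance. This is an assumption-laden illustration using scikit-learn, not the actual implementation of any eDiscovery platform, and all document text in it is invented.

```python
# Simplified predictive-coding sketch: rank unreviewed documents by
# similarity to a seed set a human reviewer has already coded.
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents a reviewer has coded (1 = relevant, 0 = not).
seed_docs = [
    "Pricing discussion with competitor ahead of the tender.",
    "Minutes of the quarterly health and safety committee.",
    "Agreement on territory allocation between the two suppliers.",
    "Canteen menu for the week of 12 June.",
]
seed_labels = [1, 0, 1, 0]

unreviewed = [
    "Call notes: aligning bid prices before Friday's deadline.",
    "Reminder: fire drill at 10am tomorrow.",
]

# Vectorise the text and fit a classifier on the human-coded seed set.
vectorizer = TfidfVectorizer(stop_words="english")
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Score the unreviewed documents; higher = more likely relevant.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

In an active learning workflow, the reviewer's decisions on the highest-scoring documents are fed back in and the model is retrained, so the ranking improves as the review progresses.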

Businesses can use sentiment analysis to detect negative changes in customer relationships and take proactive measures to mitigate issues before they escalate. Where legal proceedings or allegations have been raised, sentiment analysis can help uncover evidence of misconduct, such as harassment, discrimination, antitrust violations or other wrongdoing.

Ethical considerations

As with any technology, particularly those incorporating recent developments in AI, there are ethical considerations:

  • there needs to be a level of trust in the creator of any AI-based product. Transparency is an important part of any tool – what precautions has the creator taken to remove bias, discrimination and inaccuracy;
  • the garbage in, garbage out principle - if the AI has been trained on data that carries inherent human discrimination, it naturally follows that the AI will produce discriminatory results;
  • businesses are responsible for considering the accuracy of the results and the appropriateness of their use, in particular through careful consideration of any data privacy requirements when collecting and processing the "emotional" data of their employees; and
  • businesses should analyse the proportionality and fairness of each use case. Conversely, they should consider whether they are being inefficient by not embracing new technologies.

Whilst providing comfort that its sentiment analysis model has been designed to reduce bias, Relativity, a leading global provider of eDiscovery software, warns of cultural and language nuances and the general issues in using machine learning models:

"Relativity's sentiment analysis model has been designed to reduce bias. In particular, we have trained the model to treat all terms referring to protected classes (gender, sexual orientation, religion, race, nationality, age, and disability status) as neutral. As a result, statements such as "I don't trust [protected class]" will be scored similarly regardless of which protected class is mentioned."

ENSafrica was the first law firm on the African continent to adopt RelativityOne. We assist in-house legal and compliance teams to apply artificial intelligence ethically and enhance the way that they do business.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.