In most AI systems, the inner workings are not visible, even to their developers. Such systems are referred to as 'Black Box AI' and pose challenges of accountability, transparency, and regulatory compliance.
The Concept
The basic operative mechanism behind Black Box AI is 'Machine Learning', whereby an AI model is trained on large datasets to make decisions and generate outputs. Machine Learning, in simple terms, is an algorithm fed with a huge amount of data and trained to analyse and recognise the patterns and features contained therein. However, the layers of computation conceal the path that leads to the final output, forming a 'Black Box'. While one can see the data fed in (the means) and the result generated (the ends), the algorithmic analysis leading to that result remains hidden, and issues of transparency and accountability arise. AI models that can be categorised as Black Box AI are embedded in our everyday lives: they power the facial recognition software used to unlock smartphones, the AI voice assistants that make electronic gadgets easier to operate, chatbots like ChatGPT, and the hiring algorithms used to screen candidates for recruitment.
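To see what stays visible and what does not, consider a minimal sketch in Python using scikit-learn. The dataset, model size, and inputs are illustrative assumptions; any sufficiently complex model would behave the same way.

```python
# A minimal sketch of the "Black Box" problem (illustrative only).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# The "means": a large dataset of labelled examples.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Train a small neural network on the data.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# The "ends": a prediction for a new input.
print(model.predict(X[:1]))        # a class label, e.g. [1]
print(model.predict_proba(X[:1]))  # a confidence, e.g. [[0.02 0.98]]

# The path between them: thousands of learned weights with no
# human-readable rationale -- the "Black Box".
print(sum(w.size for w in model.coefs_))  # count of learned parameters
```

The inputs and outputs are fully observable; the learned parameters are too, but they carry no explanation a human can follow, which is precisely the opacity the section describes.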
Machine Bias
The problem with opaque systems arises when decisions made by AI unexpectedly generate unwanted outcomes. A chief concern is discrimination.
As reported by the New York Times in 2020, when police officials sought the assistance of AI to identify the wrongdoer in a shoplifting incident in Detroit, the facial recognition technology wrongfully identified one Robert Julian-Borchak Williams as the culprit despite his having no connection with the incident. The AI, for unknown reasons, matched the African-American man with a grainy image from an in-store surveillance video showing a man taking watches worth thousands of dollars. Williams was arrested and forced to provide a mug shot, fingerprints and a DNA sample. The case was dismissed two weeks later at the prosecution's request. This may be the first known case of its kind in which a faulty facial recognition match led to a wrongful arrest, and it underscores the risks associated with AI tools. As the New York Times article reports1, facial recognition systems have been used by US police forces for more than two decades. Recent studies by M.I.T. and the National Institute of Standards and Technology (NIST) have found that while the technology works relatively well on white men, the results are less accurate for other demographics, in part because of a lack of diversity in the images used to develop the underlying databases.
The article suggests that low-quality search images, such as a still from a grainy surveillance video, are part of the problem, and that AI systems should be tested rigorously for accuracy and bias.
In another report2, the researcher states that a typical commercial AI face recognition system most accurately identifies fair-skinned males. Images of Michelle Obama and Oprah Winfrey, two well-known, oft-photographed black women, were frequently misidentified by AI. Serena Williams, another globally recognised black woman, was incorrectly identified as male 76% of the time. Such misidentification of iconic women underscores the urgency of re-examining how AI tools are trained and how they analyse data.
While race may not be a typical axis of discrimination in India, discrimination on the basis of skin colour is certainly an issue the country grapples with. Furthermore, the Black Box AI problem may manifest not just in wrongful facial recognition but more broadly. For example, while an AI computer-vision system might classify the image of a woman in a white western gown as a bride, it may classify the image of an Indian bride in a red sari as a costume or as performance art. Such inaccuracies are probably rooted in the fact that a majority of the images used to train AI models come not from India but from the Western world.
Need for Transparency
Legally speaking, Article 14 of the Constitution of India guarantees equality before the law to every person. Recently, a paper titled 'Towards Responsible AI for All' by NITI Aayog (the apex public policy think tank of the Indian government) recognised the principle of transparency and, in the context of AI, defined it thus: "The design and functioning of the AI system should be recorded and made available for external scrutiny and audit to the extent possible to ensure the deployment is fair, honest, impartial and guarantees accountability".
This is echoed in Article 4a of the European Parliament's negotiating text on the EU AI Act (adopted on 14 June 2023), wherein "AI systems shall be developed and used in a way that allows appropriate traceability and explainability while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights." Likewise, Article 13 therein states that "high-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to reasonably understand the system's functioning." Similarly, Singapore's Model AI Governance Framework and the US Blueprint for an AI Bill of Rights advocate transparency, explainability, and fairness.
Indian Initiative
A team of scientists at the Indian Institute of Science (IISc) recently made a breakthrough in addressing bias in AI-generated images. The research, conducted at the Vision and AI Lab of the Department of Computational and Data Sciences, offers a unique approach, 'distribution guidance for image generation', which mitigates bias in popular image-generation models without the need for additional data or model retraining. The idea is to guide the generation process so that the output follows a prescribed attribute distribution, thereby reducing the inherent bias present in the training data. The researchers' innovation lies in leveraging the latent features of the 'denoising U-Net', a component of the diffusion models used in image generation and creative applications. These features, rich in demographic semantics, are utilised by a newly introduced attribute distribution predictor (ADP) to guide the generation process towards fairness. For instance, if one asks for images of doctors or firefighters, the generated images would typically be of men. The new approach acts as a plug-in that intervenes during the image-generation process, which involves multiple iterations: at each iteration, it guides the diffusion process towards a balanced distribution of the desired attributes in the final generated images. The initial results are promising and the code is open access.
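To make the mechanism concrete, here is a toy numerical sketch of distribution-guided generation. It is emphatically not the IISc implementation: the one-dimensional latent space, the attribute_predictor stand-in for the ADP, and the guidance_scale value are all illustrative assumptions.

```python
# A toy sketch of distribution-guided iterative generation.
# NOT the IISc code: all names and numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def attribute_predictor(latents):
    # Stand-in for the attribute distribution predictor (ADP):
    # probability that each latent renders as, say, a male doctor.
    return 1.0 / (1.0 + np.exp(-latents))

def denoise_step(latents):
    # Stand-in for one reverse-diffusion step: drift towards a biased
    # data distribution (mean +1.5, i.e. mostly male images).
    return latents + 0.1 * (1.5 - latents)

target_share = 0.5             # desired attribute distribution (50/50)
guidance_scale = 2.0           # strength of the fairness plug-in
latents = rng.normal(size=64)  # a batch of initial noise

for _ in range(50):
    latents = denoise_step(latents)
    # Plug-in: compare the batch's predicted attribute share with the
    # target and nudge every latent to close the gap.
    share = attribute_predictor(latents).mean()
    latents -= guidance_scale * (share - target_share)

# Without guidance the share drifts to roughly 0.8; with the plug-in
# it ends up much closer to the 0.5 target.
print(f"final share: {attribute_predictor(latents).mean():.2f}")
```

The design point the sketch captures is the one the paragraph describes: the correction happens inside the generation loop, at every iteration, rather than by retraining the model or curating new data.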
Explainable AI
Explainable AI (XAI) has also evolved to address accountability concerns. XAI refers to techniques and methods that allow human users to understand and trust the output of AI models: the ability to explain how an AI system arrived at a particular outcome, helping users understand why a certain decision was made, what factors were considered, and what the possible outcomes are.
In sectors like finance, healthcare, and law, where AI decisions can have significant impacts on people's lives, explainability is crucial. For example, in healthcare, an AI system's diagnosis must be explainable to doctors and patients to ensure trust and facilitate correct treatments.
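As an illustration of what such an explanation can look like in practice, here is a sketch using permutation feature importance, one widely used XAI technique, applied to a hypothetical loan-approval model. The feature names and data are invented for illustration; a real deployment would use domain-appropriate methods.

```python
# A sketch of one common XAI technique: permutation feature importance
# on a made-up loan-approval model. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(50, 15, n)     # annual income (thousands)
debt   = rng.normal(20, 8, n)      # outstanding debt (thousands)
age    = rng.integers(21, 70, n)   # applicant age (years)
X = np.column_stack([income, debt, age])
# Approvals depend only on income minus debt (plus noise), not on age.
y = (income - debt + rng.normal(0, 5, n) > 30).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Ask "which factors drove the decisions?" by shuffling each feature
# in turn and measuring how much the model's accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt", "age"], result.importances_mean):
    print(f"{name:>6}: {score:.3f}")
# Expected: income and debt score high; age, unused by the rule, near 0.
```

An output like this lets a loan officer, or an affected applicant, see that income and debt drove the decision while age did not, which is exactly the kind of scrutiny the transparency principles above demand.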
INDIAai (the National AI Portal of India), a joint venture of MeitY (Ministry of Electronics & Information Technology), NeGD (National e-Governance Division) and the industry body NASSCOM, has been set up to address concerns regarding AI and to create a safe and trusted ecosystem that fosters AI innovation by democratising computing access, enhancing data quality, developing indigenous AI capabilities, attracting top AI talent, enabling industry collaboration, providing start-up risk capital, ensuring socially impactful AI projects, and promoting ethical AI. One of the subjects under discussion is XAI.
In a connected development, MeitY in collaboration with the UNESCO South Asia Regional Office, organised a stakeholder consultation on Safety and Ethics in Artificial Intelligence in New Delhi on November 14, 2024. This event marked the launch of a series of five consultations under the AI Readiness Assessment Methodology (RAM), an initiative by UNESCO and MeitY aimed at crafting an India-specific AI policy report. The report's objective is to identify strengths and growth opportunities within India's AI ecosystem, while providing actionable insights for the responsible and ethical adoption of AI across various sectors. The consultation brought together diverse stakeholders from government, academia, industry, and civil society to explore strategies for aligning India's AI ecosystem with UNESCO's Global Recommendation on the Ethics of AI, emphasising transparency, inclusiveness, and fairness.
A vital complement is India's Digital Personal Data Protection Act, 2023. Though not yet in force, the Act creates a consent-centric regime for the processing of personal data. The Black Box dilemma is certain to be a contentious issue in this regard, and apart from adopting measures such as XAI, the safest way to avoid liability would be to train Gen AI tools on anonymised or synthetic data. Another risk arises when employees feed business-related information into a Gen AI tool like ChatGPT: this potentially confidential information may be used to train the model further. An add-on can help filter such data and preserve secrets, as sketched below.
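As a minimal sketch of what such an add-on might do, the following Python filter redacts likely-confidential patterns from a prompt before it leaves the organisation. The patterns and placeholder tags are illustrative assumptions, not a complete data-loss-prevention solution.

```python
# A minimal sketch of a filtering "add-on": redact likely-confidential
# tokens from a prompt before it reaches an external Gen AI tool.
# Patterns and placeholders are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{10}\b"), "[PHONE]"),            # 10-digit number
    (re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"), "[PAN]"),  # Indian PAN format
]

def redact(prompt: str) -> str:
    """Replace sensitive tokens with placeholders so that only the
    sanitised prompt leaves the organisation."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Draft a reply to priya@acme.in (PAN ABCDE1234F, ph 9876543210)."))
# -> Draft a reply to [EMAIL] (PAN [PAN], ph [PHONE]).
```

Real deployments would pair such pattern matching with contextual classifiers and policy controls, but the principle is the same: confidential material is stripped before the prompt can become training data.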
The Way Forward
For organisations working with Gen AI, the best option might be to bring the experts in early, ensuring that a tool's design and implementation comply with privacy and transparency regulations right from the ideation phase.
Footnotes
1. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.