ARTICLE
7 August 2024

The Threats of Artificial Intelligence (AI) to the Legal and Regulatory Systems – All You Need to Know

Olisa Agbakoba Legal (OAL)


Recently, there has been substantial discussion concerning the ethics of artificial intelligence (AI), particularly the creation of robot weapons, as well as the related but broader question of AI as an existential threat to humanity. AI systems are built on large volumes of data and complex algorithms, yet many Nigerian lawmakers, researchers, and practitioners have little familiarity with these algorithms and technologies, which makes it difficult to understand how AI actually works. AI continues to evolve at such a rapid pace that the law cannot keep up: new AI tools are adopted as quickly as new technologies emerge, and even judges may not grasp what the technology does.

In this scenario, regulation and liability are two sides of the same coin in terms of public safety and welfare. Regulation is concerned with making AI systems as safe as possible; liability is concerned with determining who we can blame—or, more precisely, who we can seek legal recourse from—when anything goes wrong.

One of the most significant regulatory threats to AI adoption in Nigeria is the lack of an appropriate legal framework for this emerging technology. Following several scientists' concerns about the possibility of a technological singularity, which could pose an existential threat to humans, the need for a legislative framework to oversee AI development has become apparent.

AI Threats to Fundamental Human Rights

AI has the potential to influence humans and their lives. Many examples can be found throughout this article, ranging from self-driving cars to the use of AI to support court judgments and to detect the early stages of cancer. Even now, AI systems can deliver better outcomes and higher productivity than people in many domains, which is why humans developed AI in the first place. However, without sufficient safeguards, the use of AI may affect human rights, which is why it is critical that AI tools are always thoroughly assessed at the early stages of their development to minimize the possibility of a detrimental impact. Throughout this article, numerous challenges arising from the application of AI to specific human rights will be discussed in further detail; however, it is important to remember that the use of AI systems can affect almost all human rights.

The use of artificial intelligence can change Nigerian values and norms and lead to violations of fundamental rights, including the rights to freedom of expression and freedom of assembly; human dignity; non-discrimination on grounds of sex, race or ethnic origin, religion or belief, disability, age, or sexual orientation; the protection of personal data and private life; the right to an effective legal remedy and a fair trial; and consumer protection.

These risks may be the consequence of errors in the general design of AI systems (including human oversight) or the usage of data without addressing possible bias (e.g., the system is trained using exclusively or mostly data from males, resulting in inferior findings in relation to women). AI can do a variety of tasks that were once exclusive to humans.
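To make this data-bias point concrete, the following minimal Python sketch uses entirely synthetic data (and assumes the open-source scikit-learn library is available) to show how a model trained on data dominated by one group can produce markedly worse results for an under-represented group whose patterns differ. It is a simplified illustration under invented assumptions, not a description of any particular deployed system.

```python
# Minimal sketch (hypothetical synthetic data): a model trained mostly on one group
# performs much worse for the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic records: two features; the true outcome follows a group-specific pattern.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Training set: 95% from "group A", only 5% from the under-represented "group B".
X_a, y_a = make_group(950, shift=1.0)
X_b, y_b = make_group(50, shift=-1.0)  # group B's outcome pattern differs
X_train = np.vstack([X_a, X_b])
y_train = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate separately on fresh samples from each group.
for name, shift in [("group A", 1.0), ("group B", -1.0)]:
    X_t, y_t = make_group(1000, shift)
    print(name, "accuracy:", round(accuracy_score(y_t, model.predict(X_t)), 2))
# Typically prints high accuracy for group A and near-chance accuracy for group B,
# mirroring how skewed training data yields inferior results for one group.
```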

Consequently, individuals and legal entities will increasingly be exposed to acts and decisions made by or with the aid of AI systems, which may be difficult to comprehend and dispute effectively when necessary. In addition, AI enhances the capacity to monitor and analyse people's everyday activities. There is a possible risk, for instance, that state authorities or other organizations may utilize AI in violation of data protection and other regulations to conduct mass surveillance and monitor the behaviour of employees.

By analysing huge amounts of data and discovering connections between them, AI may also be used to retrace and de-anonymize data about individuals, generating additional personal data protection issues even for datasets that do not include personal data per se. AI is also utilized by internet intermediaries to prioritize information for their users and moderate content. The way data is processed, the way applications are designed, and the way people interact with each other can all affect the rights to free speech, privacy, and political freedom.
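A minimal, purely illustrative sketch of how such re-identification can work follows (all records are fabricated, and the pandas library is assumed): linking an "anonymized" dataset to a second dataset on shared quasi-identifiers is enough to re-attach names to sensitive records, and AI systems can perform this kind of linkage at far greater scale.

```python
# Minimal sketch (entirely fabricated records): re-identifying an "anonymized" dataset
# by joining on quasi-identifiers (postcode, birth year, sex).
import pandas as pd

# Dataset released without names ("anonymized").
health = pd.DataFrame({
    "postcode": ["100001", "100001", "900211"],
    "birth_year": [1985, 1991, 1985],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# Separate, public dataset that does contain names (e.g., a voter or customer list).
public = pd.DataFrame({
    "name": ["A. Bello", "C. Okafor"],
    "postcode": ["100001", "900211"],
    "birth_year": [1985, 1985],
    "sex": ["F", "F"],
})

# Joining on the shared quasi-identifiers re-attaches identities to the "anonymous" rows.
reidentified = public.merge(health, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```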

Bias and discrimination are inherent dangers in every social or economic endeavour. The process of human decision-making is not immune to errors and biases. However, the same bias, if present in AI, may have a considerably greater impact, affecting and discriminating against a far greater number of individuals without the social control mechanisms that govern human behaviour. This can also occur when an AI system "learns" during operation.

Liberty, Security and Fair Trial

The risks posed by using AI systems to facilitate or amplify unjust bias can pose a threat to the rights to liberty and security, as well as the right to a fair trial when these systems are used in situations where physical freedom or personal security is at stake (such as in law enforcement and the justice system). Some AI systems used to predict recidivism, for example, rely on the suspect's shared characteristics (such as address, income, nationality, debts, and employment), which raises concerns about maintaining an individualized approach to sentencing and other fundamental aspects of the right to a fair trial.

Potential bias in the data sets used by AI to learn is an evident example of a challenge influencing the fairness of a trial. AI systems do not comprehend the complete context of our complicated societies. Their input data is their sole context, and if the data used to train AI is inadequate or contains (even unintentional) bias, the AI output may be anticipated to be incomplete and biased as well. In such situations, where the outcome could not have been prevented or anticipated during the design phase, the risks will not stem from a flaw in the original design of the system but rather from the practical consequences of the correlations or patterns that the system identifies in a large dataset.

CASE STUDY: Nijeer Parks Was Arrested and Jailed Due to False Facial Recognition Match

Nijeer Parks, a 31-year-old Black man from Paterson, New Jersey, was accused of shoplifting and attempting to attack a police officer with a car in Woodbridge, New Jersey, in February 2019. Even though he was 30 miles away at the time of the event, the police used face recognition software to identify him. (General and Sarlin 2021)

Parks was later arrested and spent 11 days in jail on charges including aggravated assault, unlawful possession of weapons, shoplifting, and marijuana possession, among others. According to a police report, Parks was apprehended after a "high profile comparison" from a facial recognition scan of a fake ID found at the crime scene.

The case was dismissed in November 2019 due to a lack of evidence. Parks is now suing everyone involved in his arrest for civil rights violations, unlawful arrest, and false detention. Facial recognition technology, which employs machine learning algorithms to identify people based on their facial traits, is known to have numerous problems.

A 2019 study found that facial recognition algorithms are "much less accurate" at identifying Black and Asian faces. Nijeer Parks is the third person known to have been arrested because of a false facial recognition match; the individuals misidentified by these systems were all Black men, raising serious concerns about racial bias.

The opaque nature of an AI system may make it impossible to comprehend the reasoning behind its results, making it difficult or impossible to ensure full respect for the principle of equality of arms, to challenge the decision, to seek effective redress, or to obtain an effective remedy. AI systems frequently lack transparency in their conclusions and lack explainability, that is, the ability to explain both the technical processes of an AI system and the corresponding human judgments (e.g., the application areas of a system). As a result, humans cannot understand, and may reasonably question, how such systems reach their conclusions. These conclusions may be innocuous in everyday use, but when relied on in court they may interfere with the fairness of the proceedings.

The specific characteristics of numerous AI technologies, such as opacity (the "black box effect"), complexity, unpredictability, and partially autonomous behaviour, may make it difficult to verify compliance with, and may impede the effective enforcement of, Nigerian laws designed to protect fundamental rights. Authorities tasked with enforcing the law and affected parties may lack the means to ascertain how a decision made with the assistance of AI was reached and, consequently, whether the applicable laws were followed. Individuals and legal entities may struggle to gain access to justice when such decisions affect them adversely. For consent to be truly informed, it must be conditioned on an acceptable response to these four transparency issues: opacity (the "black box effect"), complexity, unpredictability, and partially autonomous behaviour.

If applied appropriately and with discretion, however, certain AI applications can also make the work of justice and law enforcement personnel more efficient and consequently have a beneficial influence on fundamental rights. This requires further efforts to improve the knowledge and implementation of artificial intelligence systems among judicial actors.

Personal Data Privacy Issues

The National Human Rights Commission (NHRC) protects several facets of our private lives, including a person's (general) privacy, physical, psychological, and moral integrity, identity, and autonomy. These categories may be impacted by a variety of AI applications. This is particularly noticeable when personal data is handled (for example, to identify or monitor individuals), although it can also exist in the absence of personal data processing. Examples of invasive AI applications include systems that track the faces or other biometric data of persons, such as micro-expressions, gait, voice tone, heart rate, or temperature data.

Aside from identification and verification, such data can be used to assess, anticipate, and affect a person's behaviour, as well as to profile or categorize individuals for a variety of reasons and settings, ranging from predictive policing to insurance premiums. There is also a lot of evidence that using biometric recognition technology can lead to discrimination, especially based on skin colour and/or sex if biases in the algorithm or underlying dataset are not dealt with well.

Furthermore, AI-based monitoring techniques can be utilized in ways that have a broad impact on "generic" privacy, identity, and autonomy, as well as the ability to constantly monitor, follow, identify, and influence individuals, compromising their moral and psychological integrity. As a result, people may feel compelled to conform to a specific norm, raising the problem of the power balance between the state or commercial organization employing tracking and surveillance technologies on the one hand and the tracked individuals on the other. The indiscriminate tracking of all aspects of people's lives (via online behaviour, location data, and data from smartwatches and other Internet-of-Things (IoT) applications, such as health trackers, smart speakers, thermostats, cars, and so on) can have the same impact on the right to privacy, including psychological integrity. A right to privacy entails the right to a private area free of AI-enabled surveillance, which is required for human development and democracy.

Threats to Freedoms of Expression, Assembly, Information Access, and Association

The use of AI systems, both online and offline, can impact the freedoms of expression, access to information, assembly, and association of individuals. AI applications can be used to efficiently intervene in the media space and significantly alter human interactions.

The internet and social media platforms have demonstrated enormous potential for individuals to organize and exercise their rights to peaceful assembly and association. Nonetheless, the use of AI-driven surveillance can jeopardize these rights by automatically tracking and identifying these (groups of) individuals or even by excluding them from social protests.

Moreover, the personalized tracking of individuals—both online and in the real world—may compromise these rights by reducing the protection of "group anonymity." This can result in individuals no longer participating in peaceful demonstrations and refraining from openly expressing their opinions, watching media, and reading particular books and newspapers.

In addition, the use of AI systems in online (social) media and news aggregators to pre-sort or display content based on a user's personal preferences or interests can impact the right to receive and transmit information and ideas. This can also reinforce outmoded social norms, such as gender-based stereotypes, as well as fuel polarization and extremism by creating echo chambers and filter bubbles.

Search engines, recommendation systems, and news aggregators are frequently opaque and unaccountable when it comes to the data they use to select or prioritize content, as well as the purpose of the specific selection or prioritization, which can be used to promote financial or political interests.

AI systems are routinely used to select and prioritize content that keeps users on the platform for as long as possible, regardless of the content's objectivity, veracity, diversity, or relevance. Moreover, content is increasingly "faked" by creating synthetic media footage, such as by mimicking the appearance or voice of real people using so-called "deep fakes." This technology can already generate or manipulate visual and audio content with an unprecedented capacity to deceive and blur the line between real and fake content. This has a significant impact on the ability of individuals to freely form and develop opinions, receive and transmit information, and exchange ideas, and may lead to the deterioration of our information society. In addition, online platforms are increasingly relying on AI systems to detect, flag, degrade, and remove content that violates their terms of service. Inaccuracies in AI systems can result in legitimate content protected by the right to freedom of expression being mistakenly flagged or removed. This is especially challenging for content that requires a nuanced and contextual understanding, such as hate speech and disinformation.
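As a purely illustrative sketch of this nuance problem, the toy moderation rule below (an invented keyword list in Python, not any platform's actual system) flags a post that quotes a slur in order to rebut it just as readily as it flags the original attack.

```python
# Minimal sketch (toy keyword rules): why naive automated moderation misflags
# legitimate counter-speech that quotes harmful language.
BLOCKLIST = {"vermin"}  # illustrative stand-in for a slur list

def flag(post: str) -> bool:
    # Flag a post if it contains any blocklisted term, ignoring context entirely.
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "Those people are vermin and should leave.",                       # genuine hate speech
    'Calling refugees "vermin" is dehumanizing - we must push back.',  # counter-speech
]
for p in posts:
    print(flag(p), "-", p)
# Both posts are flagged: the rule cannot tell an attack from a rebuttal,
# which is exactly the contextual-understanding problem described above.
```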

As online platforms have captured audiences and advertising revenue, traditional news outlets have struggled to survive. The way people now receive news and information online therefore threatens the future of news media and, with it, a free, independent, and diverse media ecosystem.

Issues Relating to Inequalities, Discriminations and Biases

One of the most frequently cited effects of AI systems on the prohibition of discrimination and the right to equal treatment is that AI systems may be used to identify and alleviate human prejudice. Machine learning algorithms have been used to search past court decisions for patterns and anomalies. Such algorithms create opportunities to respond to recognized prejudices, whether influenced by politics and race or by birthdays, weather, and sporting events, by making them visible. It has also been proposed that applying the same algorithm to a variety of cases will ensure that the same decision-making logic is applied consistently. In certain settings, this may be true for transparent and well-designed symbolic AI systems, but algorithms may also reinforce and scale up the worst excesses of human bias and prejudice.

At the same time, the use of AI systems can allow biases and stereotypes, sexism, racism, ageism, and other types of unfair discrimination (including discrimination based on proxies or on intersectional grounds) to persist and worsen. This poses a new challenge to non-discrimination and equal treatment.

In general, AI developers do not create biased algorithms on purpose, but there are several unintended ways in which bias might arise. Take, for example, a symbolic AI algorithm for screening employment applications. It may assess candidates only on their education and experience. However, if it fails to account for factors such as maternity leave, or does not recognize education at foreign universities in the same manner as a human selection committee would, the algorithm may be prejudiced against women and international applicants.

Consider a comparable AI tool within the machine learning (ML) paradigm. These algorithms devise their own methods for determining which kinds of candidates were chosen in their training data. The model can learn structural biases in these selections if there has been a history of them, such as racial discrimination. Even when data on country or ethnicity is deleted, ML is effective at discovering proxies for the underlying patterns in other data, such as languages, postcodes, or schools, which may be good predictors of ethnicity.
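The proxy effect can be made concrete with a small hypothetical sketch (synthetic data; scikit-learn assumed): the model below is never shown the protected attribute, yet its outputs still track it through a correlated postcode feature. All names and numbers are invented for illustration only.

```python
# Minimal sketch (synthetic data): a model trained without any protected attribute
# still reproduces historical bias through a correlated proxy (postcode).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical protected attribute (never given to the model).
group = rng.integers(0, 2, size=n)

# Postcode acts as a proxy: it matches group membership 90% of the time.
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical hiring outcomes reflect past discrimination against group 1.
hired = ((rng.random(n) < 0.6) & (group == 0)) | ((rng.random(n) < 0.2) & (group == 1))

# The model is trained only on the "neutral" postcode feature.
model = LogisticRegression().fit(postcode.reshape(-1, 1), hired.astype(int))
rates = [model.predict_proba([[p]])[0, 1] for p in (0, 1)]
print("Predicted hiring rate by postcode:", [round(r, 2) for r in rates])
# The gap between the two rates mirrors the historical bias against group 1,
# even though group membership was never an explicit input.
```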

CASE STUDY A: Amazon's AI-based Recruiting Tool Showed Bias Against Women

Amazon discovered that its new AI-based recruitment system was not assessing candidates for software developer jobs and other technical positions in a gender-neutral manner: the new recruiting engine "did not like women."

Amazon began using machine learning programmes in 2014 to examine the resumes of job seekers. The AI-powered experimental hiring tool, however, had a major flaw: it was biased against women. The programme was trained on resumes submitted to the company over a 10-year period. Because most of these resumes came from men, the algorithm taught itself to favour male candidates. This meant that resumes featuring words like "women's" (as in "women's chess club captain") were demoted. Similarly, graduates of two all-women's universities were ranked lower.

By 2015, the company realised the tool was not evaluating applicants for diverse roles in a gender-neutral manner, and the experiment was subsequently discontinued. The incident was brought to light in 2018 after Reuters reported on it.

CASE STUDY B: Microsoft's AI Chatbot Turns Sexist and Racist on Twitter

Microsoft released the Tay AI chatbot in 2016. Through "casual and playful chat," Tay interacted with Twitter users. But in less than 24 hours, individuals on Twitter manipulated the bot to make blatantly sexist and racist statements.

Tay used AI to learn from its interactions with Twitter users. It became "smarter" the more conversations it had. It didn't take long for the bot to start repeating offensive user comments such as "Hitler was right," "feminism is cancer," and "9/11 was an inside job."

Microsoft had to abandon the bot a day after its debut due to the disaster that was developing. Peter Lee, vice president of research at Microsoft, later apologized for Tay's unintentionally inappropriate and harmful tweets, saying they "do not represent who we are, what we stand for, or how we developed Tay."

The risk of discrimination can arise in a variety of ways, including through biased training data (e.g., when the data set is not sufficiently representative or accurate), biased design of the algorithm or its optimization function (e.g., due to developers' conscious or unconscious stereotypes or biases), exposure to a biased environment once it is used, or biased use of the AI system. For example, historical data sets may be deficient in gender-balanced data due to prior legal or factual discrimination against women. When such a data set is exploited by AI systems, it might result in similarly skewed conclusions, perpetuating unfair discrimination. The same may be said for historically vulnerable, excluded, or marginalized communities in general. Furthermore, disparities in representation within the AI field itself may compound this danger. Measures to promote ethnic and social diversity and gender balance in the AI workforce might help alleviate some of these dangers.
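One practical safeguard implied by this discussion is routine auditing of an AI system's outputs across groups. The short Python sketch below computes one widely used heuristic, the disparate-impact (or "four-fifths rule") ratio, on hypothetical screening decisions; the 0.8 threshold is illustrative and drawn from US employment-selection guidance, not from any Nigerian legal standard.

```python
# Minimal sketch: a simple fairness audit over a system's decisions.
def disparate_impact(decisions, groups, protected, reference):
    # decisions: list of 0/1 outcomes; groups: group label for each decision.
    def rate(g):
        sel = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(sel) / len(sel) if sel else 0.0
    return rate(protected) / rate(reference)

# Hypothetical outcomes from an automated screening system.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1]
groups    = ["B", "B", "A", "A", "B", "B", "A", "B", "B", "B", "A", "A"]

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # below 0.8 is commonly treated as a red flag
```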

Moreover, when the transparency of AI systems' decision-making processes is not ensured and mandatory reporting or suitability requirements are not in place, the existence of such biases can easily go undetected or even be obscured, marginalizing the social control mechanisms that normally govern human behaviour. Several studies have shown that with only a few factors it is feasible to de-anonymize data and generate accurate predictions about individuals. In one well-documented example, an algorithm designed to estimate a prisoner's likelihood of reoffending was deployed to support more objective parole decisions, but it was shown to be erroneously and unfairly biased against Black inmates.

Furthermore, discrimination based on incorrect predictions is possible, such as employing algorithmic analysis of speech patterns and facial movements to "identify" disability in job candidates. Such prejudice has been coupled with opacity, whether due to technical complexity or economic sensitivity, to limit victims' options for redress across a wide variety of domains, from justice and policing to recruiting and employee appraisal. Another example of algorithmic prejudice is the difference in reliability for different users, with face and voice recognition systems consistently performing best for white men. This is most likely because the training data is not balanced, and it can lead to uneven service for people who pay for these systems, as well as when these methods are used to analyse data for security applications or to screen job applicants.
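Detecting this kind of unequal reliability requires disaggregated evaluation. The following minimal sketch (fabricated recognition results, plain Python) computes accuracy per demographic group rather than in aggregate, the kind of analysis that exposed the disparities described above.

```python
# Minimal sketch (fabricated results): measuring recognition accuracy per demographic group.
from collections import defaultdict

# Each record: (demographic group, ground-truth identity, identity predicted by the system).
results = [
    ("white_male", "id_01", "id_01"), ("white_male", "id_02", "id_02"),
    ("white_male", "id_03", "id_03"), ("black_female", "id_04", "id_09"),
    ("black_female", "id_05", "id_05"), ("black_female", "id_06", "id_11"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, predicted in results:
    total[group] += 1
    correct[group] += int(truth == predicted)

for group in total:
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")
# Reporting accuracy only in aggregate would hide the gap between groups; disaggregated
# evaluation like this is a precondition for detecting unequal performance.
```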

It is vital to emphasize that AI algorithms cannot be impartial because, like humans, they acquire during training a way of making sense of what they have seen in the past and use this "worldview" to categorize the new scenarios with which they are confronted. It is commonly overlooked that algorithms embody subjective worldviews; they merely seem objective because their biases are applied more consistently than those of humans.

Perhaps their use of numbers to portray complicated social realities lends them the appearance of exactitude ("mathwashing"). Perhaps humans see their remarkable might but struggle to understand their rationale, so they simply accept their apparent dominance. No matter the reason, it's important to understand that AI agents are inherently subjective if you want to make sure they only do jobs for which they are qualified.

After all, if AI is a discrimination machine, it is unquestionably preferable to train it to discriminate against cancer rather than vulnerable humans. In the same way, AI might be able to help find biases in how people make decisions, but it's unlikely that it will play more than a supporting role in efforts to get rid of deeply rooted prejudices in human society. The next chapter delves more deeply into these alternatives.

Finally, given that people are themselves inherently subjective and often difficult to comprehend, some argue that algorithms should not be faulted for sharing the same characteristics. When algorithms or people recommend films based on someone's tastes, bias and explainability matter little, but they may be critical when it comes to criminal proceedings or hiring new employees.

Biases can multiply and spread at an alarming rate when they are integrated into algorithms that influence these areas. We have safeguards in place to deal with human subjectivity and occasionally untrustworthy explanations: legal measures such as courts of law, social measures such as norms and values, and even technical measures such as lie-detector tests. One reason algorithms are seen as "biased black boxes" is that they are not always subject to the same controls.

AI Social and Economic Impacts

The widespread adoption of AI systems in every aspect of our lives also poses new threats to our social and economic rights. AI systems are increasingly used to monitor and track employees, distribute work without human intervention, and evaluate and predict employee potential and performance in hiring and firing decisions. In some cases, this can undermine workers' right to a decent wage, because algorithms may determine pay in ways that are irregular, inconsistent, and inadequate.

Moreover, AI systems can also be used to detect and combat worker unionization. These applications can threaten the rights to fair, safe, and healthy working conditions, dignity at work, and the right to associate. The capacity for discrimination of AI systems that evaluate and predict the performance of job applicants or employees can also undermine equality, including gender equality, in employment and occupation matters.

AI systems can be used in the context of social security decisions, which may have an impact on the right that all workers and their dependents have to social security. Indeed, AI systems are increasingly utilized in the administration of social welfare, and the decisions made in this context can have a substantial impact on the lives of individuals. The deployment of AI systems in education or housing allocation administrations raises comparable concerns.

Furthermore, whenever AI systems are used to automate decisions regarding the provision of healthcare and medical assistance, such use may affect the right of everyone to benefit from measures that allow them to enjoy the highest attainable standard of health, as well as the right to receive social or medical assistance. AI systems can be used, for instance, to determine patients' access to healthcare services by analysing their personal data, such as their healthcare records, lifestyle data, and other information. Importantly, this must occur in accordance not only with the right to privacy and the protection of personal data but also with all social rights, the impact on which has received less attention than the impact on civil and political rights.

As some contemporary AI applications, such as facial recognition, have demonstrated, deviation from commonly held social values can result in opposition and controversy. In response, specific values such as privacy and non-discrimination can be incorporated into technology "by design." In the symbolic AI paradigm, this involves coding specific instructions, whereas in ML it entails determining which data is used to train the algorithm. Value alignment can also involve limiting algorithm usage to specific contexts and implementing robust quality control and impact assessment mechanisms. It is therefore reasonable to expect AI development today to reflect current attitudes towards autonomy and privacy, while recognizing that these values may change considerably in the years to come.

Using predictions of individuals' willingness to pay to set personalized prices has been identified as one of the challenges to transparency. These practices may also pose obstacles to competition, but they can be difficult to investigate because of unequal access to the underlying algorithms. It is also conceivable that price-setting algorithms could learn to collude automatically in order to fix prices without the knowledge of the vendors involved. Both personalized pricing and automated collusion may pose a threat to competition authorities.
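A minimal sketch of the personalized-pricing mechanism follows (synthetic data; scikit-learn assumed). The features and numbers are invented solely to show how a predicted willingness to pay can translate directly into individual prices that customers and regulators cannot easily scrutinize without access to the algorithm.

```python
# Minimal sketch (synthetic data): personalized pricing driven by a predicted willingness to pay.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500

# Hypothetical behavioural features: past spend and a device "premium-ness" score.
X = np.column_stack([rng.gamma(2.0, 50.0, n), rng.random(n)])
willingness_to_pay = 20 + 0.1 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 2, n)

# The vendor fits a model to historical purchase data...
model = LinearRegression().fit(X, willingness_to_pay)

# ...and quotes each new customer a price just below their predicted maximum.
new_customers = np.array([[50.0, 0.1], [400.0, 0.9]])
quotes = 0.95 * model.predict(new_customers)
print("Personalized quotes:", np.round(quotes, 2))
# Two customers are shown different prices for the same product, and neither can
# easily see why - the behaviour regulators struggle to audit without algorithm access.
```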

The availability of data and algorithms makes it increasingly simple and inexpensive to produce deep fakes, making them accessible to individuals with modest skills and resources. The fakes themselves are only one aspect of the problem, as these materials can be rapidly disseminated via powerful dissemination platforms, which are in some cases also powered by machine learning. Collectively, these applications pose financial risks, risks to reputation, and problems for people, organizations, and society when it comes to making decisions.

Differentiating between appearances and reality in the digital age is also indicative of a broader problem. This includes the use of algorithms to measure and predict performance. For instance, ML algorithms have been used to screen job candidates by automatically analysing videos of them speaking, using features such as speech patterns and facial movements as proxies for job suitability. Within such a system, these characteristics become key predictors of job performance, which can reinforce structural inequalities and prejudices. In a further twist, video analysis can be used to categorize candidates deemed likely to have a disability, opening the door to discrimination against people with disabilities or, more precisely, against those whom the algorithm identifies as likely to have a disability.

Threats of Job Losses and Obsolete Employment from AI Adoption

A typical predicted consequence of future AI is job displacement. In gloomy scenarios, human employees are replaced by agents who do not take vacations, join labour unions, or even receive pay. This results in increasingly unequal societies, as those who can do useful activities or have a stake in the means of production become affluent while the remainder endure unemployment and poverty. Unlike prior waves of automation, employees lose their function in the industrial system and, consequently, their negotiating power, resulting in the formation of an irrelevant underclass.

If the notion of employment itself becomes outmoded, this job obsolescence is not a concern for optimists. It has been argued that future AI might take over nearly all professions, allowing us to construct a "Digital Athens" in which robots play the unpleasant role of slaves, freeing humans to focus on interpersonal, artistic, leisure, and recreational pursuits. Some may opt to work for personal fulfilment or additional compensation, possibly in technological development or professions where human interaction is essential, such as delivering social care. These two ideas seem to be at odds with each other, but they have been combined into a single vision in which some countries benefit from AI progress and take care of their people while others fall behind, creating pockets of extreme wealth and extreme poverty around the world.

These scenarios are purposefully provocative, prompting us to consider the future effects of AI, which may closely resemble its effects today. It appears likely that AI will affect people in different industries in different ways, depending on their skills, sector, location, and ability to retrain.

Threats of AI Anthropomorphism and Humanized Robotics

Although AI has many potential applications, anthropomorphizing it can pose several risks. AI systems can mimic aspects of human thought, demonstrate apparent intelligence, and express emotions. But do they truly have intelligence or emotions? No: they may exhibit emotions but do not feel them. They can display and detect emotions using training datasets that condition them to react according to a pattern, but only living beings, with their sensory organs, can actually feel anything. By demonstrating comparable behaviour, however, AI systems can lead humans to assume that they can. That is what anthropomorphizing AI accomplishes: it blinds people to the true capabilities and limits of AI.

The argument over "Will AI diminish jobs?" has been going on since AI systems began reducing the need for people in small, routine tasks, and it continues now that they have taken over some larger and more difficult projects. Many individuals believe that AI will eliminate human employment, while others believe that the technology will generate more new jobs than it eliminates. Making AI more human-like, however, will almost certainly reduce the need for people, putting many out of work.

Anthropomorphized AI systems can create artificial companions capable of befriending a person and building a completely new environment for him or her. It may seem wonderful to have an artificial companion who can grasp feelings and remain trustworthy and loyal. However, this can make people less interactive with others: people may stop connecting socially with their coworkers or friends because they are content in their own world with their robot companions.

Humans are constructing AI systems for their own use, and the technology has given much back to them. However, anthropomorphizing AI will result in a situation in which AI will utilize humanity rather than the other way around. It is not that AI should not be employed; rather, it should not be anthropomorphized because of the numerous threats it brings.

These are some key ethical and legal issues of AI Anthropomorphism:

A. AI as Legal Entities

If robots can be endowed with intelligence and the ability to make decisions that affect others, do they also have, or can they be granted, human-like rights and responsibilities? In various nations, non-human entities, typically corporations, already hold legal rights and obligations; as AI systems come to resemble such entities more closely, does this imply that they, too, should have rights? This question has been debated by many AI researchers, and it is even more pertinent given the level of intelligence demonstrated by robots such as Sophia and the fact that the Saudi government granted this robot citizenship.

CASE STUDY A: Robot Citizenship in Saudi Arabia

Saudi Arabia has granted a robot citizenship, making it the first government to do so. The move is part of Saudi Arabia's larger plan to modernize itself, and its technical advancements are drawing attention away from its human rights transgressions; Prince Bin Salman's attempt to lead his kingdom into the future may send his people back in time. Some think that discussing robot rights is premature, especially because robots have not yet gained mainstream use. Others believe that if robots are awarded rights, they would be placed in a category similar to animal rights.

CASE STUDY B: Robot Arrested Over Ecstasy Purchase on the Darknet

Another case in point occurred a few years ago in Switzerland, when a robot purchased ecstasy (an illegal narcotic) on the dark web and was "arrested," i.e., seized. Random Darknet Shopper is a computer program that randomly buys an item from the "dark net," a part of the internet that is not open to the public. Swiss police seized the bot after it purchased 10 ecstasy tablets from Germany. It was later released "without charge," according to the artists behind the bot, who created it as part of an art exhibition to display items it bought on the dark web, such as sneakers, a passport scan, and cigarettes. Consciously purchasing illegal narcotics is a serious offence, but how would you administer justice in such a case?

Some emerging robots can converse with humans in everyday ways. If such a robot were abused, would a court be able to punish the perpetrator?

B. Robots for Surveillance

Robotic technology has enabled unprecedented levels of direct surveillance. The vast array of sensors (cameras, laser and sonar rangefinders, GPS, etc.) with which robots may be outfitted, as well as the variety of robot designs and the breadth of robotic physical capabilities, have significantly increased surveillance capability. Unmanned drones used by the military and police can remain in the air for days, navigate independently, and stake out particular places for extended periods without being noticed. Cybercriminals also employ AI frameworks to compromise vulnerable hosts, and military leaders are expanding their use of AI in increasingly audacious ways. These cutting-edge systems are made possible in part by sophisticated machine learning (ML) models. However, those same ML models create new vulnerabilities in AI systems that adversaries can exploit, which could have disastrous effects.

Adversarial machine learning (AML), the deliberate manipulation of ML models through crafted inputs, should be of particular concern to Nigeria's defence, because it threatens to disrupt and possibly even bypass or overcome Nigeria's emerging AI-based defence capabilities. There are already a multitude of ways an adversary could employ AML to attack an AI defence system. An adversary could, for instance, use AML to deceive automated surveillance cameras, allowing the adversary to move undetected in areas that are otherwise monitored. An adversary could also use AML to subvert the navigation systems of autonomous vehicles, causing them to veer away from their intended targets or, in the worst case, crash into civilian populations or infrastructure. In sub-threshold or grey-zone conflicts, AML could be used to help an adversary conduct harmful activities on social media that evade AI-powered content moderation algorithms, allowing those activities to continue undetected.

In addition, the employment of robotic monitoring is not restricted to government agencies. Private corporations can use drones and other robotic technology not just for lawful objectives like safeguarding their facilities but also for exploitative and marketing purposes. The unregulated employment of robots for surveillance raises the potential for pervasive or mass surveillance and diminishes individual privacy expectations. There is no dearth of legislative regulations aimed at preventing direct observation.

The Nigerian Data Protection Regulation (NDPR) prohibits the invasion of privacy by both public authorities and private companies, and the National Human Rights Commission has characterized mass surveillance practices as unlawful invasions of privacy in several rulings. UN instruments take a similar approach, although they are frequently either non-binding or non-enforceable. It is possible to argue that the harm to privacy posed by direct surveillance persists because of the difficulty of implementing these principles.

C. Robots' Privacy Access

Robots' ability to grant access to usually private spaces is another concerning privacy aspect. Specifically, the use of Internet-capable robots allows hackers unparalleled access to a home's interior. These robots may be equipped with a variety of sensors, and some of them can transmit images and noises in real-time via the Internet. In other words, they can keep detailed recordings of happenings in houses and transmit them through the Internet. Therefore, if these robots are hacked, the attackers will have access to all the victims' private information. These details may include photographs of extra keys, for instance, and victims may also be vulnerable to physical incursion. In recent research undertaken by computer experts at the University of Utah, a variety of household robots on the market were assessed for hackability and found to be insecure and susceptible to hacking. The investigation demonstrated that not only could hackers eavesdrop on nearby conversations, but they could also control the robots. Some people, however, say that this risk could be greatly reduced if robots were designed and programmed with stricter safety rules.

D. Robots as Social Actors

Humans are likely to be exposed to a new source of emotional manipulation as the role of robots as social partners in society expands. In addition to the immediate effects of such manipulation on human behaviour and psychology, it exacerbates the threat that digital technologies bring to individual privacy. By their very nature, social robots are capable of employing emotional persuasion methods (such as fear or reward) to elicit trust from people, and they may persuade individuals to give more information about themselves than they would freely and knowingly provide to a database.

Ian Kerr cites Elle Girl Buddy as an example of an application that attempted to engage in social interactions with young people and children for commercial purposes. In addition, it is believed that while engaging with friendly robots, people reveal their "most private psychological thoughts," as the robots will revolutionize human conceptions of love and sexuality by allowing them to examine themselves more thoroughly. As people express their interior states and discover themselves in their relationships with friendly robots, these expressions may be captured using robot sensing equipment or code. This is the first time that the ordinarily personal experiences of individuals are being converted into information, and this new category of extremely sensitive personal information is just as susceptible to privacy problems as any other type of information. Nevertheless, the majority of friendly robots seek personal information in order to engage with people on a more intimate level, and being excessively prescriptive may negate the benefits of social robots.

Because of the social role of robots, other robotics privacy issues are slightly more complicated. People respond instinctively to robots, especially friendly robots, as if they were humans. In other words, when humans are in the presence of friendly robots, they cannot behave as they would if they were truly alone. Consequently, the presence of friendly robots may reduce the opportunities for solitude and self-promotion, which may compromise long-held privacy ideals. Given that it is not privacy itself that is at risk but rather the values that are safeguarded by privacy rules, it is evident that this threat cannot be handled by traditional privacy measures.

Conclusion

In conclusion, the rapid advancement and adoption of artificial intelligence (AI) pose significant threats to the legal and regulatory systems, particularly in Nigeria, where there is a lack of an adequate legal framework and understanding of AI technologies among lawmakers, researchers, and practitioners. The potential for AI to infringe on fundamental human rights, exacerbate biases and discrimination, and challenge personal data privacy underscores the urgent need for comprehensive regulatory measures. AI's opaque nature, the complexity of its algorithms, and the anthropomorphizing of AI systems further complicate accountability and transparency, making it difficult to ensure fairness and justice. The evolving role of AI in surveillance, social interactions, and economic activities highlights the critical balance required between leveraging AI's benefits and safeguarding public welfare, privacy, and human rights. Therefore, it is imperative to develop and enforce robust legal and ethical frameworks to govern AI's integration into society, ensuring its use aligns with societal values and legal standards.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
