Artificial Intelligence And The Energy Sector: A Legal Perspective

Alliance Law Firm


ALF is a multiple award-winning law firm operating out of offices in Lagos, Abuja, and Port Harcourt, Nigeria. Our mission is to establish a world-class, full-service Nigerian law firm distinguished by its premium service. We incorporate a rich blend of traditional legal practice with the dynamism required to satisfy our broad range of clients, who operate in various industries.
  1. INTRODUCTION

The energy sector is undergoing a significant transformation fueled by artificial intelligence (AI). While AI offers immense potential for optimising energy production, distribution, and consumption, its integration raises critical legal challenges. This article delves into the difficulties of assigning liability for AI-driven decisions within the energy sector, mirroring broader global concerns. We will explore the unique challenges posed by AI in this context, analyse existing legal frameworks, and discuss potential solutions for navigating this evolving legal landscape.

  2. HOW AI IMPACTS THE ENERGY SECTOR

Navigating the rapidly evolving energy sector is increasingly challenging due to its complicated regulatory landscape. Artificial intelligence has emerged as a potential solution to this challenge, with applications across the sector:

  1. Smart grids: AI technology can play a crucial role in managing smart grids, which are electricity supply networks that use digital communications to detect and react to local changes in usage. By analyzing historical and real-time data, AI algorithms can predict consumption patterns, enabling utilities to allocate resources more efficiently, particularly during sudden periods of high demand. In such cases, AI can improve the distribution of electricity, ensuring that power is directed where it is needed most and reducing the risk of blackouts. However, the use of AI in smart grids raises questions about data privacy and security, as well as about who is accountable for AI-driven decisions that might lead to outages.
  2. Predictive maintenance: Energy companies can utilize AI to predict when their equipment is likely to fail or require maintenance. By analyzing large amounts of data from sources such as usage statistics, weather data, and historical maintenance records, machine learning models can predict potential breakdowns before they occur (a minimal code sketch of this approach follows this list). This minimizes downtime, reduces repair costs, and improves the overall reliability of energy infrastructure. However, relying on AI algorithms for critical maintenance decisions raises liability concerns. If an AI-predicted failure occurs and causes damage, who is responsible: the developer of the AI, the company that uses it, or both?
  3. Energy trading: Energy companies use AI to analyze real-time data on pricing, demand, and supply trends to inform profitable trading decisions. AI also supports risk management by proactively assessing market volatility and uncertainty. Algorithmic trading driven by AI operates at speed, executing numerous trades in milliseconds, while optimizing energy portfolios, simulating market scenarios, analyzing sentiment, automating routine tasks, and continually adapting to changing market conditions. AI's exceptional pattern recognition enables it to identify trends in large datasets, making it invaluable in navigating the dynamic energy market, and it can detect market opportunities and risks that may elude human traders.
  4. Oil and gas exploration: The impact of AI on oil and gas exploration is significant. With the ability to analyze vast amounts of geological data with remarkable accuracy, AI can identify potential oil and gas reserves that may have been missed using traditional methods. Moreover, it can assess the feasibility of these reserves, directing exploration efforts towards the most promising prospects. This not only improves efficiency but also significantly increases the success rate of exploration activities, reducing unnecessary costs.
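To make the predictive-maintenance idea above concrete, the sketch below shows in outline how a machine-learning model might be trained on historical equipment records to flag units at risk of failure. It is a minimal illustration in Python using the open-source scikit-learn library; the features (vibration, temperature, hours since last service), the synthetic data, and the alert threshold are all hypothetical assumptions for the example, not drawn from any real utility.

```python
# Minimal predictive-maintenance sketch (hypothetical data and features).
# A gradient-boosted classifier is trained on historical equipment records
# to estimate the probability that a unit fails within the next 30 days.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(seed=0)
n = 5_000

# Hypothetical features: vibration (mm/s), bearing temperature (deg C),
# and operating hours since the last service.
X = np.column_stack([
    rng.normal(2.0, 0.6, n),        # vibration
    rng.normal(70.0, 8.0, n),       # temperature
    rng.uniform(0, 8_000, n),       # hours since last service
])

# Hypothetical label: failure within 30 days, made more likely for hot,
# heavily vibrating, long-unserviced equipment.
risk = 0.8 * X[:, 0] + 0.05 * X[:, 1] + 0.0004 * X[:, 2]
y = (risk + rng.normal(0, 0.5, n) > 6.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# In operation, units whose predicted failure probability exceeds a chosen
# threshold would be queued for inspection before they break down.
probs = model.predict_proba(X_test)[:, 1]
flagged = (probs > 0.5).sum()
print(f"{flagged} of {len(probs)} units flagged for inspection")
```

Even in a toy example like this, the liability question raised above is visible: the training data, the model, and the alert threshold are typically supplied by different parties, each of whom could plausibly be implicated when a prediction fails.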
  3. THE CHALLENGES OF REGULATING ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) presents unique challenges for regulation due to several key characteristics that differentiate it from other technologies. These challenges complicate efforts to create effective legal frameworks to manage AI-related risks.

a. Autonomy and Responsibility

One of the most distinctive features of AI is its ability to operate autonomously. AI systems can perform complex tasks, such as driving cars or managing investment portfolios, without human intervention. This increasing autonomy raises significant issues regarding control and responsibility, and as AI systems take on more sophisticated roles, the legal system must adapt to address the challenges posed by their autonomous actions.

A critical issue related to AI autonomy is foreseeability. AI systems can produce actions and solutions that are unexpected even to their creators. For instance, in 2019, Google's AI company DeepMind announced a collaboration with the UK National Grid to optimize energy usage and distribution. The AI system used machine learning to predict power demand and adjust supply accordingly, aiming to increase efficiency and reduce costs. In the process, the AI developed an unexpected but highly effective strategy for balancing the grid: it identified subtle patterns in energy consumption that human operators had not noticed, finding correlations between factors such as weather patterns and energy use that allowed it to make more precise adjustments to power generation and distribution. This led to a significant reduction in energy wastage and improved overall grid stability.

In addition, a cancer pathology AI, C-Path, discovered unexpected indicators for breast cancer prognosis that contradicted established medical knowledge.1 These examples highlight that AI systems can generate solutions beyond human anticipation due to their computational power and lack of cognitive biases. This unpredictability makes it difficult to foresee how AI will behave, complicating the assignment of liability when harm occurs.

aa. Legal Challenge in Foreseeability and Causation

From a legal perspective, the unpredictability of AI actions complicates the concepts of foreseeability and causation. If an AI system acts in ways its designers did not anticipate, holding these designers liable for resulting harm becomes problematic. The AI's capacity to learn and adapt further complicates this issue, as its behaviour can change based on experiences post-design. This unpredictability could be viewed as a superseding cause, potentially absolving designers of liability but leaving victims without compensation.

b. Research and Development Characteristics

Effective regulation of AI is also challenged by the nature of AI research and development (R&D). AI R&D shares four characteristics with other Information Age technologies: discreetness, discreteness, diffuseness, and opacity. AI development often occurs in isolated or confidential settings, making it hard to monitor and regulate (discreetness). AI systems are built from multiple components, usually sourced from different entities, complicating the attribution of responsibility (discreteness). AI development is widespread and can be carried out by numerous actors, including small teams or individuals, making centralized regulation difficult (diffuseness). Finally, the inner workings of AI systems are often not transparent, whether due to the complexity of the technology or intentional secrecy by developers, hindering effective oversight (opacity).

  4. A ROLE FOR THE LAW IN MANAGING AI

Despite the complexities and potential dangers of artificial intelligence (AI), there is considerable optimism that legal mechanisms can mitigate the associated public risks without stifling innovation. Addressing the legal gaps around AI requires a multifaceted approach that balances regulation with technological progress. The first challenge is the current legal framework's inadequacy in dealing with AI. The law needs to evolve to cover the unique aspects of AI. This includes creating legal definitions for AI, which, while difficult, is not unprecedented. The legal system has a history of defining imprecise terms and adjusting as needed.2 Similarly, issues of foreseeability and causation in determining liability are not new to the courts, which have adapted to technological changes over time.

A further challenge is controlling AI systems to prevent harm once they have been deployed. This does not, however, preclude regulating AI development before deployment, and existing legal frameworks can address AI's discrete and opaque nature. AI is not unique in its complexity: many modern technologies comprise components from multiple sources, and courts have long managed liability in such scenarios. The automotive industry, for example, has rules for apportioning liability when defects arise from multiple components. Similarly, AI's opacity can be reduced through legal mechanisms that mandate transparency, whether by legislation requiring the publication of AI code and specifications or by tax incentives and tort standards favouring transparent systems, ensuring companies disclose how their systems work.

It is also worth noting that the development of AI by large, visible entities offers a strategic advantage for regulation. Despite AI's potential for diffuse development, major advances are likely to come from large corporations with significant financial and human capital. Companies such as Google, IBM, Facebook, and Microsoft are already heavily invested in AI projects, suggesting that commercial and governmental entities will dominate AI development. This concentration makes it easier for regulators and courts to oversee and manage AI's public risks.

  5. THE GLOBAL RESPONSE: REGULATING AI IN THE ENERGY SECTOR

The use of Artificial Intelligence (AI) in the energy sector is rapidly evolving on a global scale, with several countries and regions now implementing regulations and guidelines to address its impact. Policymakers worldwide are increasingly recognizing the importance of regulating AI to ensure that it is being used responsibly and ethically. Countries such as Brazil, Israel, Italy, Japan, and the UAE are actively shaping their AI policies to safeguard against any potential negative consequences.

The European Union has recently made a significant move by introducing comprehensive AI regulation, the EU Artificial Intelligence Act. The regulation covers areas such as transparency in AI systems, the use of AI in public spaces, and high-risk systems. Models that pose systemic risks and have a high impact will be subject to stricter requirements, including model evaluation, risk mitigation, and incident reporting.

China has released a preliminary set of regulations for generative AI and is inviting feedback from the public on the proposed rules. Unlike most other countries, China's regulations state that generative AI must align with "Socialist Core Values." The draft regulations suggest that developers are responsible for the output created by their AI, and there are limitations on sourcing training data as developers can be held liable if their training data infringes on someone else's intellectual property. The regulations mandate that AI services must produce only "true and accurate" content. These proposed rules are an expansion of existing legislation related to deepfakes, recommendation algorithms, and data security, placing China ahead of other nations that are just starting to draft new laws.

In 2022, the Ministry of Innovation, Science, and Technology of Israel published a preliminary policy on regulating AI. The draft policy aims to provide a moral and business-oriented guide for companies, organizations, and government bodies working in the field of artificial intelligence. The policy highlights the importance of responsible innovation and emphasizes that the development and use of AI must adhere to the rule of law, fundamental rights, public interests, human dignity, and privacy.

  6. A PROPOSED SOLUTION FOR FUTURE CONSIDERATION

Artificial intelligence is revolutionizing various sectors, from healthcare to finance, but it also presents significant public risks. As AI systems become more autonomous and sophisticated, ensuring their safety and alignment with human values becomes crucial. To address these challenges, a comprehensive regulatory framework can be implemented that manages AI risks while promoting innovation.

The framework proposes the establishment of a specialized agency responsible for certifying the safety of AI systems. Unlike traditional regulatory bodies that might ban unsafe products outright, it introduces a nuanced liability system that distinguishes between certified and uncertified AI (a schematic sketch of the rule follows this list):

  1. Certified AI: Designers, manufacturers, and sellers of AI systems that receive agency certification will have limited tort liability. This limited liability incentivizes companies to ensure their AI systems meet safety standards.
  2. Uncertified AI: Companies that offer uncertified AI for commercial use will face strict joint and several liability. This means they can be held fully accountable for any harm caused by their AI systems, encouraging them to seek certification.
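The two-tier rule lends itself to a compact schematic statement. The sketch below expresses the decision logic in Python purely as an illustration; the apportionment-by-fault rule used for certified systems is an assumption made for the example, since the proposal itself specifies only that certified systems attract limited liability.

```python
# Schematic sketch of the proposed two-tier liability rule (illustrative only).
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    share_of_fault: float  # apportioned fault, summing to 1.0 across parties

def damages_owed(parties: list[Party], harm: float, certified: bool) -> dict[str, float]:
    """Return each party's exposure for a given quantum of harm.

    Certified AI: limited liability, assumed here to be apportioned by
    each party's share of fault.
    Uncertified AI: strict joint and several liability, so any party may
    be pursued for the full amount of the harm.
    """
    if certified:
        return {p.name: harm * p.share_of_fault for p in parties}
    return {p.name: harm for p in parties}  # each may be held fully liable

parties = [Party("developer", 0.6), Party("operator", 0.4)]
print(damages_owed(parties, harm=1_000_000, certified=True))
# {'developer': 600000.0, 'operator': 400000.0}
print(damages_owed(parties, harm=1_000_000, certified=False))
# {'developer': 1000000, 'operator': 1000000}
```

The contrast is visible in the output: under certification, each party's exposure is capped by its share of fault, while uncertified deployment exposes every party to the full quantum of harm, which is exactly the incentive to seek certification that the proposal relies on.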

This framework leverages the strengths of different institutions. Legislators, with their democratic legitimacy, will set broad goals and purposes for AI regulation. The independent agency, staffed by specialists, will handle the technical assessment of AI safety, insulating these decisions from electoral politics. Courts will adjudicate disputes and allocate responsibility for AI-related harms, using their experience in handling individual cases.

  7. ROLE OF THE COURT IN THE REGULATORY FRAMEWORK

In the proposed regulatory framework for artificial intelligence (AI), the role of the court emerges as a key element in ensuring accountability and fairness. Within this framework, courts are entrusted with the responsibility of adjudicating individual tort claims arising from harm caused by AI systems. Leveraging their institutional strength and expertise in fact-finding, courts will navigate complex legal terrain to determine liability and deliver justice. Moreover, courts will play a crucial role in allocating responsibility among the various parties involved in the development, distribution, and operation of AI systems. In cases where uncertified AI is implicated, courts will apply rules of strict liability, holding accountable all entities associated with the creation and deployment of the AI system. This allocation of responsibility ensures that each party bears the appropriate burden for AI-related harm, fostering a culture of accountability within the AI industry.

Disputes are inevitable in such a dynamic and rapidly evolving technological landscape, and the court system stands ready to address them. In particular, disputes may arise concerning the certification status of AI systems or the point at which modifications rendered a system uncertified. Here, the court's role becomes even more critical, as it must navigate complex technical details and legal nuances to arrive at fair and just decisions. Pre-trial hearings will be convened to determine the conformity of the AI system with its certified version, establishing the threshold for liability and delineating the boundary between defendants subject to limited liability and those subject to strict liability (one possible technical aid for such hearings is sketched below). In essence, the court serves as a cornerstone of the regulatory framework for AI, ensuring that legal principles are upheld and justice is served.
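On the narrow question of whether a deployed system still conforms to its certified version, one possible technical aid, offered here as an assumption rather than anything the framework prescribes, is to compare a cryptographic fingerprint of the deployed model artifact against the fingerprint recorded at certification, as in the Python sketch below; the file name and registry value are hypothetical.

```python
# Illustrative conformity check: compare a deployed model artifact's hash
# against the hash recorded when the system was certified (hypothetical files).
import hashlib
from pathlib import Path

def artifact_fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a model artifact, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def conforms(deployed: Path, certified_hash: str) -> bool:
    """True if the deployed artifact matches the certified fingerprint."""
    return artifact_fingerprint(deployed) == certified_hash

# Hypothetical usage: the recorded hash would come from the certifying agency.
# certified_hash = "9f86d081884c7d659a2feaa0c55ad015..."
# print(conforms(Path("grid_model_v2.bin"), certified_hash))
```

A matching fingerprint would not settle liability by itself, and systems that continue to learn after deployment change their weights legitimately, so a static fingerprint would need to be supplemented by logs of post-deployment updates; that is precisely the kind of technical nuance a pre-trial hearing would have to weigh.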

  8. CONCLUSION

The integration of AI into the energy sector presents unprecedented opportunities, but also complex legal challenges. While AI enhances smart grids and revolutionizes oil exploration, it raises questions about accountability and liability. Therefore, regulating AI in energy requires a delicate balance between fostering innovation and mitigating risks, as the current legal framework struggles to address AI's autonomy and secretive development, necessitating a comprehensive regulatory approach. As we move toward an AI-driven future, collaborative efforts to harmonize legal, technological, and ethical considerations are essential. By embracing this approach, we can harness AI's transformative potential in the energy sector while safeguarding against risks and ensuring a fair and sustainable energy future.

Footnotes

1. "Machine learning is a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data." Margaret Rouse, What Is Machine Learning, WHATIS.COM https://www.techtarget.com/searchenterpriseai/definition/machine-learning-ML

2. See, e.g., SAIF Corp. v. Allen, 881 P.2d 773, 782–83 (Or. 1994) (discussing Oregon's rules for interpreting "inexact" and "delegative" statutory terms).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
