In our series of articles looking at the use of Artificial Intelligence (AI) in public procurement, we have been considering both the risks and opportunities that AI can present and how those might be navigated.

AI is an issue which contracting authorities and bidders are increasingly considering in the context of the procurement process and the statutory framework. It is therefore important to look carefully at how its use interacts with the duties on contracting authorities outside of the Public Contracts Regulations 2015 (PCR) and the new Procurement Bill.

Public Law Duties

Alongside procurement challenges brought in the context of an alleged breach of the PCR, it is not uncommon to see challenges brought against the decisions of contracting authorities on public law grounds through judicial review. Contracting authorities are no strangers to their overarching obligations to make decisions in line with their public law duties, ensuring, for example, that decisions are rational and based on sound reasons.

Likewise, we know that public bodies are increasingly using AI in a range of situations, and to make a range of decisions, whether that AI forms a small part of a larger decision-making process, or decisions are wholly automated through the use of AI. With that in mind, we are already seeing examples of challenges to the use of AI in public body decision making, from challenges to the use of algorithms in generating A-level results, to the use of AI in detecting or predicting fraudulent claims for Universal Credit.

So what do public bodies need to consider when the issues of AI, procurement processes and decision making collide?

Risk areas

Public bodies must ensure that their decisions comply with general principles of public law. By adding AI into those decision-making processes without proper thought and consideration, public bodies risk breaching those requirements, making any decision susceptible to challenge by way of judicial review on public law grounds.

Whenever contracting authorities use AI in their procurement processes to make decisions in whole or in part, a number of risks can arise. Broadly speaking, we can expect that the more reliant that contracting authorities are on AI to make those decisions, and the less human involvement, understanding or oversight there is in those decision-making processes, the more significant those risks are likely to be.

Some key risks that are likely to arise include:

Irrationality – a decision made by AI or automated means might be challenged on the basis that it is irrational, being so unreasonable that no reasonable authority could have made it. AI is only as good as the data it has been trained on, so without appropriate checks and balances, how can contracting authorities ensure that the AI is taking into account relevant considerations, and disregarding irrelevant ones, so that a reasonable decision is made?

Bias – the concerns around bias in AI are well documented. If AI is trained on data which is incomplete, or contains existing bias, the use of that AI to make further decisions can see any bias being 'baked in' to the decision-making process, affecting the outcome. This can lead to decisions being in breach of the Equality Act 2010 if they discriminate against particular individuals on grounds of protected characteristics such as race, disability, or religion or belief. It can also lead to decisions which are in breach of a public body's statutory duty to comply with the Public Sector Equality Duty, which may have a much broader application in the contract award process.

Duty to give reasons – aligned with the requirement of transparency under the PCR, public law requirements of fairness may also mean that public bodies should provide reasons for their decisions. Those reasons may be difficult to understand or articulate where AI operates within a 'black box', meaning that those relying on AI have no clear idea of how the AI system makes relevant decisions, and so cannot provide those affected by the decision with meaningful information or reasoning behind it.

Discretion and vires – questions also arise as to the way in which public bodies 'delegate' those decisions to AI and whether, as a result, public bodies have fettered their discretion or acted in some way that is ultra vires – outside of their powers.

Takeaway

The use of AI in decision making by public bodies can give rise to a number of risks, but these are not necessarily insurmountable.

When considering the benefits of using AI in procurement processes, it is vital to take a step back at the planning stage and consider how the use of AI is expected to fit into a contracting authority's wider decision-making process, and the extent to which the contracting authority will be reliant on AI to make decisions – will those decisions be made wholly by automated means, or will the AI provide information on which the contracting authority will then base its decisions?

As part of their consideration of these issues, contracting authorities should ensure that they understand how the AI they are using works, how it makes or feeds into decisions, and the data set on which it has been trained, so that any public law risks associated with its use can be properly considered and mitigated.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.