Artificial intelligence is creeping into every corner of public decision-making, and homelessness law is no exception. The homelessness procedure, which should be a safeguard for vulnerable people, risks being hollowed out by algorithms that prioritise efficiency over fairness. Instead of protecting rights, AI threatens to turn the homelessness process into an empty formality.
Local authorities are under growing pressure to manage homelessness caseloads. Against this backdrop, they may look to artificial intelligence ("AI") to assist with homelessness applications and reviews. While such technology promises efficiency, its use in this setting raises serious concerns.
AI systems often function as "black boxes". If a housing officer relies on automated analysis or decision-support, applicants may be unable to see how conclusions were reached. This undermines the requirement that reviews be transparent and reasoned.
If trained on historical housing data, AI could replicate patterns of discriminatory decision-making, for example by disadvantaging groups already disproportionately affected by homelessness. The homelessness process, which should be a safeguard against unfairness, risks becoming another layer of systemic bias.
The statutory review duty (under section 202 of the Housing Act 1996) requires an officer to reconsider the case actively and independently. Delegating substantive aspects of that judgment to AI risks hollowing out this safeguard. Local authorities remain legally responsible, but the use of AI could obscure where responsibility lies when errors occur.
If an applicant seeks a review of the decision, or appeals it, it may be difficult to identify whether an AI tool influenced the outcome. Without disclosure obligations, courts and advisers could struggle to scrutinise the reasoning, weakening the ability to hold local authorities to account.
The review process is intended as a crucial protection for homeless applicants. Introducing AI into this process risks undermining that safeguard by reducing transparency, embedding bias, and diluting accountability. The danger is that reviews cease to function as a genuine check on flawed decisions and instead become another barrier to justice for applicants.
But the answer is not to ban technology outright; it is to use it responsibly. Safeguards are possible.
What needs to happen?
- Transparency obligations: Applicants should be told if AI tools have been used, and decision-making logic must be accessible to allow scrutiny and challenge.
- Human oversight: Statutory review duties must remain with qualified officers. AI may assist, but ultimate responsibility and judgment must sit with a human decision-maker.
- Bias auditing: Any AI tool must be subject to independent auditing to ensure it does not replicate or exacerbate discrimination (a simple illustration of one such check follows this list).
- Clear accountability: Local authorities should set out how responsibility is retained when AI is used, so errors cannot be hidden behind a "black box".
- Legal reform: Parliament and the courts may need to update procedural rules to ensure that applicants' rights are not eroded by technological shortcuts.
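To make the bias-auditing point concrete, the sketch below shows one very simple check an independent auditor might run: comparing how often an AI-assisted tool recommends refusing applications across demographic groups. It is illustrative only; the field names (`group`, `recommended_refusal`) and the 0.8 threshold (borrowed from the "four-fifths" rule of thumb used in US employment-testing guidance) are assumptions for the sketch, not a prescribed legal or statistical standard, and a real audit would be considerably more thorough.

```python
# Illustrative only: a minimal disparity check an independent auditor might run
# over a log of AI-assisted recommendations. Field names and the 0.8 threshold
# are assumptions for this sketch, not a legal standard.
from collections import defaultdict

def refusal_rates(decisions):
    """Return the proportion of applications recommended for refusal, per group."""
    totals, refusals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        if d["recommended_refusal"]:
            refusals[d["group"]] += 1
    return {g: refusals[g] / totals[g] for g in totals}

def disparity_flags(decisions, threshold=0.8):
    """Flag groups whose favourable-outcome rate falls below `threshold` times
    the best-performing group's rate (a four-fifths-style heuristic)."""
    favourable = {g: 1 - r for g, r in refusal_rates(decisions).items()}
    best = max(favourable.values())
    return {g: rate / best < threshold for g, rate in favourable.items()}

# Example with made-up data: group "B" is recommended for refusal far more often.
sample = (
    [{"group": "A", "recommended_refusal": False}] * 80
    + [{"group": "A", "recommended_refusal": True}] * 20
    + [{"group": "B", "recommended_refusal": False}] * 50
    + [{"group": "B", "recommended_refusal": True}] * 50
)
print(disparity_flags(sample))  # {'A': False, 'B': True} -> group B warrants investigation
```

A flag raised by a check like this would not itself prove discrimination, but it shows the kind of routine, documented scrutiny that independent auditing could require before a tool is used in live casework.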
The homelessness review process exists to protect some of the most vulnerable in society. Used carelessly, AI risks undermining that protection. Used carefully, with transparency and accountability at its heart, it could support — rather than endanger — the fairness of the system.