Register aims to allay fears over ‘racist and biased’ AI tools being used in the UK

Artificial intelligence and algorithmic tools used by central government are to be published on a public register after warnings that they could contain “deep-rooted” racism and prejudice.

Officials confirmed this weekend that tools challenged by campaigners over alleged secrecy and a risk of bias will soon be named. The technology has been used for a range of purposes, from trying to detect sham marriages to rooting out fraud and errors in benefit claims.

The move is a victory for campaigners who have challenged the use of AI in central government ahead of what is likely to be a rapid rollout of the technology across the public sector. Caroline Selman, a senior research fellow at the Public Law Project (PLP), an access to justice charity, said there was a lack of transparency about the existence, details and deployment of the systems. “We need to ensure that public bodies publish the information about these tools, which are being rapidly rolled out. It is in everyone’s interest that the technology adopted is lawful, fair and non-discriminatory.”

In August 2020, the Home Office agreed to stop using a computer algorithm to sort visa applications after it was claimed it contained “deep-rooted racism and prejudice”. Officials suspended the algorithm following a legal challenge by the Joint Council for the Welfare of Immigrants and digital rights group Foxglove.

Foxglove alleged that some nationalities were automatically given a “red” traffic light risk score, and that those people were more likely to be refused a visa. It said the process amounted to racial discrimination.

Last year the department was also challenged over its use of an algorithmic tool to detect sham marriages designed to circumvent immigration controls. The PLP said the tool appeared to discriminate against people from certain countries, with an equality assessment disclosed to the charity showing that Bulgarians, Greeks, Romanians and Albanians were more likely to be referred for investigation.

The government’s Centre for Data Ethics and Innovation, now the Responsible Technology Adoption Unit, warned in a November 2020 report that there were numerous examples of the new technology having “entrenched or reinforced historical biases, or even created new forms of bias or unfairness”.

The centre helped develop an algorithmic transparency recording standard, introduced in November 2021, for public bodies deploying AI and algorithmic tools. It proposed that models which interact with the public or have a significant influence on decisions should be published on a register, or “repository”, with details of how and why they were being used.

So far, only nine records have been published on the repository in three years. None of them covers a model operated by the Home Office or the Department for Work and Pensions (DWP), which between them run some of the most controversial systems.

The previous government said in its February consultation response on AI regulation that departments would be required to comply with the recording standard. The Department for Science, Innovation and Technology (DSIT) confirmed this weekend that departments would now report on their use of the technology under the standard.

A DSIT spokesperson said: “Technology has enormous potential to improve public services, but we know it is important to maintain the right safeguards, including, where necessary, human oversight and other forms of governance.

“The Algorithmic Transparency Recording Standard is now mandatory for all departments, with a number of records to be published shortly. We continue to explore how this can be extended to the public sector. We encourage all organisations to use AI and data in a way that builds public trust through tools, guidance and standards.”

Departments are likely to face further calls to reveal more about how their AI systems work and the steps taken to reduce the risk of bias. The DWP is using AI to detect potential fraud in universal credit claims, and has more tools in development to detect fraud in other areas.

Its latest annual report said it had carried out a “fairness” analysis of the use of AI on universal credit advance claims, which raised “no immediate concerns about discrimination”. The DWP has declined to provide details of that assessment, citing concerns that publication “could allow fraudsters to understand how the model works”.

The PLP is supporting potential legal action against the DWP over its use of the technology. It is urging the department to disclose how the tools are being used and what measures are in place to limit harm. The project has compiled its own register of automated decision-making tools in government, tracking 55 tools so far.