Reinforcing legislation to safeguard data privacy in the age of artificial intelligence is necessary

Strengthening data protection and digital information law so that AI and data serve the needs of individuals and society

The UK government is taking steps to reinforce data protection law in response to growing concerns about the use of automated decision-making (ADM) in sensitive sectors such as employment, finance, and public services. The aim is to prevent harmful AI impacts and to ensure individuals' rights are protected.

The key measures include reinstating strong restrictions on ADM, especially when decisions have significant legal or economic impact. This will limit the use of ADM without meaningful human involvement and explicit safeguards. The reforms will also explicitly prohibit fully automated decisions based on special category data, such as health or ethnicity, unless exceptional safeguards and consent are present.

To ensure human reviewers can effectively contest and overturn harmful AI outputs, the government plans to define and enforce "meaningful human involvement" in AI decision processes. Additionally, transparency and individual rights will be enhanced, giving people the right to be informed about ADM use, to contest decisions, and to seek human reconsideration.

Regulators like the Information Commissioner's Office (ICO) will be given stronger enforcement powers and a clear mandate for ADM oversight, including the power to issue binding guidance and impose penalties for non-compliance. The reforms are also intended to prevent safeguards being watered down through regulatory powers invoked in the name of innovation or economic growth.

Embedding impact assessments and bias mitigation requirements for AI systems used in critical decision areas will help identify and reduce discriminatory or harmful outcomes.

The changes come as critics argue that recent relaxations of ADM restrictions in the UK's Data (Use and Access) Act 2025 (DUA 2025) have weakened protections, reduced individuals' control, and shifted the burden of enforcement onto affected people. DUA 2025 prohibits ADM only when special category data is processed and decisions produce legal or similarly significant effects; otherwise ADM can be used more broadly, provided safeguards are in place.

Experts recommend stronger, clearer definitions of meaningful human review, statutory rights for individuals to challenge AI decisions, and robust regulator powers, including transparency mandates and bias audits. The Government and parliamentarians from all parties are being called on to work with Ada (the Ada Lovelace Institute) on improvements to the Bill to strengthen data protection law for the AI era.

The public is increasingly concerned about an over-reliance on technology affecting people's agency and autonomy. More than half of survey respondents wanted clear procedures for appealing to a human against an AI decision, and nearly as many wanted clear explanations of how AI works.

Examples of unlawful ADM include Deliveroo's 'Frank' platform, which made automated decisions about gig-economy delivery riders and was found unlawful by the Italian Data Protection Authority. The Post Office scandal, in which hundreds of postmasters were prosecuted for theft and fraud on the basis of flawed accounting software, also highlights the potential dangers of relying on automated systems.

Independent research has identified that at present, people affected by automated decisions don't have the right to receive detailed contextual or personalized information about how a decision was reached. The Government's proposed reforms do not specify what 'meaningful human review' should consist of in legislation.

As the dedicated AI and data protection regulator, the ICO will play a crucial role in shaping these safeguards, and draft ICO guidance expected later in 2025 may clarify them further. Article 22 of the UK GDPR largely prohibits solely automated decisions about individuals that have legal or similarly significant effects, requiring meaningful human involvement.

AI tools have been adopted by businesses in most sectors of the UK's economy. Systematic bias, technical failings, or individual circumstances not accounted for by AI systems can lead to unfair outcomes. The Bill removes the prohibition on most types of automated decision-making and creates requirements for organizations using automated systems to have safeguards in place.

In conclusion, strengthening UK data protection law should focus on re-tightening ADM restrictions, clarifying and enforcing human oversight, enhancing transparency and redress rights, and empowering the ICO as the dedicated AI and data protection regulator to prevent harmful AI impacts in sensitive sectors such as employment, finance, and public services.

  1. The UK government's proposed reforms involve embedding impact assessments and bias mitigation requirements for AI systems, emphasizing the need for meaningful human involvement so that reviewers can contest and overturn harmful AI outputs, with the aim of enhancing transparency and individual rights.
  2. Public concern about technology's impact on people's agency and autonomy has led to demands for clear procedures to appeal to a human against an AI decision and for clear explanations of how AI works, reinforcing calls to improve the Bill and to give the Information Commissioner's Office (ICO) a stronger role in enforcing these changes.
