Denmark: New report finds mass surveillance and discrimination in automated welfare state

Denmark’s welfare authority risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalised racial groups through its use of AI tools to flag individuals for social benefits fraud investigations, Amnesty International has said in a new report. 

The report, Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State, details how the sweeping use of fraud detection algorithms, paired with mass surveillance practices, has led people to unwillingly – or even unknowingly – forfeit their right to privacy, and created an atmosphere of fear. 

Udbetaling Danmark (UDK), the welfare authority, has mandated a company, Arbejdsmarkedets Tillægspension (ATP), to administer social benefit payments and fraud control efforts. In turn, ATP has partnered with private multinational corporations, including NNIT, to develop fraud control algorithms tailored to ATP’s specifications.

UDK and ATP use a system of up to 60 algorithmic models purportedly designed to detect social benefits fraud and flag individuals for further investigations by Danish authorities. During its research, Amnesty obtained partial access to four of these algorithms. 

To power these fraud-detection algorithms, Danish authorities have enacted laws that enable extensive collection and merging of personal data from public databases of millions of Danish residents. 

UDK argues that the vast collection and merging of personal data to detect social benefits fraud is ‘legally grounded’. However, Amnesty’s findings show that the enormous amount of data collected and processed is neither necessary nor proportionate.

Describing the terror of being investigated for benefits fraud, an interviewee told Amnesty International: “[It is like] sitting at the end of the gun. We are always afraid.”

UDK and ATP provided Amnesty with redacted documentation on the design of certain algorithmic systems, and consistently rejected Amnesty’s requests for a collaborative audit, refusing to provide full access to the code and data used in their fraud detection algorithms.  

The information that Amnesty has collected and analysed suggests that the system used by UDK and ATP functions as a social scoring system under the EU’s new Artificial Intelligence Act (AI Act) – and should therefore be banned.

UDK has rejected the assessment that its fraud detection system is likely to fall under the AI Act’s social scoring ban, but has not sufficiently explained its reasoning.

Amnesty is urging the European Commission to clarify, in its AI Act guidance, which AI practices count as social scoring, addressing concerns raised by civil society.  

Hellen Mukiri-Smith, Amnesty International’s researcher on artificial intelligence and human rights, said: “The Danish authorities must urgently implement a clear and legally binding ban on the use of data related to ‘foreign affiliation’ or proxy data in risk-scoring for fraud control purposes. They must also ensure robust transparency and adequate oversight in the development and deployment of fraud control algorithms.”
