In the Netherlands, algorithmic discrimination is everywhere according to the Dutch Data Protection Authority

In its 2023 annual report, the Autoriteit Persoonsgegevens (the Dutch Data Protection Authority) expresses dismay at how much algorithmic discrimination it encounters in its oversight work.

In the year that we are still reckoning with the child benefits scandal, the Autoriteit Persoonsgegevens mentions that DUO used a discriminatory algorithm to look for fraud (see here), that the UWV used data in an unlawful way to find fraud, that multiple municipalities used a fraud score card (against better judgment), and that there is a lack of clarity about the Dutch police's use of facial recognition. The chair of the protection authority writes (machine translated):

"This is most likely just the tip of the iceberg. The government's hunger for data seems barely contained. Of course, algorithms and artificial intelligence (AI) can also bring us many benefits, such as more efficient work processes for government organizations. However, as a society, we must always be very alert to the risks of algorithms, including discrimination. This is to ensure that government institutions do not again destroy people's lives and to maintain our rule of law and the protection of fundamental rights."

It is therefore no surprise that the Autoriteit Persoonsgegevens devotes a chapter of its biannual report on the risks of AI and algorithms to profiling in the context of fraud detection. The chapter clearly outlines how unlawful discrimination can arise in algorithms that classify people.

The Autoriteit Persoonsgegevens lists the inclusion of a random sample as a way to mitigate some of the potential problems: the random sample can validate the algorithm, it forces the human reviewers to keep paying attention, and it can help uncover new forms of fraud that aren't part of the algorithm yet (see the sketch at the end of this post). The sample does increase the number of people who are checked without any concrete suspicion, which is a concern in itself.

Unfortunately, the Autoriteit is in no way critical of this perceived need to check for fraud. Can't we create policies that don't require us to make risk models for fraud (e.g. if you give all students the same student grant, then, as DUO, you don't need to check whether they live at home)? And why is there no discussion of the ethics and legitimacy of profiling and classification in general?

You can download the chapter on profiling here (PDF).

See: Algoritmes en discriminatie hebben hoofdrol in privacytoezicht and Rapportage AI- & Algoritmerisico's Nederland (RAN) at Autoriteit Persoonsgegevens.

Image from the report of the Autoriteit Persoonsgegevens.
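To make the idea of a random control sample more concrete, here is a minimal, hypothetical sketch in Python. It is not taken from the report and does not describe how any Dutch agency actually works: it simply assumes a risk model that flags cases above a threshold, and adds a small random control group drawn from the cases the model did not flag.

```python
# Hypothetical illustration only: a risk model flags cases above a threshold,
# and a small random control sample is drawn from the remaining cases so the
# model can be compared against an unbiased baseline.
import random


def select_for_review(cases, risk_score, threshold=0.8, control_fraction=0.05, seed=0):
    """Return (flagged, control): model-flagged cases plus a random control sample."""
    flagged = [case for case in cases if risk_score(case) >= threshold]
    not_flagged = [case for case in cases if risk_score(case) < threshold]
    rng = random.Random(seed)
    n_control = min(len(not_flagged), max(1, int(len(cases) * control_fraction)))
    control = rng.sample(not_flagged, n_control)
    return flagged, control


# Example with made-up data: each case has an id and a dummy risk score.
cases = [{"id": i, "score": random.random()} for i in range(1000)]
flagged, control = select_for_review(cases, risk_score=lambda case: case["score"])
print(len(flagged), "flagged by the model,", len(control), "in the random control group")
```

The point of the control group is that the outcomes of checks on randomly selected people can be compared with the outcomes of checks on model-selected people: if the model is mostly confirming its own biases, that shows up in the comparison. The trade-off, as noted above, is that more people are checked without any concrete suspicion.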

https://racismandtechnology.center/2024/09/16/in-the-netherlands-algorithmic-discrimination-is-everywhere-according-to-the-dutch-data-protection-authority/