Bias in algorithms - Artificial intelligence and discrimination
The report examines the potential for bias in predictive policing and offensive speech detection algorithms, and how such bias can lead to discrimination. It emphasizes the need for comprehensive assessments of algorithms to identify and address bias before such systems are used for decision-making. Regular assessments by both providers and users should be mandatory for high-risk algorithms, and data on protected characteristics may need to be collected to enable the assessment of potential discrimination. Appropriate safeguards should govern the protection and use of this data.
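The report does not prescribe a specific testing method, but one common way to carry out such an assessment is to compare a model's error rates across protected groups. The sketch below is a minimal, hypothetical illustration in Python: it assumes labeled evaluation data for an offensive speech classifier, annotated with a protected characteristic (the made-up `group` field), and flags any group whose false positive rate diverges from the overall rate by more than a chosen threshold. All names, the sample data, and the 0.1 threshold are illustrative assumptions, not part of the report.

```python
from collections import defaultdict

# Hypothetical evaluation records for an offensive-speech classifier:
# each record holds the model's prediction, the true label, and a
# protected characteristic collected for assessment purposes only.
records = [
    {"predicted": False, "actual": False, "group": "A"},
    {"predicted": False, "actual": False, "group": "A"},
    {"predicted": True,  "actual": True,  "group": "B"},
    {"predicted": True,  "actual": False, "group": "B"},
    {"predicted": True,  "actual": False, "group": "B"},
    {"predicted": False, "actual": False, "group": "B"},
]

def false_positive_rate(rows):
    """FPR = false positives / all actually-negative cases."""
    negatives = [r for r in rows if not r["actual"]]
    if not negatives:
        return None  # undefined when a group has no negative cases
    false_positives = sum(1 for r in negatives if r["predicted"])
    return false_positives / len(negatives)

def assess_fpr_disparity(rows, max_gap=0.1):
    """Compare each group's FPR against the overall FPR.

    Returns a dict mapping group -> (fpr, flagged), where flagged means
    the group's FPR deviates from the overall rate by more than max_gap.
    The max_gap threshold is an illustrative choice, not a legal standard.
    """
    overall = false_positive_rate(rows)
    by_group = defaultdict(list)
    for r in rows:
        by_group[r["group"]].append(r)
    results = {}
    for group, group_rows in sorted(by_group.items()):
        fpr = false_positive_rate(group_rows)
        flagged = fpr is not None and abs(fpr - overall) > max_gap
        results[group] = (fpr, flagged)
    return results

for group, (fpr, flagged) in assess_fpr_disparity(records).items():
    status = "DISPARITY" if flagged else "ok"
    print(f"group {group}: FPR={fpr:.2f} [{status}]")
```

A real assessment of the kind the report calls for would of course use far larger samples, several complementary metrics (false negative rates, demographic parity, and so on), and statistical significance testing; the example also makes concrete why protected-characteristic data is needed for such checks, and why, as the report stresses, that data must itself be safeguarded.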