The AI Act is a European Commission proposal to regulate artificial intelligence. The AI Act bans certain AI practices and imposes specific requirements and obligations on providers and users of AI. The goal of these measures is to increase trust in AI, safeguard the safety and fundamental rights of people and businesses, and strengthen investment and innovation throughout the European Union.

What does the AI Act say?

The Act takes a "risk-based approach," distinguishing between AI systems with:
1. an unacceptable risk;
2. a high risk;
3. a limited risk;
4. a minimal risk.
Systems in category 1 are prohibited under the AI Act. This includes AI that violates fundamental rights or poses a clear threat to safety; an example is AI-driven "social scoring." High-risk AI systems (category 2) are allowed under the AI Act, but are regulated: they are subject to mandatory conformity assessment, their training data must meet certain quality requirements, and decisions made by the AI must be explainable. AI systems are considered "high risk" when they are intended to be used as safety components, for example in medical devices, or when they have implications for fundamental rights, for example in education. AI systems in categories 3 and 4 are only subject to transparency obligations.

Amendments

In late 2022, the AI Act proposal was sent to the European Parliament. There, the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) examined the Act. In May 2023, the committees approved several key amendments by a large majority.

AI with unacceptable risk

A key amendment concerns the list of prohibited AI systems (category 1). Parliament considers more AI applications unacceptable than the Commission did:
- "Real-time" remote biometric identification systems in publicly accessible spaces, even when used for law enforcement. Think of live facial recognition.
- Remote biometric identification systems in publicly accessible spaces that are not "real-time," unless strictly necessary to investigate serious crimes and authorized by a judge.
- Biometric categorization using special categories of personal data, such as ethnicity and political affiliation.
- Predictive policing.
- Indiscriminate scraping of biometric data from social media or security camera footage.
High-risk AI

The amendments also broaden the definition of "high risk": AI systems that pose a risk of harm to health, safety, fundamental rights or the environment now fall under it. All such systems must therefore meet the stringent requirements for high-risk AI.