The European Parliament is gearing up for a crucial vote on proposed legislation that could solidify a divergent approach to regulating artificial intelligence (AI) between the European Union (EU), the United States (US), and the United Kingdom (UK). The proposed laws, scheduled to be debated in the coming week, aim to ban certain AI applications and place the responsibility on developers and users to adhere to a risk-based approach.


Among the prohibited applications are real-time remote biometric identification systems in public spaces and biometric categorization systems that group individuals based on gender, race, or ethnicity. The proposals also target predictive policing and emotion recognition systems. These measures reflect increasing concern over generative AI technologies, such as OpenAI’s ChatGPT, which have garnered significant attention since the laws were initially introduced.

Once enacted, the legislation will empower regulatory bodies to impose fines of up to €40 million or 7% of an organization’s global turnover on entities that fail to comply. However, the scope of the legislation goes beyond bans and penalties. It places a significant burden on AI developers and users to ensure compliance with the requirements of a risk-based approach, distinguishing it from the UK’s outcome-based approach. While the US has yet to propose federal legislation specific to AI, the White House has released a Blueprint for an AI Bill of Rights intended as non-binding guidance.

The EU’s proposed legislation outlines a definition of “high risk” in terms of health, safety, and fundamental rights. Developers of systems categorized as high risk will be obligated to comply with a set of mandatory requirements for trustworthy AI and undergo conformity assessment procedures before introducing those systems to the EU market. Additionally, the Commission plans to establish a public EU-wide database for registering standalone high-risk AI applications. For all AI systems, developers will be responsible for assessing the quality and suitability of training datasets, including addressing possible biases.

Academics have expressed concerns about the biases present in publicly available training datasets, some of which contain distressing content. The EU hopes that these regulations will incentivize the creation of high-quality training datasets that possess significant economic value.

Meanwhile, the UK has taken a different path, adopting an outcome-based approach to AI regulation. Unlike the EU’s proposed bans and compliance requirements, the UK’s approach devolves responsibility to existing regulators without introducing new restrictions or penalties. The consultation period for the UK’s proposals will conclude on June 21.

Post-Brexit Britain, emphasizing a “proportionate approach that promotes growth and innovation,” remains optimistic about championing its unique AI regulatory strategy. UK Prime Minister Rishi Sunak recently announced plans for the country to host the first global summit on AI regulation this autumn, inviting “like-minded allies and companies” to collaborate in developing an international framework for the safe and reliable development and use of AI.

The European Parliament is scheduled to engage in debates on the legislation on Tuesday, followed by a vote on Wednesday. The outcome of the vote will significantly impact the future of AI regulation in the EU and potentially shape global AI governance.
