European Commission Publishes Guidelines on Prohibited AI Practices Under the AI Act

The European Commission has released draft guidelines outlining prohibited practices under the Artificial Intelligence (AI) Act. These guidelines aim to ensure the safe and ethical development of AI systems across the European Union by clarifying how the Act's risk-based classification applies in practice. They break down which AI practices pose an unacceptable risk to people’s rights, safety, or democratic processes, and they offer clarity on how the prohibitions will be enforced, helping businesses, developers, and policymakers navigate the evolving AI landscape responsibly.

Key Points:

  • Risk Categories: The AI Act classifies AI systems by risk level, distinguishing prohibited systems, high-risk systems, and systems subject to transparency obligations.
  • Prohibited Practices: The guidelines specifically address harmful practices such as manipulation, social scoring, and real-time remote biometric identification.
  • Non-Binding Guidelines: While these guidelines provide valuable insights into the Commission’s interpretation of the prohibitions, they are non-binding, with authoritative interpretations reserved for the Court of Justice of the European Union (CJEU).
  • EU Commitment: This initiative underscores the EU’s commitment to fostering a safe and ethical AI landscape, promoting innovation while protecting health, safety, and fundamental rights.

Prohibited Practices

The AI Act, specifically Article 5, prohibits certain AI systems that pose an unacceptable risk to fundamental rights and EU values. These include:

  • Manipulation and Deception
    AI systems using subliminal techniques beyond human awareness or purposefully manipulative/deceptive tactics to distort behavior and cause significant harm.
  • Exploitation of Vulnerabilities
    AI systems that take advantage of individuals’ vulnerabilities due to age, disability, or socio-economic status, leading to behavioral distortion and harm.
  • Social Scoring
    AI systems that assign scores to individuals or groups based on social behavior, personality traits, or unrelated personal data, resulting in unjustified or disproportionate treatment.
  • Criminal Risk Prediction
    AI systems that assess or predict a person’s likelihood of committing a crime based solely on profiling, personal characteristics, or personality traits.
  • Mass Facial Recognition & Biometric Categorization
    AI systems that create facial recognition databases using large-scale, indiscriminate data scraping from online sources or CCTV.
    AI categorization systems that infer sensitive attributes like race, political views, religion, or sexual orientation without valid legal justification.
  • Emotion Recognition in Workplace & Education
    AI systems that analyze emotions in schools or workplaces unless strictly necessary for safety or medical reasons.
  • Real-Time Remote Biometric Identification (RBI)
    AI systems for real-time facial recognition in public spaces for law enforcement, except in specific cases like targeted searches for missing persons or prevention of serious threats.

Significance of the Guidelines

The European Commission’s new guidelines provide clarity on how these prohibitions should be interpreted and enforced. They aim to:

  • Ensure uniform and effective enforcement of AI Act Article 5.
  • Offer legal certainty for AI providers and deployers.
  • Strike a balance between fundamental rights protection and technological innovation.
  • Support national regulators in enforcing the law with practical case-by-case assessments.

These guidelines help businesses and developers navigate the evolving regulatory landscape, ensuring AI is used ethically and responsibly while maintaining compliance. As AI continues to advance, staying informed about these regulations is more important than ever. The full guidelines are available from the European Commission.

What This Means: These guidelines are designed to ensure consistent application of the AI Act across the EU, offering stakeholders legal explanations and practical examples to comply with the Act’s requirements. Although the Commission has approved the draft guidelines, they have not yet been formally adopted.
