In February 2025, the European Commission released comprehensive guidelines clarifying key aspects of the EU Artificial Intelligence Act (AI Act), focusing on prohibited AI practices. These guidelines aim to provide clarity on the AI Act obligations that took effect on February 2, 2025, including definitions, AI literacy, and prohibitions on specific AI practices.
Key Aspects of Prohibited AI Practices:
3. Personalized Ads
– Article 5(1)(a) prohibits AI systems that use subliminal, manipulative, or deceptive techniques to materially distort behavior. The guidelines clarify that while AI-driven ad personalization is not inherently manipulative, it must not employ techniques that undermine individual autonomy.
2. Lawful Persuasion
– The guidelines differentiate the unlawful manipulation and exploitation prohibited under Articles 5(1)(a) and (b) from ‘lawful persuasion,’ which relies on transparency, informed decision-making, and adherence to applicable legal frameworks.
3. Vulnerability Exploitation
– Article 5(1)(b) prohibits AI systems that exploit vulnerabilities based on age, disability, or socio-economic situation to materially distort behavior. Examples include AI systems that promote addictive behaviors or that target vulnerable groups with scams or predatory financial products.
4. Profiling and Social Scoring
– Article 5(1)(c) prohibits social scoring: AI systems that evaluate or classify individuals based on their social behavior or personal characteristics, where the resulting score leads to detrimental or unfavorable treatment in unrelated contexts. The guidelines cite unacceptable practices such as using unrelated personal financial data to determine life insurance eligibility, while legitimate scoring conducted under applicable legal frameworks remains permissible.
5. Predictive Policing
– Article 5(1)(d) bans AI systems that assess or predict the risk of an individual committing a criminal offence based solely on profiling or personality traits. While primarily applicable to law enforcement, private actors conducting crime analytics at law enforcement’s request could also fall under this prohibition.
6. Facial Image Scraping
– Article 5(1)(e) prohibits creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. The guidelines clarify that databases not used to recognize persons are out of scope, such as those used to train AI models without identifying individuals.
7. Emotion Recognition in the Workplace
– Article 5(1)(f) prohibits AI systems that infer emotions in workplaces and educational settings, subject to limited exceptions. Because the prohibition covers employees and students rather than customers, systems that track customer emotions in call centers, for instance, fall outside its scope.
Additional Clarifications:
– Market Placement and Usage:
The guidelines define ‘placing on the market,’ ‘putting into service,’ and ‘use,’ covering all stages of an AI system’s lifecycle.
– R&D Exclusions:
AI systems in the research and development stage are generally exempt from the Act until they are placed on the market or put into service.
– General-Purpose AI Systems:
The guidelines emphasize that providers of general-purpose AI systems must take steps to ensure their systems are not used in prohibited ways, highlighting their responsibility to prevent misuse.
These guidelines provide a detailed roadmap for understanding the EU AI Act’s scope and application, ensuring compliance with EU legal standards while fostering responsible AI innovation.
For more updates on AI regulations, follow us at aitechtrend.com.
Note: This article is inspired by content from https://www.insideprivacy.com/artificial-intelligence/european-commission-guidelines-on-prohibited-ai-practices-under-the-eu-artificial-intelligence-act/. It has been rephrased for originality. Images are credited to the original source.
