On the 21st (local time), the European Union (EU) gave final approval to the Artificial Intelligence Act (AI Act), establishing comprehensive regulations for AI. The pioneering law takes a "risk-based" approach, applying stricter rules to AI systems that pose greater risks to society, and it has the potential to become a global standard for AI regulation.

The AI Act aims to promote the development and deployment of safe and reliable AI systems in the EU market by both private and public sectors. It also seeks to protect the fundamental rights of EU citizens while encouraging investment and innovation in artificial intelligence across Europe.

Under the AI Act, AI systems are classified by risk level. Low-risk systems need only comply with minimal transparency requirements, while high-risk AI systems will be permitted but must meet specific requirements and obligations before they can be placed on the EU market.

The law bans certain AI applications outright, including cognitive behavioral manipulation, social scoring, and predictive policing based on profiling, as well as systems that use biometric data to categorize individuals by race, religion, or sexual orientation.

To ensure effective enforcement, an AI Office has been established within the European Commission. Penalties for violating the AI Act will be set at the higher of a percentage of the offending company's global annual turnover for the previous financial year or a predetermined fixed amount. The rules will take effect two years after the law's publication.