Key aspects of the new EU Regulation on Artificial Intelligence (AI Act)
Introduction
Artificial Intelligence (AI) has gone from being an innovative tool to becoming an essential component of our daily lives. From automating tasks to making complex decisions, AI is transforming entire sectors, improving our efficiency and quality of life. However, its rapid development also presents significant ethical challenges and risks. To address these challenges, the European Union has adopted the European Regulation on Artificial Intelligence (AI Act), a regulatory framework designed to ensure the safe and responsible use of AI.
However, this strict regulatory approach in the EU has drawn criticism, as some experts believe it could limit innovation compared to other countries such as the US, which have a more permissive approach.
What is the European AI Regulation?
The AI Act defines an AI system as a machine-based system that operates with varying levels of autonomy and that, from the input it receives, generates outputs such as predictions, recommendations or decisions in pursuit of explicit or implicit objectives. The regulation applies to all providers and users of AI systems operating within the European Union, regardless of whether the system was developed outside the EU. With this regulation, the EU aims to protect both consumers and businesses by promoting ethical and safe technological development.
Exclusions and Limitations
The Regulation excludes certain areas of AI use. Systems used for scientific research, military applications or national security are outside its scope. It also does not apply to public authorities in countries outside the EU or to international organisations cooperating with the EU in judicial or law enforcement matters.
Risk classification: high, medium and low
The regulation establishes a hierarchy of risks for AI systems, classifying them as high, medium or low risk depending on the impact they may have on health, safety or fundamental rights.
- High-risk systems: those that affect critical areas, such as biometric identification, the management of critical infrastructure or personnel selection. These systems are subject to stringent compliance requirements and must be continuously monitored to ensure that they meet the established standards.
- Prohibited Systems: The Regulation also prohibits certain uses of AI that could be harmful to users. This includes systems that use subliminal techniques to manipulate behaviour, exploit vulnerabilities of individuals or generate discriminatory profiles.
Obligations for Providers and Users
AI providers must comply with strict requirements regarding technical documentation, risk management and transparency about how their systems operate. Users are obliged to monitor the use of these systems and to cooperate with the authorities in the event of incidents.
Supervision and Compliance
Each EU Member State must designate notifying authorities and conformity assessment bodies to monitor compliance with the regulation. These bodies are responsible for ensuring safety and overseeing the market so that AI systems operate in accordance with the law.
Transparency and Governance
The AI Act establishes transparency obligations so that users know when they are interacting with an AI system. In addition, a European AI Committee will be set up to guide the implementation and oversight of the regulation, ensuring its correct application across the EU.
Innovation: controlled spaces for development
To encourage innovation, the regulation provides for the creation of regulatory sandboxes, controlled environments where developers can test their systems under the supervision of the authorities. This allows developers to experiment with new technologies within a secure regulatory framework.
Monitoring and Evaluation
AI providers must conduct post-market monitoring to identify and mitigate any emerging risks. They must also report any serious incidents to the authorities, which can take corrective action if necessary.
European Database
A European database will be created in which all high-risk systems will be registered. This database will be accessible to both the public and the authorities, allowing for greater transparency and facilitating regulatory oversight.
Codes of Conduct
The Regulation promotes the development of voluntary codes of conduct to ensure the application of good practices, including in systems that are not considered high-risk. These codes can also address issues such as environmental sustainability and accessibility.
Sanctions for Non-Compliance
The Regulation provides for significant penalties for those who fail to comply with its provisions. Fines can reach up to EUR 30 million or 6% of the company's annual turnover, depending on the seriousness of the breach and the size of the provider.
Conclusion
The European Artificial Intelligence Regulation (AI Act) represents a significant step forward in the regulation of a technology with enormous potential. However, its stringent approach also raises the question of whether over-regulation could be stifling technological innovation in the EU. Compared to the US, where the approach is more flexible, some European companies could be at a competitive disadvantage.
As AI continues to evolve, it will be crucial that regulations strike a balance between protecting fundamental rights and allowing the development of disruptive innovations that benefit society as a whole.
Author: Salvador Cabello Guillén, Attorney (Abogado)
Contact details: +34 652 724 645 | salvador.cabello@lariostreslegal.com
Services: Corporate | Litigation and arbitration | Civil liability | NewTech