
Everything you need to know about the EU AI Act


Adapting to the Changing Ad Tech Environment

First came the General Data Protection Regulation (GDPR); now comes the EU Artificial Intelligence Act (EU AI Act). Once again, the European Union is the first body to develop and enforce a comprehensive regulation, this time for artificial intelligence. The legislation aims to regulate the development, deployment, and use of artificial intelligence (AI) systems within the EU, emphasizing transparency, accountability, and the protection of fundamental rights.


Earlier this year, the members of the European Parliament endorsed the EU AI Act by an overwhelming majority. However, the law has yet to be formally endorsed by the Council. It will enter into force 20 days after its publication in the Official Journal and will be fully applicable 24 months after entry into force, with a few exceptions:


  • Bans on prohibited practices (applicable six months after entry into force)

  • Codes of practice (applicable nine months after entry into force)

  • General-purpose AI rules, including governance (applicable 12 months after entry into force)

  • Obligations for high-risk systems (applicable 36 months after entry into force)


The EU AI Act covers a wide range of AI systems across sectors, although systems developed exclusively for military purposes fall outside its scope. It categorizes AI systems by risk level into four classes: unacceptable risk, high risk, limited risk, and minimal risk.
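To make the four-tier taxonomy concrete, the sketch below represents the risk classes as a simple Python enum. The mapping of example systems to tiers is our own illustration based on the categories described in this article, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk classes defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict pre-market and lifecycle obligations
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no obligations beyond existing law (e.g., GDPR)

# Illustrative (hypothetical) mapping of example systems to tiers,
# based on the categories described in this article.
EXAMPLE_SYSTEMS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} risk")
```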


Unacceptable risk: AI systems considered a threat to people fall into this category and will be banned. Examples of unacceptable-risk systems include:


  • Manipulation of individuals through AI, such as voice-activated toys that encourage dangerous behavior in children

  • Social scoring, i.e., classifying people based on criteria such as socio-economic status or behavior

  • Emotion recognition systems in workplaces and educational institutions

  • Biometric categorization of people based on sensitive characteristics

  • Real-time remote biometric identification systems, such as facial recognition in publicly accessible spaces


However, there are some exceptions for law enforcement purposes: "real-time" remote biometric identification is allowed in a narrow set of cases, and "post" remote biometric identification, where identification occurs after a significant delay, is allowed for the prosecution of serious crimes. Both require court approval.


High risk: AI systems that risk negatively impacting the safety or fundamental rights of individuals are considered high risk. These systems must be assessed before they are placed on the market and continuously reassessed throughout their lifecycle.


Limited/Transparency risk: AI systems designed to interact with natural persons or to generate content carry a risk of impersonation or deception. These systems pose limited risk; examples include systems that interact with humans (e.g., chatbots) and systems that generate or manipulate image, audio, or video content (e.g., deepfakes). They are subject to a limited set of transparency obligations: users must be informed that they are interacting with a chatbot, and AI-generated content must be disclosed as such.
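As a toy illustration of how a provider might operationalize the chatbot-disclosure duty, consider the following sketch. The wording and function are hypothetical; the Act requires disclosure but does not prescribe an implementation.

```python
AI_DISCLOSURE = "You are chatting with an AI system, not a human agent."

def wrap_reply(reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure to the first reply of a conversation."""
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

print(wrap_reply("Hello! How can I help you today?", first_turn=True))
```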

 

Minimal/Low risk: These include common AI systems such as spam filters. These systems will not be subject to further obligations beyond currently applicable legislation (e.g., GDPR).

 

Requirements for High-Risk AI Systems

Because high-risk AI systems can negatively affect the safety and fundamental rights of individuals, they are subject to specific requirements. To start with, they must be registered in the public EU database of high-risk AI systems. Their requirements include:


Data Quality and Governance: High-risk AI systems must use high-quality data, ensuring accuracy, relevance, and fairness. Data governance frameworks must be established to manage data throughout its lifecycle.


Transparency and Explainability: Users must be provided with clear information about the AI system's capabilities, limitations, and decision-making processes. Mechanisms should be in place to enable users to understand how decisions are reached.


Human Oversight and Control: Adequate human oversight is required to monitor AI systems' performance, intervene when necessary, and override decisions. Users should have the ability to control and modify system settings.


Accuracy and Robustness: AI systems must be designed to achieve and maintain a high level of accuracy, reliability, and robustness across diverse scenarios and environments.


Documentation and Record-Keeping: Detailed documentation, including technical documentation, risk assessments, and audit trails, must be maintained throughout the AI system's lifecycle.


Certification and Conformity Assessment: To ensure compliance with the EU AI Act, high-risk AI systems must undergo a conformity assessment procedure conducted by notified bodies. This process evaluates the AI system's conformity with the Act's requirements, which may involve testing, documentation review, on-site inspections, etc. Upon successful assessment, AI systems receive a certificate of conformity, demonstrating their compliance with regulatory standards.
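Several of these requirements, in particular record-keeping and human oversight, imply some form of tamper-evident decision log. The sketch below is one hypothetical way to chain audit-trail entries with hashes; the Act does not mandate any particular format, and all identifiers here are invented.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def log_decision(log: list, system_id: str, inputs: dict, output: str,
                 overridden_by: Optional[str] = None) -> None:
    """Append an audit-trail entry chained to the previous entry's hash."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        "overridden_by": overridden_by,  # records any human intervention
        "prev_hash": log[-1]["hash"] if log else "",
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

audit_log: list = []
log_decision(audit_log, "credit-scoring-v2", {"income": 42000}, "approved")
log_decision(audit_log, "credit-scoring-v2", {"income": 18000}, "declined",
             overridden_by="loan-officer-017")
print(json.dumps(audit_log, indent=2))
```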

 

General Purpose AI (GPAI) and Foundational Models

General-purpose AI systems are AI systems with a wide range of possible uses, both intended and unintended by their developers. They can be applied to many different tasks in various fields, often without substantial modification or fine-tuning.


Under the EU AI Act, there are distinct requirements for GPAI models and those GPAI models that pose systemic risk.


Providers of GPAI models must create and maintain up-to-date technical documentation and make adequate information and documentation available to downstream providers of AI systems. They must also comply with EU copyright law and publish a summary of the content used to train their models.

GPAI models with high-impact capabilities may additionally pose systemic risk. A model is presumed to pose systemic risk when the total compute used to train it exceeds 10^25 FLOPs (floating-point operations), and its provider must notify the European Commission when this threshold is reached. On top of the transparency and copyright obligations above, providers of such models must continuously assess and mitigate systemic risks, including documenting and reporting serious incidents and implementing appropriate control and corrective measures.
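To put the 10^25 figure in perspective, training compute is often estimated with the rough heuristic of about 6 FLOPs per model parameter per training token. The heuristic and the example model size below are our own assumptions, not part of the Act; the threshold constant is the one the Act names.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs named in the Act

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough ~6 * parameters * tokens heuristic for dense transformer training."""
    return 6 * params * tokens

# Hypothetical model: 100 billion parameters trained on 20 trillion tokens.
flops = estimated_training_flops(100e9, 20e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("presumed systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD
      else "below the notification threshold")
```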

 

Enforcement and Penalties

Member states of the EU are responsible for monitoring and enforcing compliance with the Act. Penalties for non-compliance include fines, which for the most serious violations can reach €35 million or 7% of worldwide annual turnover, whichever is higher, as well as withdrawal of certificates or suspension of AI system deployment.
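Because the cap is the higher of the fixed amount and the turnover percentage, large companies face the percentage-based figure. A minimal illustration, with a hypothetical turnover figure:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Cap for the most serious violations: the higher of EUR 35M and 7% of turnover."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion worldwide annual turnover.
print(f"Maximum fine: EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```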


Currently, we are awaiting the formal endorsement of the law by the Council and its eventual entry into force. To comply, companies will need to analyze their current systems for gaps and plan to address those gaps quickly. Transparency and explainability are a crucial focus of this regulation, so companies should start preparing policies, documentation, and disclosures accordingly. AI systems will, without doubt, have to be continuously monitored and assessed for risk, and employees will have to be trained on AI compliance and ethics. We expect this Act to have an impact beyond the EU, as it sets a precedent for what a comprehensive AI regulation can look like, just as the GDPR did.
