Understanding and mitigating the risks of the EU AI Act's 'high-risk' systems
The EU AI Act, set to become the first comprehensive AI regulation of its kind, follows a risk-based approach, categorizing AI systems as unacceptable risk, high risk, limited risk, or minimal risk. AI systems that risk negatively impacting the safety or fundamental rights of individuals are considered high risk. These systems need to be assessed before they are placed on the market, and continuous assessment throughout their lifecycle will also need to be conducted.
Common risks for AI systems include:
Legal risks, such as facial recognition systems that could violate certain laws.
Operational issues and defects with the systems that could lead to biases, incorrect or incomplete information, and errors.
Reputational risks for organizations associated with problematic AI systems, as in the cases of Snapchat's AI chatbot or Air Canada's chatbot.
It's important to carefully determine areas of vulnerability and potential harm. A better understanding of risks, vulnerabilities, and harms is a necessary first step to creating strategies to mitigate these risks.
Complying with the EU AI Act requires businesses that provide or use high-risk systems to follow the mandatory processes laid down by the Act. So, what does it mean to have high-risk systems? And how do businesses go about mitigating the risks of high-risk AI systems? In this article, we will look at different approaches to understanding and mitigating such risks to properly prepare for the upcoming regulation.
Assessment of AI Systems and their risks
Build an Inventory
The first step is taking inventory of the AI systems and understanding the training data sources, the nature of the training data, the inputs and outputs, and the other components in play to gain an understanding of the potential threats and risks. A simple sketch of what an inventory entry might record follows.
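As a rough illustration only, the sketch below captures one inventory entry as a Python record; the field names (system_name, training_data_sources, and so on) and the example system are illustrative assumptions, not fields prescribed by the Act.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemInventoryEntry:
    """One entry in an organization's AI system inventory (illustrative fields)."""
    system_name: str
    business_purpose: str
    training_data_sources: List[str]        # e.g., internal exports, licensed datasets
    training_data_contains_pi: bool         # does the training data include personal information?
    inputs: List[str]                       # data the system consumes at inference time
    outputs: List[str]                      # decisions, scores, or content it produces
    third_party_components: List[str] = field(default_factory=list)  # models, APIs, libraries
    owner: str = "unassigned"               # accountable team or individual

# Example entry for a hypothetical CV-screening tool
cv_screener = AISystemInventoryEntry(
    system_name="cv-screening-model",
    business_purpose="Rank incoming job applications",
    training_data_sources=["historical hiring records"],
    training_data_contains_pi=True,
    inputs=["applicant CV text"],
    outputs=["suitability score"],
    third_party_components=["hosted LLM API"],
    owner="HR analytics team",
)
```

Keeping entries in a structured form like this makes it easier to see at a glance which systems touch personal data or rely on third-party components when the risk assessment begins.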
Perform Risk Assessment
To comply with the EU AI Act, companies need to assess their AI systems to identify whether their risks are unacceptable, high, limited, or minimal. If a company identifies any risks as unacceptable, it will need to stop that AI processing activity, as unacceptable risks are prohibited. If the risk is not prohibited, the organization should rank it and assess its likelihood of harm.
Different laws and frameworks consider different activities high risk. For example, biometric identification and surveillance are considered high risk under the EU AI Act but are not treated that way under the NIST AI Risk Management Framework. So it's important to understand the jurisdiction, the industry, and the organization's risk tolerance as you assign risk levels to AI systems.
Examples of different activities categorized by risk level can be found here.
Frameworks or threat models can be applied to identify possible risks. Several are available, and the appropriate one can be adopted to identify AI risks. Organizations vary in many ways, such as their values and risk tolerance levels, so an organization must consider the context of its own policies and procedures when adopting external AI principles and frameworks.
Scoring of risks
Next, it is important for companies to ascertain the likelihood of each risk occurring. After adopting risk assessment frameworks or threat models, companies need to use the results to score and categorize the risks, and mitigation efforts can then be planned according to how likely each risk is to materialize. For example, the Singapore Model AI Governance Framework suggests using a risk matrix to understand the severity and probability of a risk occurring, which can help determine the level of human involvement required. Companies can also use the National Institute of Standards and Technology (NIST) AI Risk Management Framework, which provides guidelines on the management of risk as part of AI governance and emphasizes tracking metrics to understand the trustworthiness and impacts of AI systems. It might also be helpful to perform an algorithmic impact assessment leveraging existing tools such as a privacy impact assessment (PIA) or a security assessment. Documenting the risks, the assessment, and the steps taken to mitigate them demonstrates accountability and responsibility.
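As a minimal sketch of the risk-matrix idea mentioned above (not an official scoring scheme from the Singapore framework, NIST, or the Act), the following combines a severity and a likelihood rating into a score band that could drive the level of human oversight; the 1-5 scale, thresholds, and band labels are illustrative assumptions.

```python
# Minimal risk-matrix sketch: severity x likelihood, each rated 1-5.
# Thresholds and labels are illustrative assumptions, not prescribed values.

def risk_score(severity: int, likelihood: int) -> int:
    """Combine severity and likelihood (each 1-5) into a single score."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("severity and likelihood must be between 1 and 5")
    return severity * likelihood

def risk_band(score: int) -> str:
    """Map a score to a band used to plan mitigation and human involvement."""
    if score >= 15:
        return "high - human-in-the-loop review required"
    if score >= 8:
        return "medium - human oversight with periodic review"
    return "low - monitor and document"

identified_risks = {
    "biased ranking of applicants": (5, 3),   # (severity, likelihood)
    "inaccurate output on edge cases": (3, 4),
    "model unavailable during outage": (2, 2),
}

for name, (sev, lik) in identified_risks.items():
    score = risk_score(sev, lik)
    print(f"{name}: score={score}, band={risk_band(score)}")
```

Whatever the exact scale, recording the inputs and the resulting band alongside each risk is what produces the documentation trail that demonstrates accountability.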
Auditing and Monitoring
Further, an internal review process will have to be maintained to monitor and evaluate the AI systems for errors, issues, and biases in the pre-deployment stage. Regular AI audits will also need to be conducted to check for accuracy, robustness, fairness, and compliance. These audits also help identify gaps and issues in the functioning of the systems. Developers and those using the AI systems will have to be given adequate training on compliance requirements and ethical practices.
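One concrete check such an audit might include is comparing outcome rates across groups. The sketch below computes a simple selection-rate ratio as one possible fairness indicator; the group labels, sample data, and the 0.8 alert threshold (the commonly cited "four-fifths rule") are illustrative assumptions, not requirements of the Act.

```python
# Illustrative pre-deployment fairness check: compare positive-outcome rates
# across groups and flag large disparities for human review.

from collections import defaultdict
from typing import Iterable, Tuple, Dict

def selection_rates(decisions: Iterable[Tuple[str, int]]) -> Dict[str, float]:
    """decisions: (group_label, outcome) pairs, where outcome is 1 (positive) or 0."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparity_ratio(rates: Dict[str, float]) -> float:
    """Ratio of the lowest group's rate to the highest group's rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample
sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
          ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = selection_rates(sample)
ratio = disparity_ratio(rates)
print(rates, ratio)
if ratio < 0.8:  # illustrative alert level only
    print("Potential disparity - flag for review and document findings")
```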
Considering mitigation efforts
Mitigating AI risks can look different for different companies depending on the AI systems deployed, the industry the business operates in, and the kinds of risks themselves. For example, it might mean not using personal information (PI) in certain models, replacing existing PI with synthetic data, or using other privacy-enhancing technologies (PETs); a minimal sketch of that idea follows the list below. According to the NIST AI RMF, the main categories of managing AI risks include:
Designing, implementing, and documenting strategies that maximize AI benefits and minimize negative impacts. It is important here to obtain input from relevant AI actors in the design of these strategies.
Documenting and monitoring risks as they arise, along with the responses and mitigation strategies applied.
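To make the point about removing or replacing PI more concrete, the following minimal sketch pseudonymizes a direct identifier with a salted hash and swaps other PI fields for synthetic placeholders before a record is used for training. It illustrates the idea only; the field names and salt handling are assumptions, and production PETs (differential privacy, tokenization, purpose-built synthetic data generators) go considerably further.

```python
# Minimal sketch of replacing personal information (PI) before training.
# A salted hash pseudonymizes the identifier; direct PI fields are swapped
# for synthetic placeholders. Field names and salt handling are illustrative.

import hashlib
import secrets

SALT = secrets.token_hex(16)  # in practice, manage and protect the salt separately

def pseudonymize(value: str) -> str:
    """Return a salted hash so records stay linkable within a run without exposing PI."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

def strip_pi(record: dict) -> dict:
    """Replace direct identifiers with pseudonyms or synthetic placeholders."""
    return {
        "customer_id": pseudonymize(record["customer_id"]),
        "name": "SYNTHETIC_NAME",
        "email": "user@example.invalid",
        "purchase_total": record["purchase_total"],   # non-PI feature kept as-is
    }

raw = {"customer_id": "C-1029", "name": "Jane Doe",
       "email": "jane@example.com", "purchase_total": 84.20}
print(strip_pi(raw))
```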
Post-deployment auditing and monitoring
However, complying with the Act does not end once a risk mitigation program is in place. Post-deployment audits and assessments are just as important to ensure that the systems are functioning as required. Regular monitoring of risks and biases is important to identify emerging risks or those that may have been missed previously. It is beneficial to assign responsibility to team members to oversee risk management efforts. As always, human oversight is extremely important in AI systems.
Today, users are more aware of and concerned about their data and privacy than ever before. Awareness of the risks of AI is growing. People want to be able to trust the technology they're using and want to be assured of fair, unbiased, and accurate results. Building trust and transparency when deploying AI systems is crucial.
Organizational Structure and Culture
The success of the AI governance program depends on the organizational structure and culture. Prioritizing this will position an AI governance practitioner for success. It's important to provide knowledge and training to foster a culture that continuously promotes ethical behavior. We can also use and adapt existing privacy and data governance practices for AI management.
As the EU AI Act has not yet been enacted, this is the time for companies to start their compliance efforts. As we already know, avoiding the use of AI is not the recommended solution; instead, companies need to focus on building clean, efficient, and sustainable AI risk management solutions. And there is no better time to build such systems than while preparing to comply with the upcoming regulation.