Takeaways from key enforcement cases around automated decision-making
In December 2023, the privacy world witnessed a landmark ruling on automated decision-making issued by the Court of Justice of the European Union in a case concerning the German credit reference agency SCHUFA. The ruling offers insight into how courts and regulators approach automated decision-making and sets a precedent for similar cases to follow.
Businesses are adopting AI systems at an increasing rate. A recent Forbes Advisor study surveying 600 business owners found customer service to be the most popular application of AI, with 56% of respondents using AI for this purpose. As the use of AI grows, regulatory bodies are ramping up enforcement to ensure that users' privacy is respected. In this article, we look closely at the SCHUFA case as well as several other significant enforcement cases involving AI and automated decision-making to understand the factors that shaped these rulings, fines, notices, and penalties. The analysis can serve as a reference for companies, data privacy and information governance professionals, and anyone planning to incorporate, or already incorporating, AI and automated decision-making into their business processes.
SCHUFA | Court of Justice of the European Union | December 2023
The Court of Justice of the European Union issued a landmark ruling confirming that preparatory acts can be considered individual automated decision-making.
Summary: SCHUFA is a German credit reference agency that uses a probability-based scoring system to provide credit information about individuals to lenders. A German resident, referred to as 'OQ', was denied credit and requested access to information about the automated decision-making process in which SCHUFA used her personal data. She received certain details; however, SCHUFA did not disclose how her score was calculated, citing trade secrets. OQ's complaint was initially rejected by the competent German data protection authority, and on appeal the Administrative Court of Wiesbaden referred the case to the CJEU to determine whether SCHUFA's scoring constituted an automated individual decision under Article 22(1) GDPR.
Outcome: The CJEU identified three conditions that must all be met for automated decision-making under Article 22(1) GDPR:
A decision is made
The decision is based solely on automated processing (including profiling)
The decision produces legal effects concerning the individual or similarly significantly affects them.
The CJEU found that all three conditions were met. SCHUFA had argued that it was engaged only in preparatory acts and that any decision was taken by the lender. The CJEU rejected this argument and held that SCHUFA itself was engaging in automated individual decision-making when producing the score.
Key Takeaways: A credit reference agency engages in automated individual decision-making when it provides credit repayment probability scores produced by automated processing and lenders rely heavily on those scores. The obligation to comply with Article 22 of the GDPR therefore falls on both the credit reference agency and the lender. More broadly, decisions that are heavily shaped by the results of automated processing can themselves be considered automated decision-making, meaning a wider range of automated processes can attract regulatory scrutiny.
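To make the "heavy reliance" point concrete, below is a minimal, hypothetical sketch of a lender-side rule that decides an application using only a third-party score. The function name and threshold are invented for illustration and are not drawn from the judgment.

```python
# Hypothetical sketch only: a lender rule that relies entirely on a
# third-party probability score. The names and threshold are invented.

def lender_decision(external_score: float, approval_threshold: float = 0.7) -> str:
    """Decide a credit application using only an externally supplied score.

    When a lender applies a rule like this, the score provider effectively
    determines the outcome, which is the reasoning behind treating the
    scoring itself as automated individual decision-making.
    """
    return "approve" if external_score >= approval_threshold else "deny"


print(lender_decision(0.55))  # -> "deny", driven entirely by the external score
```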
Link to full resources: Key takeaways from the CJEU's recent automated decision-making rulings (iapp.org), SCHUFA: What does the landmark CJEU ruling on Automated Decision-Making mean for businesses? - Kemp IT Law
Public Employment Service, Austria | Supreme Administrative Court | December 2023
The Public Employment Service in Austria was found to have engaged in automated decision-making by calculating the probability that jobseekers would be employed for a certain number of days.
Summary: The Public Employment Service in Austria helps workers integrate into the labor market through its services, one of which is a counselor discussing labor market opportunities with the jobseeker. To assess those opportunities, an algorithm calculated the probability that a jobseeker would be employed for a certain number of days based on 1) age group, 2) gender, 3) education, 4) health impairments, and other similar factors. Based on the algorithm's output, jobseekers were divided into three groups: service jobseekers with high labor market opportunities, consultancy jobseekers with medium labor market opportunities, and care jobseekers with low labor market opportunities. The results were used to help counselors assess a jobseeker's potential in the job market; the algorithm did not place people in jobs but supported the counselor's decision.
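As an illustration of the kind of scoring-and-segmentation step described above, here is a minimal sketch; the feature encoding, weights, and thresholds are all invented for this example and are not the actual model used by the Public Employment Service.

```python
# Illustrative sketch only: toy scoring and segmentation of jobseekers.
# All coefficients and cut-offs below are placeholders, not real values.

from dataclasses import dataclass

@dataclass
class Jobseeker:
    age_group: int          # e.g. 0 = under 30, 1 = 30-49, 2 = 50+
    female: bool
    education_level: int    # e.g. 0 = compulsory schooling, 1 = apprenticeship, 2 = degree
    health_impairment: bool

def employment_probability(j: Jobseeker) -> float:
    """Toy estimate of the probability of being employed for a given number of days.

    A real system would learn its coefficients from historical data; the
    values below are placeholders.
    """
    score = 0.6
    score -= 0.10 * j.age_group
    score -= 0.05 if j.female else 0.0
    score += 0.08 * j.education_level
    score -= 0.12 if j.health_impairment else 0.0
    return max(0.0, min(1.0, score))

def segment(probability: float) -> str:
    """Map the probability to one of the three groups described in the case."""
    if probability >= 0.66:
        return "service (high labor market opportunities)"
    if probability >= 0.25:
        return "consultancy (medium labor market opportunities)"
    return "care (low labor market opportunities)"
```

Even though a counselor makes the final call, a rule like this hands them a ready-made classification, which is central to the Court's reasoning described below.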
Outcome: Because the counselor's decision, which was influenced by the algorithm's output, determined which group a jobseeker was allocated to, the decision produced a legal effect on the jobseeker. The Supreme Administrative Court therefore found the algorithmic process to be automated decision-making under Article 22 of the GDPR. The Court further found that the training given to counselors using the algorithm's results did not guarantee that they would not ultimately treat those results as the deciding factor when allocating jobseekers to groups.
Key Takeaways: Even though the final decision lies with the counselor, this does not prevent the algorithm's output from being treated as an automated decision under the GDPR. The likelihood that counselors rely heavily on the algorithm's results is an important factor in how the process is classified.
Link to full resource: VwGH - Ro 2021/04/0010-11 - GDPRhub
Air Canada | Civil Resolution Tribunal, British Columbia | February 2024
Air Canada's AI chatbot was found to have misled a bereaved customer.
Summary: A bereaved customer used the chatbot on Air Canada's website for information on its bereavement fares. The customer took a screenshot of the interaction, in which the chatbot stated that Air Canada offered reduced bereavement fares retroactively and explained that the customer could claim the reduced rate by filling out a form. A link to the bereavement fare policy was also included in the message. However, Air Canada's bereavement fare policy does not offer refunds for travel that has already occurred, contradicting the chatbot.
Outcome: The Tribunal found the claim against Air Canada to be a case of "negligent misrepresentation." In its view, the chatbot is part of Air Canada's website and not a separate legal entity, and Air Canada should have taken care to ensure the chatbot was accurate rather than expecting the customer to double-check its outputs. The customer was awarded $812.02 in damages and court fees.
Key Takeaways: Effort should be invested in ensuring the accuracy of chatbots so that wrong or misleading information is not given to users. Chatbots, while interactive, are not separate legal entities; they are part of the company's website.
Link to full resource: What Air Canada Lost In 'Remarkable' Lying AI Chatbot Case (forbes.com)
Snapchat | Information Commissioner's Office | October 2023
Snapchat received a preliminary enforcement notice for its failure to properly assess the privacy risks of its generative AI chatbot.
Summary: Snapchat launched its generative AI chatbot, My AI, in April 2023. The chatbot was meant to act as a friend to users, who could chat with it, ask it for advice, and send snaps to it. The chatbot was found not to be providing age-appropriate responses: it recommended ways to mask the smell of alcohol to a 15-year-old user and shared information with a 13-year-old user on how to prepare for their first sexual experience.
Outcome: The UK ICO's investigation provisionally found that the risk assessment Snapchat conducted did not adequately assess the data protection risks posed by the AI technology. Snapchat has been given the opportunity to respond to the ICO's concerns before any final decision is made.
Key Takeaways: Adequate assessments should be conducted to identify and mitigate the data protection risks posed by generative AI systems, especially those, such as chatbots, that interact directly with users.
Link to full resource: UK Information Commissioner issues preliminary enforcement notice against Snap | ICO
Budapest Bank | Hungarian Data Protection Authority | February 2022
Budapest Bank was fined €700,000 for using automated decision-making and profiling of customers without appropriate legal basis or proper safeguards.
Summary: According to Budapest Bank, it used software that applied speech signal processing to identify periods of silence, keywords, and emotional cues in customer calls in order to detect customer dissatisfaction. Based on this automated evaluation, which profiled customers, bank employees would then call dissatisfied customers back to handle any issues. The Bank cited legitimate interest as its legal basis for the processing, and quality control, prevention of complaints, and increased efficiency as its purposes. The Bank also stated that customers were informed at the beginning of each call that it would be recorded; however, they were not informed that AI systems would be used to analyze the calls.
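For illustration only, the sketch below shows the general shape of such a call-evaluation pipeline: per-call signals are combined into a dissatisfaction score that determines who receives a callback. The signal names, weights, and threshold are invented and are not the bank's actual system.

```python
# Hypothetical sketch of a call-evaluation pipeline: silence, keyword,
# and emotion signals feed a dissatisfaction score used to queue callbacks.

from dataclasses import dataclass

@dataclass
class CallFeatures:
    silence_ratio: float     # share of the call spent in silence, 0..1
    complaint_keywords: int  # count of flagged keywords heard in the call
    negative_emotion: float  # output of an emotion model, 0..1

def dissatisfaction_score(f: CallFeatures) -> float:
    """Combine the per-call signals into a single score between 0 and 1."""
    keyword_component = min(f.complaint_keywords, 5) / 5
    score = 0.4 * f.silence_ratio + 0.1 * keyword_component + 0.5 * f.negative_emotion
    return min(1.0, score)

def callback_queue(calls: dict[str, CallFeatures], threshold: float = 0.6) -> list[str]:
    """Return customer IDs flagged for a callback, most dissatisfied first.

    An employee places the call, but the automated score decides who is
    contacted, which is why such an evaluation can count as automated
    decision-making that influences the decision makers.
    """
    scores = {cid: dissatisfaction_score(f) for cid, f in calls.items()}
    flagged = [cid for cid, s in scores.items() if s >= threshold]
    return sorted(flagged, key=lambda cid: scores[cid], reverse=True)
```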
Outcome: In addition to other violations, including breaches of Articles 9, 12(1), 13, and 21 of the GDPR, the Hungarian Data Protection Authority (NAIH) found that automated decision-making had been carried out. The NAIH held that "it is sufficient if the processing is intended to produce an outcome that influences the decision makers."
Key Takeaways: If the output of software processing is intended to influence decision-makers, the processing can be classified as automated decision-making and must therefore comply with the GDPR's requirements for it.
Link to full resource: NAIH (Hungary) - NAIH-85-3/2022 - GDPRhub
Tax and Customs Administration | Dutch Data Protection Authority | December 2021
The Dutch DPA imposed a fine of €2.75 million on the Tax and Customs Administration for its unlawful and discriminatory processing of personal data.
Summary: The Dutch DPA found that the tax authorities retained and used the dual nationality of Dutch nationals when assessing applications for childcare allowance. The tax administration automatically categorized the risk of certain applications using a self-learning algorithm that treated the applicant's nationality (Dutch/non-Dutch) as an indicator of risk. In other words, childcare allowance applicants were discriminated against on the basis of their nationality, because that personal data was used to assess their applications. The tax authorities said they processed applicants' nationality to combat organized fraud.
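A deliberately simplified, entirely hypothetical sketch can show why using nationality as a risk indicator is discriminatory by construction; every feature and weight below is invented and is not the tax administration's actual model.

```python
# Illustrative sketch only: a toy risk score in which nationality enters
# directly as a feature, which is exactly what the Dutch DPA found unlawful.

def application_risk(income: float, claimed_amount: float, is_dutch_national: bool) -> float:
    """Toy fraud-risk score for a childcare allowance application (illustration only)."""
    risk = 0.3
    risk += 0.2 if claimed_amount > 10_000 else 0.0
    risk -= 0.1 if income > 40_000 else 0.0
    # The problematic step: two otherwise identical applications receive
    # different risk scores purely because of the applicant's nationality.
    risk += 0.25 if not is_dutch_national else 0.0
    return max(0.0, min(1.0, risk))
```

Removing the protected attribute is a necessary first step, but other features can act as proxies for it, so bias testing of a system's outputs remains important.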
Outcome: The DPA found that processing nationality for these purposes, assessing applications for risk and combating organized fraud, was unlawful and therefore prohibited. The resulting discrimination was also found to violate the GDPR, since processing may not infringe fundamental rights such as the right not to be discriminated against. The investigation led the tax authorities to clean up their internal systems, and they have since stopped using applicants' nationality in their risk classification system. The fine was imposed on the Minister of Finance, as he is responsible for the processing of personal data at the Tax and Customs Administration.
Key Takeaways: Measures should be taken to ensure that AI systems that engage in automated decision-making are free from bias and discriminatory processes. There needs to be a proper, lawful basis for processing personal data.
Link to full resource: Tax and Customs Administration fined for discriminatory and unlawful working methods | Dutch Data Protection Authority (DPA) (autoriteitpersoonsgegevens.nl)