The EU AI Act regulates the use of AI in customer service. What should be specifically considered?

EU AI Act: What responsible parties need to know now, Part 2

The EU AI Act provides for the classification of AI systems into various risk categories. Many systems used in customer communication, especially chatbots and voicebots, are expected to be classified as "minimal to no risk" or "limited risk." However, the risk assessment will vary with a system's complexity.

A central aspect of the AI regulation is the categorization of AI systems based on their risk potential for fundamental rights and human dignity. The legal framework of the AI regulation establishes a total of four risk levels:

  • Unacceptable Risks

  • High Risks

  • Limited Risks

  • Minimal Risks/No Risk

Simple bots that provide predefined answers are likely to pose lower risks than complex or universal bots.
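To make the tiering tangible, the four categories can be modeled as a small classification helper. The decision rules below are deliberately simplified assumptions for illustration, not the Act's actual legal criteria, and the feature names are hypothetical:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal to no risk"

def classify_bot(uses_prohibited_practice: bool,
                 in_high_risk_area: bool,
                 interacts_with_humans: bool) -> RiskTier:
    """Toy classifier: maps coarse features of a customer-service bot
    to one of the four risk tiers (greatly simplified)."""
    if uses_prohibited_practice:   # e.g. manipulative techniques
        return RiskTier.UNACCEPTABLE
    if in_high_risk_area:          # e.g. access to essential services
        return RiskTier.HIGH
    if interacts_with_humans:      # chatbots, voicebots: transparency duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A simple FAQ chatbot: no prohibited practice, not in a high-risk area,
# but it talks to customers, so transparency obligations apply.
print(classify_bot(False, False, True).value)  # limited risk
```

In practice the classification depends on the full criteria in the regulation, but even a toy model like this is useful for a first inventory of a company's bots.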

Presumed low risk for general-purpose AI
As mentioned earlier, user interactions with AI systems must meet transparency requirements. Therefore, any General Purpose AI (GPAI) used to generate text, audio, images, or video will need to provide users with transparent information about the applied AI system. Since the development of AI systems in customer service is moving towards GPAI-based systems to effectively respond to diverse customer concerns and provide individualized responses, many of these more modern systems are expected to be classified into this category. However, there may still be use cases where AI systems in customer service are classified as high-risk AI systems if they meet the specified criteria in the regulation for the listed application areas. Presumably, though, this will be less common.

Customer service: Specific compliance requirements
The AI regulation establishes comprehensive compliance requirements for high-risk AI systems and calls for transparency obligations from providers for AI systems with limited risk, especially those associated with a lack of transparency in AI usage. This means that many AI systems in customer service will need to inform users that they are communicating with an AI system. With these regulations, the EU aims to enable users to make an informed decision about whether to continue with the AI or withdraw from the communication. This also means that companies employing such systems will need to create mechanisms for their customers to seamlessly transition between bots and human representatives.
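The two obligations described above, disclosing the AI and offering an exit to a human, can be sketched as a thin wrapper around any bot backend. All names here are illustrative; `bot_reply_fn` stands in for whatever system generates the answers:

```python
AI_DISCLOSURE = ("You are chatting with an AI assistant. "
                 "Type 'agent' at any time to reach a human.")

def handle_message(message: str, first_turn: bool, bot_reply_fn) -> str:
    """Wrap a bot's reply so the user is informed they are talking to an AI
    (on the first turn) and can always escalate to a human representative."""
    if message.strip().lower() == "agent":
        return "Transferring you to a human representative..."
    reply = bot_reply_fn(message)
    if first_turn:
        return f"{AI_DISCLOSURE}\n{reply}"
    return reply

# Example with a trivial echo-style bot:
print(handle_message("Where is my order?", True, lambda m: "Let me check."))
```

The point of the sketch is that disclosure and handover belong in the conversation layer, so they work regardless of which AI model sits behind it.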

Obligations for high-risk AI systems
Companies developing, distributing, or using high-risk AI systems will be subject to significantly more extensive obligations before they are allowed to bring the corresponding AI system to the market. These requirements include the following in particular:

Risk assessment and risk mitigation: Providers of high-risk AI systems must conduct a thorough risk assessment to identify potential hazards. Based on this assessment, appropriate measures must be taken to minimize or control risks.

Data quality: It is expected that high-risk AI systems will use high-quality datasets to minimize the risk of malfunctions and discriminatory outcomes. The quality of the data fed into the system is crucial for performance and safety. AI systems must not make discriminatory or unfairly preferential decisions. It must be ensured that the system treats all customers equally fairly and does not exhibit any biases or prejudices.
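One simple way to check whether a system treats customer groups equally is to compare positive-decision rates across groups, a minimal form of a demographic-parity check. This is an illustrative sketch, not a complete fairness audit:

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """Compute the positive-decision rate per group from
    (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups.
    A large gap is a signal to investigate the training data."""
    return max(rates.values()) - min(rates.values())

# Hypothetical decision log: group label and outcome per customer.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates_by_group(decisions)
print(rates, parity_gap(rates))
```

Real bias testing involves many more metrics and legal judgment, but continuous checks of this kind make data-quality problems visible early.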

Logging and traceability: Appropriate mechanisms are necessary to log the activities of the AI system. These enable traceability of the results and facilitate review in case of malfunctions or complaints.
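A minimal logging mechanism of this kind can be sketched as an append-only log of structured records, one JSON line per interaction, each with a unique reference for later complaints or reviews. The field names are assumptions for illustration:

```python
import io
import json
import time
import uuid

def log_interaction(logfile, user_input: str, model_output: str,
                    model_id: str) -> str:
    """Append one AI interaction as a JSON line so results stay
    traceable and reviewable after malfunctions or complaints.
    Returns the record's unique reference id."""
    record = {
        "id": str(uuid.uuid4()),    # unique reference for complaints
        "timestamp": time.time(),   # when the interaction happened
        "model": model_id,          # which system produced the output
        "input": user_input,
        "output": model_output,
    }
    logfile.write(json.dumps(record) + "\n")
    return record["id"]

# Demonstration with an in-memory buffer instead of a real file:
buf = io.StringIO()
ref = log_interaction(buf, "Where is my order?", "It ships tomorrow.", "bot-v1")
print(ref in buf.getvalue())  # True
```

In production the same idea applies with tamper-evident storage and retention policies; the essential point is that every output can be traced back to its input, model, and time.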

Documentation and transparency: Providers of high-risk AI systems must provide comprehensive documentation containing all relevant information about the system and its purpose. This allows authorities to verify compliance with regulations, and users can understand the system and assess its impact.

User information: Clear and adequate information should enable users to understand the functioning of the AI system and interact with it appropriately. This is particularly important to minimize the risk of misunderstandings or misinterpretations. AI systems in customer service must be transparent so that users understand decision-making and functionality. This may include providing clear explanations of how the system operates and its decisions, as well as disclosing training data and algorithms.

Human supervision and control: High-risk AI systems must be designed so that humans can adequately supervise and control them. This is intended to minimize the risk of malfunctions. Human supervision can also help detect and correct potentially problematic decisions made by the system.

Robustness, security, and accuracy: High-risk AI systems must meet high standards of robustness, security, and accuracy to minimize unintentional harm and be protected against unauthorized manipulation. Such systems must operate reliably and stably to avoid unexpected failures or security vulnerabilities. This may require the implementation of security measures such as encryption, access controls, and data protection measures.

Privacy protection and data processing: Customer data must be processed and protected in accordance with applicable data protection regulations. Therefore, AI systems must ensure that personal data is handled confidentially and only used for the intended purposes.

Challenges in technology and operations will cost money
Adapting existing AI systems to meet the requirements of the law poses both technical and operational challenges. Technical adjustments such as altering algorithms and data processing procedures are necessary to comply with legal requirements. Additionally, companies must review and update their data management processes to meet privacy standards. Compliance with data protection regulations may also require investments in privacy technologies and training. Demands for transparency and explainability may even necessitate a redesign of systems to make their operation more understandable.

Comprehensive risk assessment is also necessary to examine AI systems for potential compliance risks. Integrating new legal requirements into operational processes, in turn, requires internal policy adjustments and training for employees. Companies must implement internal control mechanisms to monitor and document compliance. All these adjustments involve significant costs and resources and require careful planning as well as collaboration among different business areas and interest groups. Involving external legal advisors may also be necessary to interpret and implement complex legislation. Moreover, companies need to allocate resources for risk management to identify and minimize potential risks. In simple terms, compliance is labor-intensive and can become quite expensive.

Data privacy and security
Data privacy and security play a crucial role in the AI regulation, as they serve to protect individuals' privacy and rights and build trust in AI systems. The AI regulation and the GDPR will coexist, but there will be interfaces where both regulations are applicable. This will be particularly relevant when personal data is used in the development or testing of an AI system, or when personal data is processed in the use of artificial intelligence. In these cases, providers must ensure compliance with both regulations. Article 15 of the AI regulation outlines specific requirements for high-risk AI systems regarding "accuracy, robustness, and cybersecurity." At the same time, AI systems with lower risk will also be measured against these requirements should incidents related to AI systems occur.

Opportunities for innovation, improvement, and development
Strengthening trust and reliability in AI-driven customer service solutions is crucial for businesses relying on AI to optimize their customer service. By providing high-quality and transparent AI-driven solutions, companies can enhance customer trust and continuously improve the customer experience. This includes implementing robust data privacy and security measures, clear communication about the use of AI, and ensuring smooth interaction between AI and human customer service representatives.

The development of new AI technologies within the regulatory framework offers companies the opportunity to comply with the AI regulation. This requires a risk and quality management system that enables companies to assess new technologies early and monitor the quality of the AI system and its data.

At the same time, the deployment of new AI technologies naturally poses additional risks. Adjustments to the annex of the AI regulation are explicitly provided to also include novel technologies in the category of high-risk AI systems. General AI is already linked to risky systems in the AI regulation, so stricter regulation of these technologies is indeed possible. However, for companies, this regulatory framework provides the opportunity to establish or utilize an infrastructure that can accommodate any new AI technology and any tightening of regulation.

By establishing a risk and quality management system, implementing comprehensive documentation requirements, and operating AI systems within an infrastructure where recording the activities of AI systems is always ensured, records can be continuously monitored and regularly verified by employees. In this way, companies are also equipped for the deployment of high-risk AI systems. Companies that are already making such adjustments today and adapting their organizational structure, processes, and operational infrastructure for AI are also prepared for future developments in new AI technologies and adjustments to the AI regulation.

Start preparing now!
The introduction of the AI regulation is imminent. Therefore, companies should now begin by inventorying and assessing their AI systems. This forms the basis for classifying the AI systems into one of the predefined categories according to the regulation. It is also important to document the use of General Purpose AI (GPAI) in the AI systems to meet the transparency requirements for GPAI systems.

Risk assessment management systems and continuous monitoring of risks in the use of AI are essential components for taking appropriate measures. At the same time, such management systems also provide the opportunity to identify and reduce potential risks for the company, even for systems classified as not particularly risky. A data quality management system is indispensable for all companies that want to adapt AI systems for their individual business activities. Training, testing, and validation data must meet appropriate quality criteria to comply with ethical principles.

Supporting tools and resources for compliance
For the deployment of high-risk AI systems, an operational environment with the necessary tools and resources is required to meet the requirements of the AI regulation. This primarily includes an infrastructure where inputs and outputs between AI systems and users, as well as between AI systems and other interacting systems (databases, CRM systems, etc.), are recorded. These records must be continuously analyzed and evaluated, and human assessment is required for deviations from thresholds. AI Gateways, which enable the secure use of AI, are particularly helpful.
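The "human assessment for deviations from thresholds" step can be sketched as a simple triage function over the logged records. The threshold values and field names below are illustrative assumptions, not prescribed by the regulation:

```python
def needs_human_review(record, max_latency_s=5.0, min_confidence=0.7):
    """Return the reasons a logged interaction should be flagged for
    human assessment; an empty list means no review is needed.
    Thresholds here are placeholders for operational policy."""
    reasons = []
    if record.get("latency_s", 0.0) > max_latency_s:
        reasons.append("latency above threshold")
    if record.get("confidence", 1.0) < min_confidence:
        reasons.append("low model confidence")
    if record.get("user_escalated"):
        reasons.append("user requested a human")
    return reasons

# A slow but confident response is flagged for latency only:
print(needs_human_review({"latency_s": 7.2, "confidence": 0.9}))
```

Running such checks continuously over the recorded inputs and outputs is what turns raw logs into the monitored operating environment the regulation calls for.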

With an AI Gateway, every company can quickly and securely harness the power of conversational AI for its specific use case. An AI Gateway acts as a selection and monitoring gatekeeper between the application and the AI model, checking each customer request against the applicable data protection and compliance requirements. Only then is the request sent to the most suitable AI model. This ensures data protection according to requirements and transparent costs. AI Gateways enable companies to harness the potential of conversational AI quickly, securely, and in compliance with regulations – even in light of the EU AI Act!
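In its simplest form, the gatekeeper idea is a routing decision per request: screen the input, then choose a model that is cleared for the kind of data it contains. This sketch uses e-mail addresses as a stand-in for real PII detection, and the model names are hypothetical:

```python
import re

# Very crude personal-data detector: only spots e-mail addresses.
# A real gateway would use proper PII detection and redaction.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def route_request(text: str) -> str:
    """Minimal gateway sketch: requests containing personal data are
    routed only to a model cleared for it; everything else may use
    the general-purpose model."""
    if EMAIL.search(text):
        return "pii-cleared-model"
    return "general-model"

print(route_request("Please update jane@example.com"))  # pii-cleared-model
print(route_request("What are your opening hours?"))    # general-model
```

Combined with the logging and review mechanisms above, a gateway of this kind gives one central place to enforce and document compliance decisions per request.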


Sirke Reimann
VIER Chief Information Security Officer