
Interview with Harald Henn: What does Agentic AI do for customer service?
Last updated: 27.05.2025 16:10
Agentic AI is on the verge of profoundly changing customer service and fundamentally reshaping the interaction between companies and customers. While chatbots merely react, Agentic AI can act. What does this mean for customer service?
Agentic AI refers to autonomous AI systems that act independently, make decisions and perform complex tasks without constant human instruction. While traditional chatbots are largely based on predefined scripts and respond only reactively to input, Agentic AI goes far beyond this. These autonomous systems can not only communicate with customers in real time but can also overcome language barriers through real-time translation and even exchange information with each other. This technology also harbors risks, however: large language models (LLMs), which form the core of Agentic AI systems, can produce errors and hallucinations, which can lead to completely incorrect customer interactions.
Many possibilities, many tasks
A key advantage of Agentic AI lies in its ability to adapt communication and offers to individual customer needs in real time. Unlike traditional automation solutions, Agentic AI can understand and respond to nuanced customer requests over longer interactions. As a result, an AI agent in customer service can not only answer enquiries reactively but also act on them autonomously.
Agentic AI processes multimodal input, i.e. text, speech, images, data and videos, and uses advanced cognitive layers such as semantic, episodic and procedural memory to adapt actions to the respective situation.
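To make this architecture more concrete, the following minimal Python sketch shows how such memory layers might be wired into an agent. All class names, fields and the intent stub are illustrative assumptions for this article, not the API of any particular framework; in a real system the intent would come from an LLM rather than a keyword check.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticMemory:
    """Long-term facts, e.g. product and policy knowledge."""
    facts: dict = field(default_factory=dict)

    def lookup(self, key: str):
        return self.facts.get(key)

@dataclass
class EpisodicMemory:
    """Record of past interactions with this customer."""
    episodes: list = field(default_factory=list)

    def remember(self, event: str):
        self.episodes.append(event)

@dataclass
class ProceduralMemory:
    """Learned procedures, e.g. how to process a return."""
    procedures: dict = field(default_factory=dict)

    def procedure_for(self, intent: str):
        # Unknown intents fall back to a human handover.
        return self.procedures.get(intent, "escalate_to_human")

class CustomerServiceAgent:
    def __init__(self):
        self.semantic = SemanticMemory(facts={"return_window_days": 30})
        self.episodic = EpisodicMemory()
        self.procedural = ProceduralMemory(procedures={"return": "start_return_flow"})

    def handle(self, modality: str, payload: str) -> str:
        # Stubbed intent detection; a production agent would use an LLM here.
        intent = "return" if "return" in payload.lower() else "unknown"
        self.episodic.remember(f"{modality}: {payload}")
        action = self.procedural.procedure_for(intent)
        window = self.semantic.lookup("return_window_days")
        return f"action={action}, return_window={window} days"

agent = CustomerServiceAgent()
print(agent.handle("text", "I want to return my order"))
```

The point of the layering is that each memory type answers a different question: what is true (semantic), what happened with this customer (episodic), and how to act (procedural).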
Beware of wrong decisions and hallucinations
However, despite, or perhaps precisely because of, the promising potential applications, Agentic AI in customer service harbors risks that companies should carefully consider before implementing it. The increasing autonomy of these systems reinforces existing challenges and creates new problems at the same time. Large language models (LLMs), which form the core of Agentic AI systems, are susceptible to so-called hallucinations: false information that is generated and presented in a convincing manner. The impact of such misinformation can be significant, especially if the generated content is taken to be trustworthy. In critical application areas such as medicine, law or research, these misrepresentations can lead to momentous wrong decisions. What is particularly problematic is that the autonomy of Agentic AI can amplify these existing risks.
Who takes responsibility?
Such wrong decisions raise fundamental ethical questions about accountability when the information provided is misleading or damaging. Who is responsible for wrong decisions and the damage they cause: the developer, the company, or the AI system itself? Humans must therefore remain the central control authority, supported by clear governance guidelines and continuous monitoring.
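One way to keep humans as that control authority is to gate agent actions behind explicit policy checks. The sketch below, again with purely hypothetical names and thresholds, routes low-confidence or high-impact decisions to a human reviewer instead of executing them autonomously; the actual thresholds would come from a company's governance guidelines.

```python
from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str          # e.g. "issue_refund"
    amount_eur: float    # monetary impact of the action
    confidence: float    # model confidence, 0.0 to 1.0

# Assumed governance thresholds; set by company policy in practice.
CONFIDENCE_FLOOR = 0.85
AMOUNT_CEILING_EUR = 100.0

def route(decision: AgentDecision) -> str:
    """Execute autonomously only if the decision is confident and low-impact."""
    if decision.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human: low confidence"
    if decision.amount_eur > AMOUNT_CEILING_EUR:
        return "escalate_to_human: high monetary impact"
    return f"execute: {decision.action}"

print(route(AgentDecision("issue_refund", 25.0, 0.95)))   # executed autonomously
print(route(AgentDecision("issue_refund", 250.0, 0.95)))  # escalated: amount too high
print(route(AgentDecision("issue_refund", 25.0, 0.60)))   # escalated: confidence too low
```

Logging every routed decision, whether executed or escalated, also gives the continuous monitoring that such governance guidelines call for.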
Author:

Harald Henn
Managing Director
Marketing Resultant GmbH
Harald Henn is Managing Director of Marketing Resultant GmbH in Mainz. He sees himself as a navigator for digital customer service, optimizes business processes in sales, service and marketing using lean management methods and offers best practice consulting for call centers and CRM projects.