
Why 80 percent of AI projects fail. And how to do it better.

Last updated: 24.09.2025 10:00

Artificial intelligence is considered one of the greatest opportunities of our time. It promises cost reductions, process automation and new customer experiences. Studies point to potential cost reductions of up to 40 percent in customer service. By that logic, the clear majority of companies in Germany should have taken off with AI long ago. But the reality looks different.

Despite available technologies, mature large language models and a growing infrastructure, up to 80 percent of AI implementations fail. This is not due to the technology but to the execution, and projects do not fail at the end but right at the beginning. As a result, AI projects remain stuck in pilot status instead of delivering real business value. This "execution gap" is the biggest challenge for companies. Here are a few practical tips on how to close it.

Typical stumbling blocks and tips on how to avoid them

Fear of regulatory violations

One of the main problems, especially for companies in Germany, is the typical "German pondering", even though AI is developing far too fast for that. Many companies also remain in a state of shock because they fear violating existing or upcoming AI regulations. Transparency and documentation obligations act as an additional stumbling block because of the effort they involve, or at least appear to involve.

Tip:

  • Rely on compliance by design, using tools that log AI activities and automatically generate transparency reports (a minimal sketch follows after this list).

  • Use AI gateways to automatically anonymize data and take regulatory requirements into account.

  • Use clear governance guidelines to define responsibilities.
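
To make compliance by design a little more concrete, here is a minimal sketch in Python of what activity logging with an automatically generated transparency report could look like. The log file, the field names and the report structure are illustrative assumptions, not the interface of any particular tool.

import json
from collections import Counter
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # assumed location of the audit trail

def log_ai_call(model: str, purpose: str, uses_personal_data: bool) -> None:
    """Append one structured audit record per AI interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "purpose": purpose,
        "uses_personal_data": uses_personal_data,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def transparency_report() -> dict:
    """Aggregate the audit log into a simple report for documentation duties."""
    with open(AUDIT_LOG, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    return {
        "total_calls": len(records),
        "calls_with_personal_data": sum(r["uses_personal_data"] for r in records),
        "calls_per_purpose": dict(Counter(r["purpose"] for r in records)),
    }

log_ai_call(model="example-llm", purpose="customer_service_reply", uses_personal_data=True)
print(transparency_report())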

Concern for data security

Companies do not want to and should not hand over sensitive customer data to external AI models in an uncontrolled manner. The fear of such data leaks or misuse blocks many initiatives. However, secure access is possible without sacrificing the benefits of AI.

Tip:

  • Use privacy management, for example with VIER AI Gateway, to pseudonymize your customer data before it reaches the AI models (see the sketch after this list).

  • Controlled interfaces allow you to keep data within your company.

  • And train your employees in the correct use of AI tools to prevent shadow AI.
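
To illustrate the principle behind such privacy management, the following sketch pseudonymizes obvious identifiers before a prompt leaves the company and restores them in the model's answer afterwards. It is not the VIER AI Gateway implementation; the regular expressions and the placeholder format are simplified assumptions, and a production gateway covers far more categories of personal data.

import re

# Assumed, deliberately simple patterns for two kinds of identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace identifiers with placeholders and remember the mapping locally."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Put the original values back into the model's answer."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

safe_prompt, mapping = pseudonymize("Please reply to anna.mueller@example.com, phone +49 170 1234567.")
print(safe_prompt)  # identifiers are replaced by <EMAIL_0> and <PHONE_0>
# ... send safe_prompt to the external model, then restore(model_answer, mapping)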

Insufficient data quality

An AI is only as good as the quality of the training data. "Garbage in, chaos out" applies more than ever. Poor or unstructured data makes the results worthless.

Tip:

  • Invest in data governance and data structuring, and check data quality continuously, not just at the start of the project (a simple check is sketched after this list).

  • Start with use cases based on existing, good quality data and build from there.
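
A continuous data-quality check does not have to be elaborate. The following sketch measures completeness and the duplicate rate of a batch of records before they feed an AI use case; the field names and the threshold are assumptions chosen purely for illustration.

def data_quality_report(records: list[dict], required_fields: tuple[str, ...]) -> dict:
    """Measure completeness and duplicate rate of the records feeding the AI."""
    total = len(records)
    complete = sum(all(r.get(f) not in (None, "") for f in required_fields) for r in records)
    unique = len({tuple(sorted(r.items())) for r in records})
    return {
        "completeness": complete / total if total else 0.0,
        "duplicate_rate": 1 - unique / total if total else 0.0,
    }

records = [
    {"customer_id": "c1", "request": "invoice question"},
    {"customer_id": "c1", "request": "invoice question"},  # duplicate record
    {"customer_id": "c2", "request": ""},                  # incomplete record
]
report = data_quality_report(records, required_fields=("customer_id", "request"))
print(report)
# Run this on every data delivery, not once; fail loudly when a threshold is breached.
assert report["completeness"] >= 0.6, "data quality below threshold, investigate before training"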

Lack of system compatibility

Many companies work with legacy systems that are not accessible via API. The result: the AI can understand the customer's request, but fails to retrieve the necessary data from the systems or to trigger actions.

Tip:

  • Make APIs a priority in your digital strategy!

  • Build bridges between existing systems and new AI applications (see the adapter sketch after this list).

  • Do not plan AI projects in isolation, but rather integrated into the existing system landscape.
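
The following sketch shows what such a bridge can look like when the legacy system offers nothing better than a nightly CSV export (an assumption made here for illustration): a thin adapter wraps that export in one clean function the AI application can call as a tool. The file name, the columns and the order-status use case are hypothetical.

import csv
from pathlib import Path

# Assumed nightly export from the legacy system; created here so the sketch runs standalone.
LEGACY_EXPORT = Path("legacy_orders_export.csv")
LEGACY_EXPORT.write_text("order_id;status;updated\n4711;shipped;2025-09-22\n", encoding="utf-8")

def get_order_status(order_id: str) -> dict:
    """Clean interface the AI application calls; the legacy plumbing stays hidden behind it."""
    with LEGACY_EXPORT.open(encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter=";"):
            if row["order_id"] == order_id:
                return {"order_id": order_id, "status": row["status"], "updated": row["updated"]}
    return {"order_id": order_id, "status": "unknown"}

# The assistant resolves "Where is my order 4711?" by calling the adapter,
# not by being pointed at the legacy system directly.
print(get_order_status("4711"))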

Useless AI output

Even in 2025, models still tend to hallucinate or give answers that do not match the tone of your brand. This undermines trust and acceptance.

Tip:

  • Introduce validation mechanisms that check every AI response before it is delivered (e.g. LLM as a Judge with VIER Evaluation); see the sketch after this list.

  • Define guardrails that control the tone, content and format of the responses.

  • Incorporate human-in-the-loop mechanisms where necessary.
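
The following sketch shows the pattern of such a validation step: cheap rule-based guardrails run first, then a second model acts as judge and scores the draft answer before it may be sent. The judge is passed in as a plain callable and stubbed out so the example stays runnable; the banned phrases, length limit and score threshold are assumptions, not the actual checks of VIER Evaluation.

from typing import Callable

BANNED_PHRASES = ("guaranteed", "legal advice")  # assumed guardrail examples
MAX_LENGTH = 800                                 # assumed length limit in characters

def validate_answer(answer: str, judge: Callable[[str], float], threshold: float = 0.7) -> bool:
    """Return True only if the draft answer may be delivered to the customer."""
    if len(answer) > MAX_LENGTH:
        return False
    if any(phrase in answer.lower() for phrase in BANNED_PHRASES):
        return False
    # LLM as a Judge: a second model scores factuality and brand tone between 0 and 1.
    return judge(answer) >= threshold

def dummy_judge(answer: str) -> float:
    # Stand-in for a real judge model call; always approves for demonstration purposes.
    return 0.9

draft = "Thank you for reaching out. You can change your contract in the customer portal."
if validate_answer(draft, judge=dummy_judge):
    print("send:", draft)
else:
    print("escalate to a human agent")  # human-in-the-loop fallback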

Resistance from employees

Many things can be solved technically, but projects fail if the workforce does not come on board. The reasons for a lack of acceptance are usually fear of job loss and the feeling of being ignored.

Tip:

  • Involve your workforce in the process right from the start and develop solutions together with the teams.

  • Demonstrate how AI relieves the teams, what opportunities it creates and what added value it offers.

  • Provide secure and transparent access, e.g. with VIER GPT, so that employees can use AI productively and get to know it without risk. This prevents harmful "shadow AI" practices.

Conclusion: AI projects don't fail because of the technology

The biggest risk for companies lies not in the AI technology itself, but in data, processes, compliance and culture. Those who address these stumbling blocks early on can realize the promised efficiency gains and turn AI into a real value driver instead of ending up in the statistics of failed projects. After all, sleeping through AI puts jobs, and ultimately the company as a whole, at risk.

Tips & recommendations for action

  • See regulation as an opportunity to set standards.

  • Ensure data security through data protection-compliant access to AI.

  • Invest in data quality and API strategies.

  • Control AI output through validation.

  • Create employee acceptance through involvement and transparency.

  • Use VIER AI Gateway.

Author: Daniel Krantz, Vice President AI Solutions, VIER
