
Preventing shadow AI: 6 tips

Last updated: 09.10.2025 10:00

How companies can roll out generative AI step by step in a scalable and data-protection-compliant way and prevent the uncontrolled use of AI in-house.

Artificial intelligence is a real game changer: it enables companies to offer new services, understand their customers better, increase efficiency, speed up processes and thus boost their competitiveness. None of this works without security, control and clear rules, however, because like any other tool, AI must be used wisely; tinkering with it aimlessly is a recipe for trouble. One such trouble spot is so-called shadow AI: the use of AI tools by employees without the company's permission, and often without the company even being aware of it. In the process, sensitive information, for example about products, company internals or even customer data, is frequently entered into AI solutions without a second thought.

"With many AI tools, the data is openly passed on to the generative AI or its manufacturer, for example in the USA. Even with guardrails, i.e. the guardrails that regulate the use of an AI solution within certain specifications, companies are dependent on the manufacturers and cannot impose any restrictions," explains Rainer Holler, CEO of VIER. This can lead to a significant loss of control and massive reputational damage. Companies must rule out these risks when using AI and instead integrate it responsibly into existing processes.

Establishing future-proof AI technologies – here's how!

Tip 1: Plan the use of AI strategically

The use of AI must be driven and backed by company management; it is not purely an IT project. Accordingly, companies should draw up an AI policy that also regulates who may use AI for which processes and how this use is monitored.

Tip 2: Provide employees with practical training

Much can be solved technically, but it will not work without the workforce. Employees should therefore be given the opportunity to get to know AI without risk, and they need information and knowledge about its use and its risks. For example, they should know which sensitive data they must not enter into public chatbots or other AI tools. Awareness training that also explains the opportunities of generative AI and conveys best practices is well suited for this.

Tip 3: Ensure data security

Personal data and internal company information must not be entered into generative AI applications. They can still be used, however, if sensitive data (names, addresses, bank details, etc.) is automatically detected and pseudonymized before it is sent to the AI. Business-critical data thus remains on the in-house server and is never passed to the LLM; the LLM works only with the pseudonymized context. Once the AI has returned its response, the placeholders are resolved again and the data is available to the company in clear form. AI gateways have established themselves as a valuable tool for this.
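
To illustrate the idea, here is a minimal sketch in Python of such a pseudonymization step in front of an LLM. It is not VIER's actual gateway logic; the detection patterns, the placeholder scheme and the send_to_llm() call are assumptions made purely for the example.

import re

# Very simplified patterns; a real gateway would use far more robust detection
# (named-entity recognition, dictionaries, IBAN checksums, etc.).
PATTERNS = {
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def pseudonymize(text):
    """Replace sensitive values with placeholders; the mapping never leaves the company."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

def depseudonymize(text, mapping):
    """Resolve the placeholders in the AI response back to the original values."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

prompt = "Draft a reply to max.mustermann@example.com regarding IBAN DE89370400440532013000."
safe_prompt, mapping = pseudonymize(prompt)
# answer = send_to_llm(safe_prompt)          # hypothetical call to the external model
# print(depseudonymize(answer, mapping))     # clear text is restored only in-house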

Tip 4: Reliably comply with regulatory requirements

Companies must define where data is processed and stored. They must also check whether their AI provider is GDPR-compliant, for example by operating servers in the EU or in Germany, and whether the provider supports the use of AI in accordance with the requirements of the EU AI Act. This also includes guardrails as a central tool for describing and enforcing policies. AI gateways are helpful here as well.
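
As a rough illustration of how a gateway can describe and enforce such policies, the following Python sketch uses a simple allow-list approach. The policy fields, model names and the check_request() helper are assumptions for the example, not the configuration format of a specific product or of the EU AI Act.

# A simplified, declarative policy as a gateway might enforce it.
POLICY = {
    "allowed_regions": {"EU", "DE"},                      # where the provider may process data
    "allowed_models": {"internal-llm", "eu-hosted-gpt"},  # approved models only
    "blocked_topics": {"salary data", "source code"},     # content that must never leave the company
}

def check_request(model, region, prompt):
    """Return (allowed, reason); block requests that violate the policy."""
    if region not in POLICY["allowed_regions"]:
        return False, f"provider region '{region}' is not permitted"
    if model not in POLICY["allowed_models"]:
        return False, f"model '{model}' is not approved"
    for topic in POLICY["blocked_topics"]:
        if topic in prompt.lower():
            return False, f"prompt touches blocked topic '{topic}'"
    return True, "ok"

print(check_request("eu-hosted-gpt", "EU", "Summarize the quarterly salary data"))
# -> (False, "prompt touches blocked topic 'salary data'")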

Tip 5: Ensure access control

AI profiles that organize and control access to AI models, role-based permissions and clear guidelines make it possible to govern the use of AI models for entire departments as well as for individual users. Employees then work with AI tools that are not only practical for them but also approved. At the same time, the costs incurred can be allocated precisely and usage can be documented in line with compliance requirements. This puts a stop to shadow AI.
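
The following sketch shows, in simplified form, how a role-based check against such AI profiles could work; the role names, model names, budgets and the usage log are invented for the example and do not describe a specific product.

import datetime

# Hypothetical AI profiles: which roles may use which models, and with what limits.
AI_PROFILES = {
    "support":   {"models": {"summarizer", "faq-bot"},    "monthly_token_budget": 200_000},
    "marketing": {"models": {"copywriter", "summarizer"}, "monthly_token_budget": 500_000},
}

USAGE_LOG = []  # in practice this would go to an audit-proof store for cost allocation

def request_model(user, role, model, tokens):
    """Allow the call only if the role's profile covers the model, and log the usage."""
    profile = AI_PROFILES.get(role)
    if profile is None or model not in profile["models"]:
        return False, f"model '{model}' is not approved for role '{role}'"
    USAGE_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user, "role": role, "model": model, "tokens": tokens,
    })
    return True, "ok"

print(request_model("a.mueller", "support", "copywriter", 1200))
# -> (False, "model 'copywriter' is not approved for role 'support'")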

Tip 6: Ensure transparency & traceability

Under the EU AI Act, companies must document in detail which prompts and outputs were used in business-critical processes. In addition, they must be able to justify decisions based on AI results and define clear responsibilities for the final sign-off.
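
In very simplified form, such a documentation record could look like the following sketch; the field names and the audit_log store are assumptions chosen for illustration, not requirements quoted from the EU AI Act.

import datetime, hashlib, json

audit_log = []  # in practice: an append-only, tamper-evident store

def record_interaction(process, prompt, output, model, approved_by):
    """Document which prompt and output were used in which process, and who signed off."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "process": process,
        "model": model,
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),  # or full text, depending on policy
        "output": output,
        "approved_by": approved_by,   # clear responsibility for the final decision
    }
    audit_log.append(entry)
    return entry

record_interaction(
    process="credit pre-check",
    prompt="Summarize the applicant's pseudonymized payment history.",
    output="No late payments in the last 24 months.",
    model="eu-hosted-gpt",
    approved_by="j.schmidt (team lead)",
)
print(json.dumps(audit_log[-1], indent=2))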

Getting started together

Conclusion: Whether for evaluations, summaries, presentations, assistance functions and much more, generative AI is changing everyday business life and improving individual work steps. However, sensitive customer data and internal information must not be handed over to external AI models in an uncontrolled manner. At the same time, employees want to, and should be able to, get to know and simply use AI. Both are possible if companies pay attention to a few points and introduce AI step by step.

    Author:

    Susanne Feldt

    Corporate Communications

    VIER

    Further information

    More about VIER AI Gateway
