AI tips
Our tips for using AI more efficiently


Omni-modal AI can do more than just text
Technological developments follow one another in rapid succession, and LLMs are no exception: what used to be purely text-based language models are now omni-modal and can also process images, audio and more.

Secret prompts for deeper insights
AI is more than just a tool for quickly answering questions – with the right prompts, it can uncover deeper patterns and thought processes.

AI tip: It all depends on the context
Large language models (LLMs) are currently outdoing each other not only in raw performance, but also in the size of their context windows. But what does that actually mean?
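The context window is, roughly speaking, the maximum number of tokens a model can take into account at once – prompt, conversation history and answer combined. The Python sketch below is purely illustrative (the 8,000-token budget and the four-characters-per-token estimate are assumptions, not any specific model's limits) and shows what happens when a conversation outgrows the window:

    # Illustrative sketch: trimming a chat history so it fits into a context window.
    # The 8,000-token budget and the 4-characters-per-token estimate are rough
    # assumptions for demonstration purposes, not real model parameters.

    CONTEXT_WINDOW_TOKENS = 8_000


    def estimate_tokens(text: str) -> int:
        """Very rough token estimate (about four characters per token in English)."""
        return max(1, len(text) // 4)


    def fit_into_context(messages: list[str], reserved_for_answer: int = 1_000) -> list[str]:
        """Keep the most recent messages that still fit into the context window."""
        budget = CONTEXT_WINDOW_TOKENS - reserved_for_answer
        kept: list[str] = []
        for message in reversed(messages):   # walk from newest to oldest
            cost = estimate_tokens(message)
            if cost > budget:
                break                        # older messages no longer fit
            budget -= cost
            kept.append(message)
        return list(reversed(kept))          # restore chronological order


    history = ["(imagine a long conversation turn here)"] * 1_000
    print(f"Turns that still fit into the window: {len(fit_into_context(history))}")

Everything that no longer fits is simply invisible to the model – which is why ever larger context windows matter for long documents and long-running conversations.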

Test LLMs first, then use them successfully
Comprehensive testing is crucial if companies want to use large language models (LLMs) safely and effectively: models that have not been thoroughly tested can deliver incorrect or biased results.
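What such a test can look like in its simplest form is sketched below: a small evaluation harness that runs a fixed set of prompts against the model and checks each answer. The ask_llm() function is a hypothetical placeholder for whatever model access you actually use.

    # Minimal evaluation sketch: run a fixed set of test prompts and check the answers.
    # ask_llm() is a hypothetical placeholder for the model you actually want to test.

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("Replace this with a call to your LLM of choice")


    TEST_CASES = [
        # (prompt, check applied to the model's answer)
        ("What is the capital of France?",
         lambda answer: "paris" in answer.lower()),
        ("Answer only with 'yes' or 'no': Is 7 a prime number?",
         lambda answer: answer.strip().lower().rstrip(".") in {"yes", "no"}),
    ]


    def run_evaluation() -> None:
        passed = 0
        for prompt, check in TEST_CASES:
            answer = ask_llm(prompt)
            ok = check(answer)
            passed += int(ok)
            print(f"{'PASS' if ok else 'FAIL'}: {prompt}")
        print(f"{passed}/{len(TEST_CASES)} checks passed")

A real test suite would add domain-specific questions, bias probes and adversarial inputs, and would be rerun after every model or prompt change.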

AI & RAG for really smart bots
How can chatbots and voicebots access up-to-date and internal knowledge? This is where retrieval-augmented generation (RAG) comes into play.
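The idea in a nutshell: instead of relying only on what the model memorised during training, the bot first retrieves matching passages from its own knowledge base and hands them to the model as context. The Python sketch below is deliberately simplified – it uses naive word overlap instead of a real vector search, and ask_llm() is a hypothetical placeholder for the actual model call:

    # Simplified RAG sketch: retrieve relevant passages, then generate with them.
    # Real systems use embeddings and a vector database instead of word overlap;
    # ask_llm() is a hypothetical placeholder for the actual model call.

    KNOWLEDGE_BASE = [
        "Support is available Monday to Friday from 8 am to 6 pm.",
        "Returns are accepted within 30 days with the original receipt.",
        "Our headquarters are located in Zurich.",
    ]


    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("Replace this with a call to your LLM of choice")


    def retrieve(question: str, top_k: int = 2) -> list[str]:
        """Rank knowledge-base passages by naive word overlap with the question."""
        question_words = set(question.lower().split())
        ranked = sorted(
            KNOWLEDGE_BASE,
            key=lambda passage: len(question_words & set(passage.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]


    def answer(question: str) -> str:
        """Build a prompt from the retrieved passages and let the model answer."""
        context = "\n".join(retrieve(question))
        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return ask_llm(prompt)

Because the knowledge base can be updated at any time, the bot can answer questions about current and internal information without retraining the model.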

Let prompts prompt for you! How AI helps with AI usage
If you've ever wanted to know how to give an AI better instructions or ask better questions, we recommend getting help from an AI!
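A minimal sketch of this idea: wrap your rough prompt in a meta-prompt that asks the model to improve it before you actually use it. As before, ask_llm() stands in for whatever model call you use, and the wording of the meta-prompt is only an example.

    # Meta-prompting sketch: let the model rewrite your rough prompt before you use it.
    # ask_llm() is a hypothetical placeholder for the actual model call.

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("Replace this with a call to your LLM of choice")


    def improve_prompt(rough_prompt: str) -> str:
        """Ask the model to turn a rough prompt into a clearer, more complete one."""
        meta_prompt = (
            "You are an experienced prompt engineer. Rewrite the prompt below so that "
            "it clearly states the task, the target audience, the desired format and "
            "the expected length. Return only the improved prompt.\n\n"
            f"Prompt: {rough_prompt}"
        )
        return ask_llm(meta_prompt)


    # Example usage (once ask_llm has been wired up to a real model):
    # better_prompt = improve_prompt("Write something about our new product")
    # final_answer = ask_llm(better_prompt)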

Protect personal data!
One urgent security issue at the moment is the implementation of the EU AI Act – and with it the protection of personal data through anonymization and pseudonymization. Tip: there is technical support for this.
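What such technical support can look like, reduced to a minimal sketch: personal data is replaced by placeholders before a text leaves the company, and a local mapping allows the original values to be restored in the model's answer. The regular expressions below are simplified examples, not a complete PII detector.

    # Pseudonymization sketch: replace personal data with placeholders before a text
    # is sent to an external model, and keep the mapping locally for later restoration.
    # The regular expressions are simplified examples, not a complete PII detector.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
    }


    def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
        """Replace detected personal data with placeholders such as <EMAIL_1>."""
        mapping: dict[str, str] = {}
        for label, pattern in PATTERNS.items():
            for i, match in enumerate(pattern.findall(text), start=1):
                placeholder = f"<{label}_{i}>"
                mapping[placeholder] = match
                text = text.replace(match, placeholder)
        return text, mapping


    def restore(text: str, mapping: dict[str, str]) -> str:
        """Re-insert the original values into the model's answer."""
        for placeholder, original in mapping.items():
            text = text.replace(placeholder, original)
        return text


    masked, mapping = pseudonymize(
        "Please reply to max.muster@example.com or call +41 44 123 45 67."
    )
    print(masked)  # personal data has been replaced by placeholders

Only the masked text is sent to the external model; the mapping never leaves your own systems.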
