Lessons from AI in the real world
AI is hot. Tools like ChatGPT and Copilot have rapidly taken hold in the workplace, with employees eagerly experimenting with them for tasks such as rewriting emails, drafting texts, or running quick analyses. But what happens when companies want to use these same tools on complex datasets or even in strategic processes? Then AI proves far more difficult than it first seems. To bridge the gap between promise and practice, Peter, a business analyst at Equalminds, answers five pressing questions about AI in Agile projects. Discover our lessons from AI!
Why doesn’t AI work as a ready-made solution?
“One of the biggest misconceptions I see in companies is that AI is a simple, plug-and-play solution,” says Peter. “Many customers think a tool will solve their problem automatically. In practice, it doesn’t work that way. It requires a well-thought-out process in which you know exactly what you want the tool to investigate, how your datasets are structured, and which limitations you need to monitor.”
These limitations are real. AI tools aren’t infallible and operate largely statistically: they choose the answer that seems most likely, not necessarily the one that is correct. An anecdote illustrates this. “A locally trained model once recommended plugging a USB cable into a boiler… That was purely the result of spurious correlations in the dataset, and of course completely useless in practice.” Peter’s point stands: AI doesn’t create value on its own. It only succeeds if organizations approach the technology critically and put the right preconditions in place.
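The “most likely, not most correct” behaviour can be sketched in a few lines. In this toy illustration (all answers and probabilities are invented for the example), a model that greedily picks the highest-probability completion will happily return a nonsense answer if spurious correlations in its training data made that answer frequent:

```python
# Toy illustration with hypothetical numbers: a language model picks the
# most probable continuation, not the factually correct one.
candidate_answers = {
    "connect the boiler to the mains supply": 0.41,
    "plug the USB cable into the boiler": 0.47,  # inflated by spurious correlations
    "call a certified technician": 0.12,
}

# Greedy decoding: simply take the argmax over the model's probabilities.
most_likely = max(candidate_answers, key=candidate_answers.get)
print(most_likely)  # the statistically likely answer, not the sensible one
```

The point of the sketch: nothing in the selection step checks correctness, only frequency-derived probability, which is exactly why curated data and human review remain preconditions.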
Which preconditions or pitfalls determine whether AI succeeds in companies?
“When companies want to integrate AI into their processes, they often run into the same stumbling blocks,” Peter explains. “The first point to consider is the legal context: which tools are you actually allowed to use, and how do you handle sensitive data? With cloud solutions, information is processed externally, which immediately raises questions about GDPR and privacy. There’s also the infamous black-box problem: AI can reach conclusions without it being clear how. For organizations that prioritize transparency and accountability, this poses a real risk.” Data quality and context also play a crucial role. An AI tool only delivers value if the input is reliable and consistent. “If one dataset talks about ‘cars’ and another about ‘automobiles,’ they look the same. But in a business context, such a nuance can make a world of difference,” says Peter. Finally, he points to the margin of error: “AI simply doesn’t capture all the context and therefore remains susceptible to incorrect conclusions, especially with complex decisions. Anyone who underestimates this risks making the wrong choices, with major consequences.”
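The data-consistency point can be made concrete with a small sketch. The datasets, labels, and counts below are invented for illustration: two sources that use different words for the same concept silently fail a naive comparison until terminology is normalized first.

```python
# Hypothetical example: two datasets label the same concept differently.
fleet_db = {"automobile": 120, "truck": 35}   # source A's vocabulary
insurance_db = {"car": 118, "truck": 35}      # source B's vocabulary

# A naive key comparison misses the overlap entirely:
naive_matches = fleet_db.keys() & insurance_db.keys()
print(naive_matches)  # only 'truck' matches; 'automobile' and 'car' never line up

# A curated synonym map restores the link; real projects need maintained mappings.
synonyms = {"automobile": "car"}
normalized = {synonyms.get(key, key): value for key, value in fleet_db.items()}
matches = normalized.keys() & insurance_db.keys()
print(matches)  # now both concepts match
```

In practice this mapping work is rarely a one-liner, which is why Peter counts data quality among the preconditions rather than the afterthoughts.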
Does AI fit within your Agile way of working?
“At Equalminds, we work according to the Agile methodology, which revolves around speed, short iterations, and the principle of a Minimum Viable Product,” says Peter. “AI, on the other hand, requires care and extensive testing. One wrong answer can have far-reaching consequences, meaning you can’t simply launch with a half-finished product. That makes the combination of AI and Agile challenging. Yet, the iterative nature of Agile provides a good framework for developing AI projects.” Peter emphasizes that success hinges on managing expectations. “Many clients think AI will instantly solve their problems. In reality, AI isn’t the end goal, but one of the possible tools we employ. It requires research, testing, and adjustment. This is precisely what Agile stands for, but with extra vigilance. It’s up to us as analysts, product owners, and project managers to clarify this nuance and guide clients through the process. Only then will you responsibly realize the promise of AI within Agile projects.”
Should you choose cloud tools or customized models?
“That depends entirely on the context,” Peter explains. “Cloud solutions like Copilot or ChatGPT have the advantage of being quickly deployable and delivering immediate value. The disadvantage is that data processing happens externally, which raises questions about privacy and GDPR. For some organizations, this is an acceptable risk; for others, it’s a complete dealbreaker.” Custom or local models like OracleAI, on the other hand, offer much more control. They can be trained specifically on company data and run in an environment the organization manages itself. This provides transparency and reliability, but also requires more preparation, expertise, and resources. So it’s not a matter of better or worse, but of making the right choice based on the needs, the maturity, and the risks an organization is willing to take.
How do you keep costs and risks under control?
“Many companies underestimate the financial implications of AI. At first glance, a tool seems cheap and easy to use, but behind the scenes it requires enormous computing power and energy. Moreover, AI costs add up quickly: subscription fees keep rising, and the energy costs for local models are significant. A wrong choice can quickly cause costs to spiral out of control while the added value remains limited,” warns Peter.
“That’s why it’s crucial to proceed consciously and strategically. AI isn’t a free magic bullet, but an investment that only pays off if you clearly define in advance what it should deliver and how it will be deployed. Anyone who skips this exercise not only runs the risk of high costs, but also of ending up with projects that ultimately add little value.”
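A back-of-the-envelope projection makes the “define what it should deliver” exercise tangible. Every figure below (request volume, per-request price, flat monthly cost) is a hypothetical placeholder, not real vendor pricing; the point is the shape of the comparison, not the numbers.

```python
# Hypothetical cost comparison: pay-per-use cloud API vs. a flat-fee local setup.
requests_per_month = 50_000   # assumed workload
cost_per_request = 0.002      # assumed API price per request, in EUR
local_flat_cost = 400.0       # assumed monthly hardware + energy, in EUR

api_cost = requests_per_month * cost_per_request
print(f"Cloud API: EUR {api_cost:.2f}/month vs. local: EUR {local_flat_cost:.2f}/month")

# Break-even volume: above this many requests per month, the local model is cheaper.
break_even = local_flat_cost / cost_per_request
print(f"Break-even at {break_even:,.0f} requests/month")
```

Even a rough calculation like this forces the question Peter raises: what volume and value justify the investment, and at what point does each option stop being the cheap one?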
In short, AI can deliver enormous value, but success doesn’t come easily. It requires critical choices, thorough preparation, and realistic expectation management. Only by investing in data quality, transparency, and the right guidance will AI become more than just hype. For companies that take that step, it offers the opportunity to permanently embed AI as added value in their Agile projects.



