
Your AI Is Only as Smart as Your Data: 7 Mistakes Leaders Make When Combining AI and Analytics

March 12, 2026


Artificial intelligence can transform your business, but only if you understand its limits. Statistician Katharina Schüller (EO Munich) explains the traps that trip up even savvy entrepreneurs.


Katharina Schüller (EO Munich) is the founder and CEO of STAT-UP Statistical Consulting & Data Science. Her book, “Data is Power: Informed Decision Making in the Time of AI,” is currently available in German.

A few years ago, I spent several days walking alongside customs officers at German airports, observing how they worked. I watched them check passengers, scan luggage, and make split-second decisions under intense pressure. They faced time constraints, potential danger, and physical demands that left me exhausted by day’s end.

After watching them check five or six passengers, I could not remember how many they had processed. Neither could they. Yet later, back in their offices, these officers had to fill out statistics from memory. Was it 13 checks that day? 14? 15?

"Using data and AI well is a mindset: a habit of questioning what the numbers mean, how they were produced, and where they might mislead."

Meanwhile, a risk analytics team in another building was poring over those numbers, trying to understand patterns. “Today we had 13, yesterday we had 15 — what happened?” They had no idea how much uncertainty was baked into the data simply because of how it was generated.

This is the problem I see over and over again when companies try to leverage data and AI: They do not understand the data they are working with. Using data and AI well is a mindset: a habit of questioning what the numbers mean, how they were produced, and where they might mislead. In the age of generative AI, that problem has become far more dangerous because AI is very good at sounding confident, even when it is wrong.

Here are seven common mistakes I see leaders make with their data, and how to avoid them.

Mistake No. 1: Trusting Data Without Questioning It

Garbage in, garbage out. You have heard it before, but the principle is more important than ever. Any algorithm is only as good as the data it’s trained on.

Leaders need to question the quality of their data, whether it is survey responses, sensor readings, ERP system exports, or text scraped from emails and documents. Is it representative? Is it biased? Is data missing?

A common error is what statisticians call non-response bias. In a survey, the people who respond are often systematically different from those who don’t. You can’t draw conclusions about the silent majority based on the vocal minority.
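As an illustration, here is a minimal simulation, with entirely hypothetical numbers, of how non-response bias works: if unhappy customers are more likely to answer a satisfaction survey, the survey average drifts below the true average even though every respondent answers honestly.

```python
import random

random.seed(42)

# Hypothetical example: 1,000 customers would rate a product 1-10,
# but unhappy customers (score <= 5) are twice as likely to answer
# the survey as happy ones. The vocal minority drags the average down.
population = [random.randint(1, 10) for _ in range(1000)]

def responds(score: int) -> bool:
    """Response probability depends on the score itself."""
    return random.random() < (0.6 if score <= 5 else 0.3)

responses = [score for score in population if responds(score)]

true_mean = sum(population) / len(population)
survey_mean = sum(responses) / len(responses)
print(f"true mean:   {true_mean:.2f}")
print(f"survey mean: {survey_mean:.2f}")
```

The survey mean lands noticeably below the true mean. Nothing about the answers is wrong; the bias comes entirely from who chose to answer.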

The same issue applies to machine data. We had a logistics client whose containers were supposed to be scanned upon entry to the wash center. However, when workers were stressed or rushed, some containers were not scanned at all, skewing all the downstream data.

People assume that humans might lie when asked questions but that machines record the truth. Objects can lie, too. Data does not speak for itself. You must understand the context: how it was gathered, what it represents, and what is missing.

Mistake No. 2: Skipping Human Review

Think of generative AI as an intern. It is eager, fast, and confident, but prone to mistakes. You would not let a new hire submit a report to a client without checking it first. The same applies to ChatGPT.

AI hallucinates. It invents citations to academic papers that don’t exist. Even when citations are real, research shows AI often summarizes them incorrectly. And because AI often skews toward what is easiest to retrieve—summaries, reposts, and sources that may not be the most authoritative—the information it provides can be incomplete or misweighted.

And yet, the AI sounds authoritative. It does not hedge or express uncertainty the way a human expert would. Worse, even when you challenge it, it may insist it is right. If you are not skilled enough in the subject matter to catch mistakes, you won’t know what you are missing.

There is the famous case of attorneys who submitted court filings citing rulings that simply did not exist; the AI had hallucinated them. Do not let that be you. Always verify.

Mistake No. 3: Asking the Wrong Questions

With the rise of data scientists and machine learning engineers, there are many skilled professionals who can analyze large datasets and build sophisticated algorithms. What often gets lost is asking whether the data is suitable to answer the question in the first place. For example, the data could come from a context that does not match your problem.

Before you prompt an AI or hand a dataset to an analyst, ask yourself: What do I actually want to know? What problem am I trying to solve?

The better your question, the better your result.

Mistake No. 4: Ignoring Bias Until It Is Too Late

Bias often creeps into data implicitly, simply because of the world we live in. If your data reflects a reality where women and people of color are underrepresented in leadership, any algorithm trained on that data will learn those patterns and perpetuate them.

Amazon famously tried to use recruitment data to train an algorithm that was supposed to objectively decide who got job interviews. Instead, it learned that young white men had the highest historical success rate, and filtered accordingly. The algorithm was a mirror of past discrimination.

Some leaders believe the solution is to remove sensitive variables such as gender from the dataset. But gender correlates with many other factors — parental leave, for instance — so its traces remain everywhere. You still have the bias; you just can’t find it anymore.
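A tiny sketch of this effect, using synthetic data and hypothetical column names: even after the sensitive column is deleted, a single correlated proxy lets you reconstruct most of it, so any model trained on the "cleaned" data can still discriminate.

```python
import random

random.seed(0)

# Hypothetical toy world: women take career breaks (e.g. parental
# leave) far more often than men. We generate a sensitive column
# ("gender") and one correlated proxy ("career gap").
rows = []
for _ in range(2000):
    gender = random.choice(["f", "m"])
    gap = random.random() < (0.7 if gender == "f" else 0.1)
    rows.append((gender, gap))

# "Remove" the sensitive column: a model would see only the proxy.
proxy_only = [gap for _, gap in rows]

# Reconstruct the hidden column from the proxy alone:
# guess "f" whenever there is a career gap.
guesses = ["f" if gap else "m" for gap in proxy_only]
accuracy = sum(guess == truth
               for guess, (truth, _) in zip(guesses, rows)) / len(rows)
print(f"gender recovered from the proxy alone: {accuracy:.0%}")
```

In this toy setup the deleted column is recovered about four times out of five from one proxy. Real datasets contain many such proxies, which is why deletion hides bias rather than removing it.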

The solution is awareness first, then analysis. Build models that identify where bias exists, quantify its impact, and restructure your approach to remove it. This usually requires expert help, but the first step is knowing that you need to look.

Mistake No. 5: Confusing Prediction with Foresight

Leaders love predictions. Tell me what is going to happen next quarter. Tell me which customers will churn. Tell me where the market is heading.

But predictions are only as good as the assumption that the future will resemble the past. When structural factors change, such as new technology, economic shifts, or pandemics, predictions break down.

Consider population forecasting. If fertility, mortality, and migration rates stay stable, projections are easy. But what happens when women become more educated and have fewer children? When healthcare improves and people live longer? When a war triggers mass migration? The future is no longer structured like the past, and your models fail.

Instead of prediction, think in terms of foresight. Ask “what if” questions. What if this assumption changes? What if that variable shifts dramatically? Scenario planning does not give you certainty, but it gives you resilience. You are prepared for multiple futures instead of betting everything on one.

AI can help by generating and analyzing thousands of scenarios far faster than humans can. We have used this approach to understand how non-responders in a survey might behave differently, calculating countless variations to see how results would change. Before AI, we could model five or ten scenarios. Now we can model thousands.
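A minimal sketch of this kind of scenario sweep, with hypothetical numbers: instead of reporting one naive estimate from the people who answered a survey, vary the assumed behavior of the silent group across every possibility and report the full range of outcomes.

```python
# Hypothetical numbers: 400 of 1,000 surveyed people answered,
# and 60% of those responders said "yes". Sweep every assumption
# about the 600 non-responders instead of ignoring them.
observed_yes_rate = 0.60
responders, non_responders = 400, 600
total = responders + non_responders

outcomes = []
for pct in range(0, 101):
    assumed_silent_yes_rate = pct / 100  # one "what if" scenario
    yes_total = (observed_yes_rate * responders
                 + assumed_silent_yes_rate * non_responders)
    outcomes.append(yes_total / total)

print(f"naive estimate: {observed_yes_rate:.0%}")
print(f"scenario range: {min(outcomes):.0%} to {max(outcomes):.0%}")
```

The naive 60% turns into a range from 24% to 84%, which is an honest picture of how much the silent majority could move the result. Real scenario analyses vary many assumptions at once, which is where AI's speed helps.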

But the creativity — the ability to imagine a future no one has seen before — is still uniquely human. AI recombines existing information. Humans imagine endless possibilities.

Mistake No. 6: Chasing Wrong Incentives

There is a concept in economics called Goodhart’s Law: The moment you use a metric to control or incentivize behavior, it becomes unreliable as a measure because people will game it.

Imagine you want to measure theater performance in your city, so you count visitors, and theaters with low attendance will be slated for closure. If attendance is your KPI, a theater owner might give away free tickets. Now the theater is always full, and your metric is meaningless.

The same dynamic applies to data-driven management. When people know their behavior is being measured, they optimize for the metric, not necessarily for the outcome you actually want.

Mistake No. 7: Ignoring Data Literacy

Another important concept is circular data literacy. Those who generate data must understand what happens to it. Those who use data must understand its origin. Everyone needs to see the whole circle.

This is why we say that data literacy is not just about the people who analyze data, but about everyone in the organization who touches it. Customs officers compiling statistics need to understand how their numbers are used and why accuracy matters. The risk analysts need to understand where the data comes from and how much uncertainty it carries. When these two groups don’t talk to each other, you get bad incentives and flawed conclusions.

In Europe, the EU AI Act is coming into effect in stages and is pushing companies to ensure their employees are trained to use AI responsibly. But legislation treats this as an individual skill. I believe it needs to be an organizational capability — a culture of questioning, verifying, and understanding.

AI and data can be incredibly powerful tools, but they are not magic. They require the same rigor, skepticism, and judgment that any good business decision requires. The leaders who thrive will not be the ones who adopt AI fastest: Instead, they will be the ones who use it wisely.

Interested in becoming an EO member like Katharina? Learn more here.
