3 Reasons Most Companies Are Getting AI Wrong (And What To Do About It)
April 29, 2026
What a week at Stanford taught EO members about the gap between AI ambition and AI reality: Instead of chasing tools, leaders must simplify operations, target repeated decisions, and build cultures capable of using AI effectively.
In March 2026, I spent five days at EO’s inaugural executive education program, the EO Stanford Graduate School of Business: AI Integration Lab, sitting across the table from 11 of the world's leading researchers in artificial intelligence: computer scientists, economists, organizational behaviorists, and political scientists. These are people who have spent decades studying not just what AI can do, but what happens to organizations, markets, and societies when it arrives.
I went to learn. What I came back with was something different: A clear-eyed diagnosis of why the conversation about AI in most boardrooms is focused entirely on the wrong questions.
The question most leaders are asking is: Which AI tools should we deploy? The question they should be asking is something else entirely. But to get there, we need to understand three mistakes that are costing companies time, money, and competitive position right now.
Mistake 1: Adding AI Before Subtracting Complexity
Professor Huggy Rao, one of Stanford's foremost organizational behavior researchers, opened his session with a cooking analogy. When you scale a recipe from four people to forty, you don't just multiply the ingredients. You reinvent the dish. The same logic applies to scaling organizations — and to implementing AI.
His observation is deceptively simple: Humans have a deep bias toward adding. We solve problems by adding — new tools, new processes, new software layers. We almost never subtract. And when you deploy AI on top of a cluttered, friction-heavy organization, you don't fix the clutter. You amplify it.
The evidence from his case studies is striking. Hawaii Pacific Health removed unnecessary administrative steps from clinical workflows — no AI involved, just subtraction — and gave back thousands of hours of clinical time per year. AstraZeneca's simplification initiative saved millions of employee hours in R&D. Both organizations called it a “gift of time.”
"When using AI, think of subtraction, not just addition. AI is not something you just add. You have to think about what it is you want to subtract."
— Prof. Huggy Rao, Stanford GSB
The instruction Rao gave the room has stayed with me: Mow the lawn before you plant new seeds. Map your most friction-heavy processes. Eliminate those that exist only because of habit or hierarchy. Then — and only then — consider what AI can accelerate.
Most organizations are doing it in the wrong order. They are deploying AI on top of chaos and wondering why the results disappoint.
Mistake 2: Starting with the Technology Instead of the Decision
This is the conceptual heart of what I learned, and it came from an unlikely source: A consumer psychology professor named Jonathan Levav, who opened his session by quoting Snoop Dogg.
The quote: “I like going to areas where the murder rate is high and dropping it.” His point: the most effective innovators don't ask “What can our technology do?” They ask “Where is the need most acute?” — and remain agnostic about the solution until they have answered that question.
Applied to AI, Levav offered a reframe that I think is the most practically useful thing I heard all week: Every business decision is, at its core, a prediction. You choose option A over option B because you predict it will produce a better outcome. What AI uniquely offers is the ability to mechanize the judgment underlying that prediction — at scale, without fatigue or the emotional bias that clouds human decision-making under pressure.
This reframe changes everything. The question shifts from “Where can I use AI?” to: Which of my organization’s most important repeated judgment calls would improve if I could make them more consistently, at scale, with better data?
Go where the need is most acute. Find the highest-stakes repeated decision in your business — the one made 50 times a week, with imperfect information, under time pressure — and start there.
The operational framework to follow from there is straightforward: Frame the decision precisely, identify the data that drives it, build and validate a model, deploy it with a monitoring plan, and connect the model’s output to the decision process. That last step — connecting prediction to decision — is where most pilots quietly die.
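The five-step loop above can be sketched in deliberately simplified form. Everything here is an illustrative assumption, not something from the program: the "expedite this order?" decision, the feature names, the hand-set weights (a stand-in for a trained model), and the audit log are all invented to show the shape of the framework, especially the final step of wiring the prediction into the decision itself.

```python
# A minimal sketch of the five-step framework, under invented assumptions.
# The decision: should an order be expedited to avoid a delivery delay?

def predict_delay_risk(order):
    """Step 3: a deliberately simple scoring model (a stand-in for a
    validated ML model). Returns a delay-risk score in [0, 1]."""
    score = 0.0
    score += 0.5 if order["carrier_backlog"] > 100 else 0.0
    score += 0.3 if order["distance_km"] > 500 else 0.0
    score += 0.2 if order["is_peak_season"] else 0.0
    return score

def decide(order, threshold=0.5, audit_log=None):
    """Step 5: connect the prediction to the actual decision, with a
    monitoring hook (step 4) so thresholds and drift can be reviewed."""
    risk = predict_delay_risk(order)
    action = "expedite" if risk >= threshold else "standard"
    if audit_log is not None:
        audit_log.append({"risk": risk, "action": action})
    return action

# Step 1 framed the decision; step 2 is the data that drives it, per order.
orders = [
    {"carrier_backlog": 120, "distance_km": 800, "is_peak_season": True},
    {"carrier_backlog": 10,  "distance_km": 50,  "is_peak_season": False},
]

log = []
decisions = [decide(order, audit_log=log) for order in orders]
print(decisions)  # the high-risk order is expedited, the low-risk one is not
```

The point of the sketch is the last function: the model's output feeds directly into the action and leaves an audit trail. In real deployments, that connection is a policy decision as much as a technical one, which is exactly where, per the framework, most pilots stall.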
Mistake 3: Believing Technology is the Hard Part
This was the most humbling lesson of the week, delivered by Professor Charles O’Reilly, who has spent 30 years studying why successful companies fail when disrupted.
He told the story of the Swiss watch industry in the mid-1960s. Omega’s own engineers invented the quartz movement and presented it to senior management. The response: It doesn't work like our watches; we will never make money on something this cheap; it will damage our brand; and anyway, we are mechanical engineers — we know nothing about electronics. They passed.
Six months later, Seiko saw the same technology at a trade show in Paris. Over the next 15 years, 60,000 Swiss watchmaking jobs were lost. Omega survived only through a radical portfolio restructuring that took decades to implement.
Omega had the technology and rejected it anyway. This was not a technological failure; it was a failure of leadership and culture.
O’Reilly’s research across hundreds of companies shows that this pattern repeats. Kodak invented digital photography. Blockbuster had the chance to buy Netflix for $50 million USD. Nokia understood smartphones before the iPhone launched. In almost every major case of corporate disruption, the incumbent had the technology or access to it. What they lacked was the culture to use it — and the leadership to build that culture before the window closed.
The comparison that landed hardest: Pfizer can license the same mRNA technology as Moderna. What Pfizer cannot buy is Moderna’s 15 years of digital-native culture, embedded in every process, every hire, and every default assumption about how work gets done. Moderna designed its COVID-19 vaccine in two days and entered clinical trials within 66 days. That speed is not a technological advantage. It is entirely a cultural advantage.
A 2023 PwC survey found that 40 percent of CEOs were unsure whether their company would be economically viable in 10 years. O’Reilly’s reading of that statistic: In most of those cases, it will not be a technology problem that finishes them. It will be a leadership failure to build the culture that technology requires.
So, what should you actually do?
Three Practical Moves, in Sequence
- Subtract first. Run a two-hour session with your leadership team this week with one question on the table: What slows us down for no good reason? Pick three answers. Eliminate them. Do this before touching any AI tool.
- Identify one high-stakes repeated decision. Not a once-a-year strategic choice — a judgment call made dozens of times a week, under time pressure and with imperfect data. Frame the prediction underlying it. That is your first AI project.
- Ask the cultural question honestly. If Moderna’s digital-native culture replaced yours on Monday morning — what would change? The gap between your answer and your current reality is your transformation roadmap.
The Question Worth Sitting With
I want to close with a story that Professor Huggy Rao told on the final day, because it reframed the entire conversation for me.
A major retailer had deployed large language models to monitor millions of network log entries and flag potential outages before they occurred. The technical results were excellent — network reliability improved measurably. But when the EVP responsible was asked what he was most proud of, his answer had nothing to do with uptime metrics.
"My network engineers can sleep better," he said.
That single sentence, I think, is the most honest measure of AI done right. Not efficiency gains. Not cost reductions. Not benchmark scores. The question worth asking in every AI initiative — before you buy the tool, before you run the pilot, before you present the business case — is this:
What is the equivalent of the midnight call in your business — and how would your team’s lives change if AI eliminated it?
The organizations that will lead in the next decade are not the ones moving fastest. They are the ones asking better questions.
Contributed by Robert van der Zwart, an EO Netherlands member, who is a coach, keynote speaker, founder of AIPO Network, and a host and organizer of the virtual EO Global AI Summit #4: Transforming to an AI-First Company, which took place on 26 February 2026.
Related posts of interest:
- Attendee Takeaways from EO’s Stanford AI Integration Lab
- EO Global AI Summit 2026: Transforming to an AI-First Company
- Your AI Is Only as Smart as Your Data: 7 Mistakes Leaders Make When Combining AI and Analytics
- 5 Ethical AI Pillars To Ensure Responsible Use in Your Organization
- How AI Competitions Turn Curiosity Into Business Capability
- Why Most AI Projects Fail (And What Leaders Miss)