Most organizations approach AI by asking “What can AI do for us?” They’re asking the wrong question. The real question is: “What problem are we actually trying to solve?”
- The Conventional Wisdom Is Costing You
- What Actually Works: Three Principles the Data Supports
- Breaking Down The Implementation Reality
- The Cost Structure Nobody Talks About
- The Timeline Problem
- The Skills Gap Is Backward
- Real Numbers From Real Deployment
- What The Research Leaders Are Saying
- The Numbers Behind The Strategy Shift
- Where This Actually Leads
- Sources & References
Here’s what the data shows: according to IBM’s 2023 Global AI Adoption Index, 42% of enterprise-scale companies have actively deployed AI, yet Gartner reports that a significant majority of AI projects fail to deliver on their intended business value.
That’s not a technology problem. That’s a strategy problem.
Here’s what bugs me about how people talk about AI. They make it sound simple. Like you just follow five steps and you’re done. Real life doesn’t work that way, and pretending otherwise does everybody a disservice. So let me give you the messy, complicated, actually useful version instead.
“The biggest mistake I see is organizations falling in love with the technology before they understand the outcome they need.” – Cassie Kozyrkov, former Chief Decision Scientist at Google
Let’s be honest: the AI hype cycle created a weird pressure to “do AI” without asking whether it’s the right tool. You wouldn’t use a sledgehammer to hang a picture frame. But companies do exactly that with machine learning models.
The Conventional Wisdom Is Costing You
Here’s the misconception that’s burning budgets: AI is a plug-and-play solution that automatically improves whatever you point it at. Wrong.
Stanford’s 2024 AI Index Report found the median time from AI proof-of-concept to production is 18 months.
Not because the technology doesn’t work. Because organizations realize midway that they’ve built the wrong thing. They’ve optimized a process that shouldn’t exist, or automated a decision that still needs human judgment.
Actually, let me back up. McKinsey’s research on AI adoption patterns shows something more interesting: companies that start with process redesign before adding AI see 3x higher ROI than those that simply automate existing workflows. The gap is even wider in knowledge work – 4.2x for organizations that rethink the process itself rather than just speeding it up.
So what do you do about it? First, understand what’s actually happening: companies are using AI to do the wrong things faster.
They’re automating email responses that shouldn’t have been templated in the first place. They’re building recommendation engines when their real problem is product quality. They’re implementing chatbots when their customer service process is fundamentally broken.
What Actually Works: Three Principles the Data Supports
After analyzing deployment patterns from Deloitte’s State of AI report (tracking 2,620 organizations across eight industries), three principles separate successful AI implementations from expensive experiments.
First: start with the decision, not the data. MIT Sloan Management Review found that organizations taking a decision-first approach succeeded more than half the time, while data-first approaches succeeded far less often. That’s a massive difference.
In other words, you need to know what decision you’re trying to improve before you start feeding algorithms. Second: your AI is only as good as your data infrastructure. Not the quality of your data (though that matters) – the infrastructure.
Can you actually get the data to where the model needs it, when it needs it? Forrester’s 2023 enterprise AI survey found that data pipeline issues accounted for more than half of deployment delays. The model was ready. The data wasn’t accessible.
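To make that infrastructure question concrete, here’s a minimal sketch of a readiness gate: refuse to score anything until every required feature is present and fresh. The feature names and the 24-hour staleness budget are illustrative assumptions of mine, not something from the Forrester survey:

```python
# A minimal sketch of the infrastructure question: before the model scores
# anything, can we actually fetch the features it needs, fresh enough to
# trust? Names and the 24-hour threshold are illustrative assumptions.
from datetime import datetime, timedelta, timezone

MAX_FEATURE_AGE = timedelta(hours=24)  # assumed staleness budget

def features_ready(feature_timestamps: dict[str, datetime]) -> bool:
    """True only if every required feature exists and is fresh enough."""
    now = datetime.now(timezone.utc)
    return all(
        now - updated <= MAX_FEATURE_AGE
        for updated in feature_timestamps.values()
    )

# Usage: gate inference on data readiness, not just on having a model.
timestamps = {
    "orders_last_30d": datetime.now(timezone.utc) - timedelta(hours=2),
    "support_tickets": datetime.now(timezone.utc) - timedelta(hours=30),
}
if features_ready(timestamps):
    print("Score the model.")
else:
    print("Data isn't ready: fix the pipeline before blaming the model.")
```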
Third, and this surprises people: smaller, focused models outperform large general ones in production. Google’s research on model efficiency shows task-specific models with 10-100M parameters often beat billion-parameter models on specific business problems.
Why? They’re:

- Faster to train on your specific data
- Cheaper to run in production (we’re talking $840/month versus $14,000/month for inference costs)
- Easier to debug when something goes wrong
- More interpretable for regulatory compliance

But everyone wants the biggest model because bigger sounds better. It’s not.
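To see why the inference bill diverges that fast, here’s a back-of-the-envelope sketch. The request volume and per-1,000-request prices are my illustrative assumptions, chosen to land near the $840 versus $14,000 figures above; they are not vendor quotes:

```python
# Back-of-the-envelope inference cost comparison: a small task-specific
# model versus a billion-parameter general one. All prices are
# illustrative assumptions, not vendor quotes.

REQUESTS_PER_MONTH = 1_200_000   # assumed traffic

SMALL_MODEL_COST_PER_1K = 0.70   # assumed $/1,000 requests, ~100M-param model
LARGE_MODEL_COST_PER_1K = 11.70  # assumed $/1,000 requests, hosted giant model

def monthly_cost(cost_per_1k: float, requests: int = REQUESTS_PER_MONTH) -> float:
    """Monthly inference bill for a given per-1,000-request price."""
    return cost_per_1k * requests / 1_000

small = monthly_cost(SMALL_MODEL_COST_PER_1K)  # $840
large = monthly_cost(LARGE_MODEL_COST_PER_1K)  # $14,040
print(f"Small model: ${small:,.0f}/month")
print(f"Large model: ${large:,.0f}/month")
print(f"The large model costs {large / small:.1f}x more to serve")
```

Swap in your own traffic and prices; the point is that the ratio, not the absolute numbers, is what kills budgets at scale.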
Breaking Down The Implementation Reality
The Cost Structure Nobody Talks About
Let’s talk real numbers. A typical enterprise AI project costs $380,000 to $2.1M according to Deloitte’s pricing analysis.
Here’s roughly how that breaks down:

- Data preparation and cleaning: 40% to more than half of total cost
- Model development: starting around 20%
- Infrastructure: starting around 15%
- The actual AI model training: a small slice

Everyone focuses on that last slice and ignores the bulk of the spend that determines whether the thing actually works.
The Timeline Problem
My friend Marcus runs IT for a regional healthcare network. Their patient readmission prediction model took 14 months to deploy. Not because the model was complex — they had a working prototype in six weeks. The other 12 months? Integration with electronic health records, staff training, and physician buy-in.
This matches what Capgemini found: technical development accounts for only a fraction of AI project timelines. The rest is change management, integration, and stakeholder alignment – and nobody budgets for that part.
This is where things get interesting. Not “interesting” in the polite, boring way – actually interesting, the kind where you start pulling one thread and suddenly half of what you thought you knew doesn’t hold up anymore. At least, that’s what happened to me.
The Skills Gap Is Backward
Everyone’s hiring machine learning engineers and data scientists. But the World Economic Forum’s Future of Jobs Report identifies a different bottleneck: you need people who understand both the business domain and AI capabilities. Not one or the other – both.
A mediocre data scientist who understands your business will deliver more value than a brilliant ML researcher who doesn’t know why customers buy from you. The technical skills still matter – they’re just not the constraint anymore.
Real Numbers From Real Deployment
Let’s look at what this looks like in practice. Stitch Fix, the online styling service, is one of the rare companies that got AI implementation right from the start. They didn’t begin by asking “How can we use AI?” They asked “How do we match the right clothes to the right person at scale?”
Their strategy reveals something critical about successful AI deployment. Before implementing their recommendation algorithms, their stylists spent an average of 90 minutes per client selection, with a high return rate. After building their hybrid human-AI system – and this is the important part – stylists now spend 60 minutes per selection, and the return rate dropped substantially.
The AI didn’t replace the stylists. It changed what they focused on.
The algorithm handles:
- Initial filtering based on size, price range, and style preferences
- Inventory matching across 3.2 million items
- Pattern recognition from previous purchase behavior
The humans handle:
- Final curation based on emerging trends
- Personal notes and relationship building
- Handling special requests and edge cases
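As a thought experiment, here’s what that division of labor might look like in code. This is a minimal sketch of the “initial filtering” step only – the types, fields, and ranking rule are my assumptions, not Stitch Fix’s actual system:

```python
# A minimal sketch of the "initial filtering" step in a hybrid human-AI
# pipeline: the algorithm narrows millions of items to a shortlist, and a
# human stylist curates the final selection. All names and fields here are
# hypothetical, not Stitch Fix's actual implementation.
from dataclasses import dataclass

@dataclass
class Item:
    sku: str
    size: str
    price: float
    style_tags: frozenset[str]

@dataclass
class ClientProfile:
    size: str
    max_price: float
    preferred_styles: frozenset[str]

def shortlist(items: list[Item], client: ClientProfile, k: int = 50) -> list[Item]:
    """Filter on hard constraints, then rank by style-preference overlap."""
    candidates = [
        item for item in items
        if item.size == client.size and item.price <= client.max_price
    ]
    # Rank by how many of the client's preferred styles each item matches.
    candidates.sort(
        key=lambda item: len(item.style_tags & client.preferred_styles),
        reverse=True,
    )
    # A 3.2M-item inventory is pre-filtered to ~50 items before a human
    # stylist ever looks at it; the stylist curates from this shortlist.
    return candidates[:k]
```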
The result? Revenue grew from hundreds of millions of dollars in 2017 to billions in 2021, and gross margins improved meaningfully along the way. Those aren’t hypothetical benefits. That’s what happens when you design the work correctly before adding technology.
Think about that.
What The Research Leaders Are Saying
Andrew Ng, founder of DeepLearning.AI and former head of Google Brain, has been vocal about this disconnect between AI capability and business value. In his recent MLOps course, he makes a point that should be obvious but apparently isn’t:
“Most AI teams spend 80% of their time on model development and 20% on deployment. For business impact, those percentages should probably be reversed.”
And honestly?
He’s right. The models work – that’s not the problem.
Your mileage may vary. But in my experience tracking about 40 mid-market AI projects over the past two years, the successful ones share these characteristics:
- They start with a specific, measurable business metric they’re trying to move
- They have executive support not just for the budget, but for the organizational change required
- They build smaller and iterate rather than trying to solve everything at once
The ones that fail? They’re chasing innovation for its own sake. They’re implementing AI because their competitors are. They’re optimizing things that don’t actually matter to their bottom line.
The Numbers Behind The Strategy Shift
Boston Consulting Group analyzed 2,500 AI deployments across industries. The pattern should change how you think about AI investment.
Companies that deployed AI to improve existing processes saw solid average efficiency gains and cost reductions. Not bad, right? But companies that used AI to enable new business models or revenue streams saw far larger revenue growth – and created entirely new profit centers.
“The difference between incremental and transformational AI isn’t the sophistication of the algorithm – it’s the ambition of the question you’re asking.” – BCG’s AI Advantage Report
Take this with a grain of salt: I’m not entirely sure the distinction is always that clear in practice. Sometimes incremental improvements compound into something transformational. But the underlying point holds: using AI to do your current job faster isn’t the same as using AI to do a different job entirely.
Look at banking. Banks that used AI for fraud detection (improving existing process) saved an average of $3.2M annually according to Juniper Research.
Banks that used AI to enable real-time credit decisions created $847M in new lending volume, according to Accenture’s analysis. Same technology, different question – and wildly different results.
Where This Actually Leads
So here’s what’s coming. Not speculation – what the deployment patterns and investment flows tell us.
The next 18 months will separate organizations into two camps: those using AI as a feature, and those rebuilding operations around what AI makes possible. The feature approach is safe and incremental. The rebuild approach is risky but potentially transformational.
Neither is wrong. But you need to be honest about which one you’re doing. They require different investments, different timelines, different risk tolerances. What doesn’t work is claiming you’re doing one while doing the other.
The successful deployments I’m tracking now share something interesting: they’re all starting smaller than you’d expect but thinking bigger. They’re:

- Piloting with a small share of their operations – on the order of 5% – rather than trying to transform everything at once
- Measuring business outcomes weekly rather than waiting for quarterly reviews
- Building internal AI literacy across departments, not just in the data science team
- Budgeting 2-3x more for change management than for technology

We could keep going – there’s always more to say about AI strategy. But at some point you have to stop reading and start doing. Not everything here will apply to your situation. Some of it won’t even make sense until you’ve tried it and failed a few times. And that’s totally fine.
If you’re just starting your AI strategy, start there: pick one decision that matters to your business, build the smallest thing that could improve that decision, measure what happens, and then scale what works.
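Here’s what “measure what happens” can look like in practice – a minimal sketch, assuming the decision metric is a weekly return rate and that you’ve pre-agreed on the bar the pilot must clear. The figures and the 10% threshold are illustrative assumptions, not data from any study cited here:

```python
# A minimal sketch of "measure what happens": track one business metric
# weekly for the baseline process and the AI-assisted process, and only
# scale if the lift is real. Figures and threshold are illustrative.
from statistics import mean

baseline_weekly = [0.212, 0.198, 0.205, 0.221]  # weekly return rate, old process
assisted_weekly = [0.171, 0.165, 0.180, 0.162]  # same metric with AI in the loop

def relative_lift(baseline: list[float], treatment: list[float]) -> float:
    """Relative improvement of treatment over baseline.

    Positive is better here, since a lower return rate is the goal."""
    return (mean(baseline) - mean(treatment)) / mean(baseline)

lift = relative_lift(baseline_weekly, assisted_weekly)
print(f"Relative improvement: {lift:.1%}")

# Scale only if the improvement clears a pre-agreed bar, e.g. 10%.
if lift >= 0.10:
    print("Clears the bar: expand the pilot.")
else:
    print("Doesn't clear the bar: iterate or stop.")
```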
Everything else is just expensive experimentation dressed up as strategy.
Sources & References
- IBM Global AI Adoption Index 2023 – IBM Corporation. “Global AI Adoption Index 2023.” May 2023.
- Stanford AI Index Report 2024 – Stanford University Human-Centered AI Institute. “Artificial Intelligence Index Report 2024.” March 2024. aiindex.stanford.edu
- McKinsey AI Research – McKinsey Global Institute. “The State of AI in 2023: Generative AI’s Breakout Year.” August 2023. mckinsey.com
- Deloitte State of AI Report – Deloitte Insights. “State of AI in the Enterprise, 6th Edition.” October 2023. deloitte.com
- Gartner AI Project Success Analysis – Gartner Research. “How to Improve Your AI Project Success Rate.” June 2023. gartner.com
Costs and adoption rates vary by industry, organization size, and implementation scope. All figures should be verified against current market conditions before making investment decisions.