by Rhoda Wilson, Expose News:

In July 2025, MIT published a study titled ‘The GenAI Divide: State of AI In Business 2025’. The study found that despite $30 – 40 billion of investment into GenAI, a surprising 95% of organisations are getting zero return.
Generative AI (“GenAI”) is a type of artificial intelligence (“AI”) that creates new content – such as text, images, music, or code – by learning patterns from existing data. It powers tools like ChatGPT, DALL·E and Google Gemini.
Josh Anderson is a fractional chief technology officer (“CTO”), a part-time executive who provides high-level technology leadership to organisations without the commitment and cost of a full-time CTO. A fractional CTO is particularly beneficial for startups, small and medium-sized businesses and companies in transition that require strategic technology guidance but cannot afford or do not need a full-time executive. In the following, he explains, from personal experience, why 95% of AI initiatives fail.
“We’re about to face a crisis nobody’s talking about,” he writes.
I Went All-In on AI. The MIT Study Is Right.
By Josh Anderson, 22 October 2025
You’ve seen the MIT study: 95% of corporate AI initiatives FAIL. You’ve probably shared it in meetings, posted about it on LinkedIn and used it to justify your AI concerns. But do you know why that number is so high? I do. Because I lived it. I spent three months becoming part of that 95% on purpose.
My Three-Month Experiment in Failure
As a fractional CTO and advisor, I kept getting the same question: “How should we use AI in our engineering teams?” I could have given the standard consultant answer about augmentation and efficiency. Instead, I decided to find out what actually happens when you go all-in.
I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering – 100% AI adoption. I needed to know firsthand why that 95% failure rate exists.
I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realised I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
Twenty-five years of software engineering experience, and I’d managed to degrade my skills to the point where I felt helpless looking at code that I’d directed an AI to write. I’d become a passenger in my own product development.
Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure – that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realise nobody actually understands what you’ve built.
The Pattern Every Failed Initiative Follows
The company gets excited about AI. Leadership mandates AI adoption. Everyone starts using AI tools. Productivity metrics look great initially. Then something breaks, or needs modification, or requires actual judgment, and nobody knows what to do anymore.
The developers can’t debug code they didn’t write. Product managers can’t explain decisions they didn’t make. Leaders can’t defend strategies they didn’t develop. Everyone’s pointing at their AI tools, saying, “It told me this was the right approach.”
During my experiment, I found myself in constant firefighting mode. Claude Code would generate something, it would be slightly off, I’d correct it, it would make the same mistake again, I’d correct it again. I was working harder than if I’d just written the code myself, but with none of the learning or skill development.
Bob Galen watched me go through this and called it perfectly in our latest podcast: “Who owns that product, Josh? You or Claude Code?” The answer was Claude Code. I’d abdicated ownership while telling myself I was being innovative.
The Right Balance (That Few Achieve)
The formula should be AI + HI, where HI (human intelligence) is the larger term. What is actually happening in those 95% of failures? It's AI with a tiny bit of human oversight, if any.
When AI helps you write better code faster while you maintain architectural understanding – that’s augmentation. When AI writes code you don’t understand – that’s abdication.
When AI helps you analyse customer feedback while you make product decisions – that’s augmentation. When AI tells you what to build next – that’s abdication.
When AI helps you write better and faster while you maintain your voice – that's augmentation. When AI writes for you in a voice that isn't yours – that's abdication.
I know the difference because I’ve been on both sides. The abdication side feels easier initially. You’re shipping more! You’re moving faster! Then you realise you’re not actually in control anymore, and when something goes wrong – and something always goes wrong – you’re helpless.
The Masters We’re Losing
We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.