We are effectively building a trillion-dollar engine and forgetting to hook up the transmission.
The numbers surrounding the current “AI Gold Rush” are intoxicating. McKinsey projects that generative AI could add between $2.6 and $4.4 trillion in annual value to the global economy. According to Deloitte, a staggering 94% of executives believe AI will transform their industries within the next five years. Yet, if you look beneath the hood of most enterprise implementations, the engine is idling. Roughly 74% of companies fail to capture sufficient value from their AI investments. We’ve bought the shovels, but we’re digging in the wrong place.
The purpose of this post is to move past the hype and identify the friction points preventing a real return on investment. If your organization is struggling to see the “AI dividend,” it’s likely because you’ve prioritized the tool over the transformation.
Stop Buying “Gilded Shovels”: The Problem-First Framework
The FOMO-driven mandate to “do AI” is arguably the greatest destroyer of operational margin we’ve seen this decade. In the rush to appear “innovative,” organizations are practicing solution-first thinking: buying flashy, expensive tools—what I call “gilded shovels”—and then hunting for a problem to justify the purchase.
This is the strategic equivalent of walking through a dark room; you are guaranteed to smack your shins on the furniture. Strategic utility must always precede technological adoption. As Ridge Carpenter, AI Product Manager at Kelly Services, observes: “Most organizations are asking the wrong question. They should be asking: ‘What problems are causing us the most pain right now, and how could AI help solve them?’”
Organizations feel a relentless pressure to perform for the hype cycle, but the gold isn’t in the tool itself. It is found in the specific, mapped-out pain points of your business operations. If you haven’t identified the friction, the tool is just overhead.
“AI Helps, You Decide”: The Boundary of Human Authority
To find missing productivity, we must stop treating AI as a “worker” and start seeing it as a specialized assistant. AI excels at high-volume, low-stakes cognitive labor: searching, summarizing, drafting, and classifying. It fails—often spectacularly—at relationship building, long-term strategic planning, and nuanced decision-making.
The boundary of success is defined by a simple framework: “AI helps—you decide.” As Kelly Services research suggests, the authority and accountability must remain human. When we outsource the final call to an algorithm, we risk a workplace version of “Dead Internet Theory.”
In this context, the danger isn’t just “fake content”; it’s the incentivization of non-human factors. When employees start gaming the system by writing inputs specifically for the AI rather than for their human colleagues, the resulting decisions become impossible to explain in human terms. We lose the ability to observe why a choice was made, leading to an environment where the “black box” runs the boardroom.
The Hidden Friction: Automation Shifts Work, It Doesn’t Delete It
There is a seductive, flawed intuition that automation “removes” tasks. In reality, AI often introduces a new, heavier form of labor: the “Review Cycle.” Productivity dies when a human has to spend more time validating AI-generated garbage than it would have taken to do the work manually.
This leads to the most dangerous productivity killer of all: Automation-induced complacency. Think of it like a cockpit autopilot. When the machine catches 99% of errors, the human brain stops looking for the 1%. This “skill erosion” means that when a human does need to intervene, they are no longer sharp enough to handle the catastrophic outlier.
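The "Review Cycle" arithmetic can be made concrete: automation only nets out positive when the cost of reviewing (and occasionally redoing) AI output stays below the manual baseline. The function and figures below are an illustrative sketch, not data from any cited study.

```python
def net_minutes_saved(manual_minutes: float,
                      review_minutes: float,
                      rework_rate: float = 0.0) -> float:
    """Expected time saved per task when AI drafts and a human reviews.

    rework_rate: hypothetical fraction of outputs the reviewer must
    redo from scratch (the "catastrophic outlier" cost).
    """
    expected_cost = review_minutes + rework_rate * manual_minutes
    return manual_minutes - expected_cost

# A 30-minute manual task with a 10-minute review and 20% rework still wins;
# push the rework rate to 80% and the "automation" is slower than doing it by hand.
print(net_minutes_saved(30, 10, 0.2))  # 14.0 minutes saved per task
print(net_minutes_saved(30, 10, 0.8))  # -4.0: a net productivity loss
```

The point of the sketch is that the rework rate, not the drafting speed, dominates the outcome, which is exactly why calibrated trust matters.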
To find ROI, you must maintain “calibrated trust”—keeping humans “in the loop” to avoid three primary risks:
- Automation Bias: Blindly trusting the system’s output until a mistake becomes a disaster.
- Skill Erosion: Losing core competencies (like critical thinking) because we’ve outsourced the mental reps.
- Blind Spots: Missing systemic errors that fall outside the AI’s specific training data.
The Perception Gap: Why Your Staff Thinks AI is “Theater”
There is a massive disconnect between the view from the C-suite and the view from the cubicle. While 70% of executives believe AI is freeing up employee time, the Kelly Global Re:work data shows that fewer than half of workers feel they are actually getting that time back. Even more damning: fewer than 3 in 10 workers agree that workplace satisfaction is rising, despite executive optimism.
When AI training is a “box-checking” exercise, workers perceive it as “productivity theater.” This cynicism grows when leadership focuses solely on raw output—how many more emails can we send?—while ignoring the human intangibles.
“The worst habits in AI adoption reduce workers to their outputs while disregarding the thought, planning, and relationships that drive real impact. We need to shift from a hyper-focus on measurable outputs toward a holistic Return on Employee.”
To bridge this gap, AI must be presented as a career-enhancer, not a replacement. If the staff thinks the tool is a prelude to a layoff, they won’t use it to innovate; they’ll use it to hide.
Measuring the “Invisible”: Beyond the Output Spike
Traditional productivity benchmarks—volume and hours—are useless in the AI age. AI can create “Activity Inflation”: an employee might send three times as many emails because an AI wrote them, but if they gain no additional focus time for strategy, the real ROI is zero.
Data from ActivTrak suggests that we must move away from “activity” and toward “capacity.” If your AI tools are just creating more “Review Meetings” for AI-generated content, you are actually decreasing efficiency.
What to Measure Instead of Raw Output:
- True Focus Time: Are employees getting more uninterrupted hours for deep, strategic work?
- Collaboration Load: Is AI actually reducing meeting frequency, or is it just increasing the “coordination tax”?
- After-hours Work: Is the “workday” actually ending, or has AI just increased the volume of tasks to be checked at 9:00 PM?
- Stability vs. Spikes: Is the productivity gain a sustainable plateau or just a temporary “Trough of Disillusionment” activity spike?
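As a rough illustration, the shift from activity to capacity can be expressed as a simple before-and-after comparison. The metric names and sample figures below are hypothetical and not drawn from ActivTrak's product; they exist only to show the pattern of an output spike with no capacity gain.

```python
from dataclasses import dataclass

@dataclass
class WeeklySnapshot:
    # Hypothetical per-employee weekly averages; field names are illustrative.
    emails_sent: int       # raw output ("activity")
    focus_hours: float     # uninterrupted deep-work time ("capacity")
    meeting_hours: float   # collaboration load
    after_hours: float     # work logged after the nominal workday

def activity_inflation(before: WeeklySnapshot, after: WeeklySnapshot) -> bool:
    """Flag the pattern described above: raw output rises while none of the
    capacity metrics (focus time, meeting load, after-hours work) improve."""
    output_up = after.emails_sent > before.emails_sent
    capacity_up = (after.focus_hours > before.focus_hours
                   or after.meeting_hours < before.meeting_hours
                   or after.after_hours < before.after_hours)
    return output_up and not capacity_up

# Example: 3x the email volume, but no recovered focus time.
pre = WeeklySnapshot(emails_sent=40, focus_hours=10, meeting_hours=12, after_hours=3)
post = WeeklySnapshot(emails_sent=120, focus_hours=10, meeting_hours=13, after_hours=4)
print(activity_inflation(pre, post))  # True: an activity spike, not a capacity gain
```

A dashboard built this way reports the spike as a warning sign rather than a win, which is the inversion the section argues for.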
Conclusion: The Human-Centered Future
The future of work isn’t a competition between human and machine; it’s a race between the organizations that use AI to amplify people and those that use it to replace them.
To find your missing productivity gains, you must stop being a prospector hunting for the next “gilded shovel” and start being an architect. Look at your business’s actual pain points, calibrate your trust in the technology, and measure the things that actually matter—like focus, strategy, and human connection.
The “gold” isn’t in the tool; it’s in the human time that the tool is supposed to set free.
Are you using AI to help your people decide, or are you asking it to decide for them?

