The AI Adoption Gap Is Widening. Which Side Are You On?
Same tools. Same access. Wildly different results.
OpenAI surveyed 9,000 workers. The data should make every leader uncomfortable.
TL;DR
OpenAI’s first State of Enterprise AI report dropped yesterday with data from 9,000 workers across 100 enterprises
The headline stats are impressive (8x usage growth, 40-60 min saved daily), but the real story is the widening gap between AI leaders and laggards
Frontier workers get 6x more value from AI than median workers at the same company, using the same tools
The constraint isn’t technology. It’s organizational readiness and leadership.
Five practices separate leading firms from the rest. I break down each one with specific actions you can take this week.
OpenAI just released its first State of Enterprise AI report. The headline numbers are impressive: ChatGPT Enterprise usage grew 8x year-over-year. Workers report saving 40-60 minutes per day. 75% say AI has improved either the speed or quality of their work.
You’ll see those stats everywhere today.
But buried on page 13 is the number that should keep every leader awake tonight: frontier workers are getting 6x more value from AI than the median worker at the same company, using the same tools.
Same licenses. Same access. Same technology. Wildly different results.
This isn’t a technology gap. It’s a leadership gap.
The Fork in the Road
The report makes one thing clear: enterprise AI has moved past the “should we adopt?” phase. Over 1 million businesses now use OpenAI’s tools. Seven million people have ChatGPT workplace seats. The question isn’t whether AI works. It does.
The question is whether your organization is capturing the value that’s sitting on the table.
And most aren’t.
Among active ChatGPT Enterprise users, 19% have never used data analysis. 14% have never used reasoning. 12% have never tried search. These are people with access to the most capable AI tools available, and they’re using a fraction of what’s possible.
The pattern holds at the firm level too. Frontier firms generate 2x more messages per seat than the median enterprise and 7x more messages through Projects and Custom GPTs. They’re not just using AI more. They’re embedding it into how work actually gets done.
OpenAI’s COO Brad Lightcap told TechCrunch: “There are firms that still very much see these systems as a piece of software, something I can buy and give to my teams and that’s kind of the end of it.” That mindset is now a competitive liability.
The International Wake-Up Call
One data point surprised me: the fastest-growing enterprise AI markets aren’t in Silicon Valley.
Australia grew 187% year-over-year. Brazil: 161%. The Netherlands: 153%. France: 146%.
All outpacing the United States (142%) and the global average (143%).
If you’ve been waiting to see how AI adoption plays out before making big moves, your global competitors aren’t waiting with you. The window for “fast follower” strategy is closing.
What I’m Seeing in Practice
The OpenAI data confirms something I experienced firsthand this fall.
I recently completed a six-week AI training engagement with a mid-sized technology company. But here’s what made it work: it didn’t start with training the employees. It started with the CEO and leadership team taking my four-week Maven course first.
They wanted to learn what they didn’t know before asking their teams to change.
After experiencing it themselves, a member of the executive team committed to a six-week upskilling program for 80+ employees focused on Copilot and Gemini. Not generic AI training. Hands-on application to high-value use cases specific to their work.
More importantly, leadership created the conditions for success. They built a culture of learning and carved out dedicated space and time for employees to explore how AI could improve their workflows. No one was expected to figure it out in the margins of an already-packed schedule.
The catalyst wasn’t a technology decision. It was leadership curiosity and the courage to be learners first.
This is what the OpenAI report is really pointing to. The tools are available to everyone. What separates frontier firms is leaders who model the behavior they’re asking for.
What Leading Firms Actually Do
The report identifies five practices that separate frontier organizations from the rest. I’ve added the specific leadership action that makes each one real.
1. Deep system integration
Leading firms turn on connectors that give AI secure access to company data. This enables context-aware responses and automated actions. One in four enterprises still hasn’t taken this basic step.
Your move: Schedule 30 minutes with your CIO this week. Ask one question: “Which data sources are connected to our AI tools, and which aren’t?” The answer will tell you how much value you’re leaving on the table.
2. Workflow standardization and reuse
Frontier firms actively promote the creation and sharing of repeatable AI solutions. Projects and Custom GPTs power this work. BBVA, for example, has over 4,000 Custom GPTs in regular use, turning AI from a novelty into institutional infrastructure.
Your move: Create an internal channel for AI workflows. Ask your top 5 power users to share one thing that saves them time. Make wins visible. Most organizations have pockets of excellence that never spread because no one knows about them.
3. Executive leadership and sponsorship
Clear mandates, dedicated resources, and space for experimentation. This isn’t about cheerleading AI. It’s about treating it as strategic infrastructure rather than an IT project.
The company I mentioned earlier got this right. Leadership didn’t just approve a training budget. They participated first, then protected time for their teams to learn. That sequence matters. When employees see their executives investing personal time to learn AI, it signals that this isn’t optional.
Your move: Put AI on the agenda for your next leadership meeting. Not as an update from IT. As a strategic question: “If every employee saved 5 hours per week, what would we do with that capacity?” Better yet, take a course yourself before asking your team to.
4. Data readiness and evaluations
Leading firms codify institutional knowledge into machine-readable formats, build APIs for key data pipelines, and run continuous evaluations to track what’s actually working.
Your move: Identify your top 3 “knowledge bottlenecks,” the questions your team asks repeatedly that require senior expertise. That’s where AI can have immediate impact. Start there.
5. Deliberate change management
This means structured training, clear governance, and embedded AI champions, not “roll it out and hope.”
The six-week program I ran wasn’t just about teaching tools. It was about creating space for employees to experiment, fail safely, and share what they learned. That culture doesn’t happen by accident. It requires leaders who explicitly protect time for learning and normalize the discomfort of being a beginner again.
Your move: Identify one AI champion per department. Give them explicit permission to experiment and 2 hours of protected time per week. Change doesn’t happen by memo. It happens through people who have room to try.
The Real Constraint
Here’s what struck me most about this report. The technology works. The ROI is measurable. The use cases are proven.
And yet, most organizations capture only a fraction of the available value.
The report says it directly: “The primary constraints for organizations are no longer model performance or tooling, but rather organizational readiness.”
Organizational readiness is a leadership problem. It’s about whether executives are willing to change how they lead, how they allocate attention, and how they define what “good” looks like for their teams.
The data shows a clear pattern: workers who engage across more AI task types save significantly more time. Those using 7 task types save 5x more than those using 4. The difference isn’t intelligence or technical skill. It’s the willingness to experiment, to push past the first use case, to become the kind of professional who works with AI rather than around it.
That willingness starts at the top.
Leaders who are curious enough to learn themselves create permission for everyone else to do the same.
What the Case Studies Reveal
The report includes six detailed case studies. Here’s the pattern worth noting:
Intercom → Voice AI for phone support → 53% of calls resolved end-to-end
Lowe’s → Expert guidance chatbot → 1M questions/month, 2x conversion
Indeed → Job matching with explanations → 20% more applications started
BBVA → Legal signatory verification → 9,000 queries automated annually
Oscar Health → Benefits navigation → 58% of questions answered instantly
Moderna → Document analysis → Weeks compressed to hours
Notice what these have in common. They’re specific, bounded use cases. One workflow. One bottleneck. Measured outcomes.
None of these started with “transform everything.” They started with “fix this one painful thing.” That’s where momentum begins.
The Question This Report Is Really Asking
The gap between AI leaders and laggards is widening, not closing. The data is unambiguous.
But this isn’t a technology story. It’s a leadership story.
The tools are available. The playbook is emerging. The results are measurable.
The only question left is whether you’re willing to lead differently to capture them.
I’m curious: Where does your organization fall on the spectrum? And what’s the one thing blocking you from moving up? Hit reply or drop a comment. I read every response.
For paid subscribers: I’m putting together a deeper breakdown of how I structured the 6-week AI upskilling program mentioned above, including the session-by-session framework and how we identified high-value use cases. Reply if you’d find that useful, and I’ll prioritize it. Feel free to post your feedback in our Slack workspace.