
THE BIG STORY
97% of Organizations Have AI Initiatives. Only 5% Say Their Data Is Ready. That 92-Point Gap Explains Everything.
Dun & Bradstreet's AI Momentum Survey — 10,000 businesses across 32 countries, Q1 and Q2 2026 — lands the most precise diagnosis of the enterprise AI bottleneck this series has found. The constraint is no longer the model. It is the data the model is operating on.
Every survey in this series has been circling the same underlying question: why are 46% of enterprise AI initiatives falling short, why are only 34% of organizations genuinely reimagining their businesses, and why is the 16x Codex gap between frontier and typical firms so persistent? The D&B AI Momentum Survey answers that question with the most direct number yet: 97% of organizations have active AI initiatives, and only 5% say their data is adequately ready to support them.
The D&B finding is not surprising to anyone who has tried to deploy AI at scale in a large enterprise. What it reveals is that the AI adoption problem is fundamentally a data infrastructure problem. Organizations have invested heavily in AI models, AI tools, AI governance platforms, and AI training programs. The vast majority have not invested proportionally in making their data verified, current, structured, and accessible in the way that AI systems require to operate reliably. The result is pilots that work on clean sample data and fail in production on the messy, outdated, siloed data that actually runs the business.
"The constraint is no longer the model; it is whether AI can operate on verified, continuously refreshed business identity across systems." — Cayetano Gea-Carrasco, Chief Strategy Officer, Dun & Bradstreet, May 4, 2026
The specific data problem D&B identifies is commercial identity — the verified, structured information about business entities that underpins KYC, KYB, and onboarding workflows. But the underlying principle is broader. AI systems that need to make reliable decisions require data that is verified against authoritative sources, continuously refreshed rather than periodically updated, and structured consistently across systems. Most enterprise data has none of these properties.
The 60% ROI figure is important context. 60% of businesses report at least some measurable ROI from AI, and 24% report broad or strong returns. The data gap has not prevented the market from generating value. It has prevented the market from generating the full value that frontier-grade AI capabilities would enable if the data were ready. The data readiness gap and the transformation gap are the same gap viewed from different angles.
The practical action item: before investing in the next AI model upgrade, governance platform, or agentic automation initiative, ask what percentage of the data the AI system will operate on is verified against an authoritative external source, refreshed in real time, and structured consistently enough that the AI can reason about it without disambiguation errors. Organizations that can answer those questions precisely are in the 5%. The rest are among the 92% running initiatives their data cannot yet support — paying for AI that will continue to underperform.
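That audit can be made concrete. A minimal sketch, assuming each record carries three boolean quality flags — the flag names (`verified`, `fresh`, `consistent`) are hypothetical stand-ins for whatever checks your data platform actually exposes:

```python
# Hedged sketch: scores a dataset against the three readiness dimensions
# named above (verified source, real-time refresh, consistent structure).
# The flag names are hypothetical, not from any vendor's schema.

def readiness_pct(records):
    """Percentage of records passing all three readiness checks."""
    if not records:
        return 0.0
    ready = sum(
        1 for r in records
        if r.get("verified") and r.get("fresh") and r.get("consistent")
    )
    return 100.0 * ready / len(records)

sample = [
    {"verified": True,  "fresh": True,  "consistent": True},
    {"verified": True,  "fresh": False, "consistent": True},
    {"verified": False, "fresh": True,  "consistent": True},
    {"verified": True,  "fresh": True,  "consistent": True},
]
print(readiness_pct(sample))  # 50.0
```

Running this per use case gives the precise answer the 5% can produce — and a baseline for everyone else.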
Sources: Dun & Bradstreet AI Momentum Survey, May 4, 2026 / Computerworld, May 14, 2026 / Yahoo Finance / PRNewswire, May 4, 2026
THE NUMBER
5%
of enterprises say their data is adequately ready to support their AI initiatives — from a survey of 10,000 businesses across 32 countries.
This is the most precisely quantified expression of the enterprise AI bottleneck this series has found. Deloitte's 37% surface-level cohort. Coastal's 46% shortfall rate. OpenAI's 16x Codex gap. The Google GTIG zero-day exploiting ungoverned agents. Every gap this series has documented traces back to the same foundational condition: organizations are deploying AI on data that was never prepared to be AI's operating environment. The 5% who say their data is ready are, almost certainly, the same population generating the 74% of AI's economic returns that PwC documented in this series' opening edition.
Moving Pieces
[⚠ Conflict] Dun & Bradstreet Brings Verified Commercial Identity Into Claude via MCP
On May 5, Dun & Bradstreet announced a collaboration with Anthropic to bring D&B risk data directly inside Claude through a Model Context Protocol server — enabling businesses to create KYC/KYB workflows inside Claude that operate on verified commercial identity data. The integration allows compliance teams to onboard business customers, verify supplier relationships, and assess credit risk using D&B's Commercial Graph of 500 million business records, all within Claude's interface. "Claude isn't just being given more data; it's being given the verified context and decision logic required to act," said Alex Zuck, General Manager of Risk at D&B. This represents the first integration of a major verified business identity data source into a frontier AI model via MCP — addressing the exact data quality gap the D&B AI Momentum Survey documents.
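Mechanically, an MCP integration like this means Claude invokes the D&B server's tools over JSON-RPC 2.0. A sketch of the envelope shape — the tool name and argument field below are hypothetical, since D&B has not published its schema; only the `tools/call` request structure is part of the MCP protocol:

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request envelope as a JSON string.

    MCP tool invocations travel as JSON-RPC 2.0 requests with the
    method "tools/call"; the tool name and arguments go in params.
    """
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# A KYB check might look like this: the client asks the server to
# verify a counterparty. Tool name and argument key are hypothetical.
payload = build_tool_call(
    request_id=1,
    tool_name="verify_business_identity",
    arguments={"duns_number": "123456789"},
)
print(json.loads(payload)["method"])  # tools/call
```

The point of the protocol layer is that the model never queries the Commercial Graph directly; it requests a named tool, and the server returns verified results as context.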
⚠ Conflict: Spearhead is an Anthropic partner; this item involves Anthropic directly. See disclosures.
Source: D&B / PRNewswire, May 5, 2026
[Infrastructure] Nvidia + Fortinet + Red Hat: The AI Security Stack Gets Its Own Ecosystem Insurer
Nvidia announced partnerships with Fortinet and Red Hat this week pushing its AI platform deeper into the enterprise security and governance stack. With Fortinet, Nvidia is integrating its AI computing infrastructure with FortiAIGate — a next-generation firewall purpose-built for AI workload security. With Red Hat, Nvidia is expanding the Red Hat AI Factory reference architecture to include Nvidia's accelerated compute stack and governance tools. The strategic logic mirrors the $40B equity investment strategy: Nvidia is ensuring its hardware sits at the center of every enterprise AI deployment path — from training and inference through governance and security. For enterprise CISOs evaluating AI security infrastructure, the Fortinet-Nvidia integration establishes a validated reference architecture jointly backed by Nvidia, Fortinet, and Red Hat.
[Product] Virtana Launches AI Factory Observability for Dell AI Factory
Virtana announced AI Factory Observability for Dell AI Factory on May 13, providing a centralized view of AI usage, costs, and governance across AI tools and workloads running on Dell's AI Factory hardware. The product monitors GPU utilization, model inference latency, cost per query, and governance compliance status in real time. For enterprise IT teams, this addresses the same visibility blind spot that Monte Carlo documented in the builder survey two weeks ago: 62% of enterprise engineers cannot trace agent behavior across layers. The Virtana announcement is the infrastructure observability response to the same gap that ServiceNow AI Control Tower addresses at the workflow orchestration layer — different layers of the same stack, converging on the same requirement.
Source: HPCwire / AIwire, May 13, 2026
[Research] Owkin + AstraZeneca: Federated AI Drug Discovery at Pharmaceutical Scale
Owkin and AstraZeneca announced an agreement on May 13 to use Owkin's federated learning platform for AI-assisted drug discovery — enabling AI models to train on patient data held across multiple institutions without that data leaving each institution's secure environment. Federated learning is the data architecture that resolves a specific version of the 5% data readiness problem: the data needed to train clinical AI models is maximally sensitive, locked in silos by regulation, and unavailable for centralized AI training. Owkin's federated approach allows AI to learn from that data in place rather than moving it. For enterprise AI leaders in regulated industries where the most valuable data is also the least accessible, the Owkin-AstraZeneca model is the architectural response to the D&B data readiness gap — not cleaning the data and centralizing it, but building AI that can operate on distributed, privacy-preserved data in its native environment.
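The core move of federated learning can be sketched in a few lines: each site computes a local update on data that never leaves it, and only aggregate statistics travel to the coordinator. A toy illustration of the principle — not Owkin's actual platform — using federated estimation of a mean:

```python
# Toy sketch of the federated principle: raw records stay at each site;
# only aggregate statistics (here, local sums and counts) are shared
# with the coordinating server.

def local_update(site_records):
    """Each site summarizes its own data locally; raw records never leave."""
    return sum(site_records), len(site_records)

def federated_mean(sites):
    """Coordinator combines per-site summaries into a global estimate."""
    total, count = 0.0, 0
    for records in sites:
        site_sum, site_n = local_update(records)
        total += site_sum
        count += site_n
    return total / count

# Three "institutions", each holding its own measurements.
sites = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
print(federated_mean(sites))  # 3.5
```

Real clinical federated learning exchanges model gradients or weights rather than sums, and adds privacy safeguards on top, but the architectural contract is the same: computation moves to the data, not the reverse.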
Source: HPCwire / AIwire, May 13, 2026
COUNTER-SIGNAL
60% Report Measurable ROI. The Data Problem Hasn't Stopped the Market. It Has Capped the Returns.
The 5% data readiness figure is arresting, but the full D&B picture is more nuanced. 60% of businesses now report at least some measurable ROI from AI, and 24% report broad or strong returns. The data problem has not prevented the enterprise AI market from generating value. It has prevented it from generating full value.
The organizations that have captured the 24% "broad or strong returns" position have done so despite imperfect data. They have found use cases where the data that exists is clean enough, current enough, and structured enough for AI to produce reliable outputs — the structured workflows with defined inputs and measurable outputs that the Travelers Insurance and Cisco case studies represent.
The two numbers together describe the opportunity: capture the returns available today while building the data infrastructure that will unlock the returns available tomorrow. The data problem is real. It is also the most solvable problem in enterprise AI, if organizations treat it as a prerequisite rather than a parallel workstream.
FROM THE FIELD
Data Readiness Is the Unglamorous Work That Makes Everything Else Possible.
Eighteen editions of this series have documented the enterprise AI transformation gap from every angle: governance, security, process redesign, workforce readiness, compute infrastructure, sovereign AI strategy, and model reliability. The D&B AI Momentum Survey's 97/5 finding resolves all of those angles into a single foundational explanation. The reason 46% of enterprise AI initiatives are falling short, the reason only 34% of organizations are genuinely reimagining their businesses, and the reason the 16x Codex gap persists are, at bottom, the same: 95% of enterprises are running AI systems on data that was never designed to be AI's operating environment.
This is not a new problem. The enterprise data quality crisis predates AI by twenty years. What AI does is make the cost of these data quality failures visible in a new way — not as reporting errors or analytical inconsistencies, but as AI systems that produce unreliable outputs, fail in production, and erode trust.
The D&B-Anthropic MCP integration is one market response to this. By bringing D&B's verified Commercial Graph directly into Claude via MCP, D&B is making verified, structured, continuously refreshed business identity data available as context for AI-assisted compliance and onboarding workflows. The Owkin-AstraZeneca federated learning agreement is a different response: in regulated healthcare, you cannot centralize the data, so you build AI that operates on distributed, privacy-preserved data in place. Both are practical responses to the same 95% data unreadiness problem.
The governance infrastructure that ServiceNow, Microsoft, Google, and IBM have all shipped in the past three weeks is partly a response to this problem as well. AI Control Tower, Agent 365, and watsonx Orchestrate all need to monitor what agents are doing — which means they need data about agent behavior. Building the observability layer is building the data layer for AI governance.
The practical question for enterprise AI leaders this week: pick your three highest-priority AI use cases and ask whether the data those use cases require is verified, current, and consistently structured. If the answer is no for any of those three dimensions, the use case will underperform. That is the conversation most organizations are not having yet. The D&B survey confirms it is the conversation most of them need to have first.
AK / Spearhead / Building AI systems, not tools