Gartner's new AI leadership research, released April 16, 2026, cuts through the hype cycle with a sharp finding: organizations with successful AI initiatives invest up to four times more of their revenue in foundational areas such as data quality, governance, AI-ready people, and change management than peers with poor AI outcomes.
What Gartner Actually Found
The April 16, 2026 Gartner release draws on a global survey of 353 data and analytics (D&A) and AI leaders conducted from November through December 2025. Two headline numbers stand out.
Successful AI programs spend up to 4x more on foundations. As a percentage of revenue, leaders put four times more into data quality, governance, AI-ready people, and change management than laggards do. This is not raw model spend. It is the work underneath the model: pipelines, metadata, policies, and the organizational muscle to use AI outputs responsibly.
Only 39 percent of technology leaders are confident their AI investments will improve financial performance. That confidence gap is not a survey quirk. A separate Gartner survey of 360 IT leaders in Q2 2025 found only 23 percent very confident in managing security and governance for GenAI deployments. The same executives buying AI tools do not trust the foundations those tools run on.
Gartner Distinguished VP Analyst Rita Sallam frames the implication directly. Through 2030, the D&A leader's mandate is to deliver new trusted data, context foundations, and perceptive intelligence. In plain language, the work of making AI useful now sits upstream of the model.
Why Data Foundations Determine AI Outcomes
The 4x finding is not surprising to anyone who has shipped an AI system into production. It is, however, a useful corrective to the narrative that better models alone will solve enterprise AI.
Models inherit the data they run on. A frontier model fed inconsistent customer records, stale product catalogs, or siloed operational data will produce answers that look confident and are quietly wrong. The problem is not the model. It is the input layer.
Context is now mission-critical. Gartner's coverage emphasizes that agents cannot function autonomously without high-quality context and absolute trust. Semantics, metadata, and governed relationships between data objects act as the brain for AI. Without that context layer, agentic systems drift, confabulate, or stall. We wrote about the underlying issue in the five most common data problems that derail AI projects.
Workforce readiness is part of the foundation, not a separate program. Gartner deliberately includes AI-ready people and change management alongside data quality and governance. That framing matters. A clean data warehouse that no one trusts or uses is not a foundation. It is a dashboard nobody opens.
The Confidence Gap Behind the Numbers
The 39 percent confidence figure is the more uncomfortable finding for executive teams. Most organizations have now spent 18 to 24 months on GenAI experimentation. If six in ten technology leaders are still unsure their AI spend will move the P&L, something structural is wrong.
This matches the pattern in PwC's April 13, 2026 AI Performance Study, which found that 20 percent of companies are capturing 74 percent of AI economic value and that leaders invest 2.5 times more than peers. Two different research houses, looking at two different angles of the same problem, land in the same place. The gap between AI leaders and laggards is widening, and what separates them is preparation, not ambition.
Our take: The 4x investment ratio is the clearest single diagnostic we have seen for predicting whether an AI program will scale or stall. If your organization is spending heavily on models and lightly on data infrastructure, governance, and people, you are buying a faster engine for a car with no fuel line.
What This Means for Your AI Budget
For CIOs, CDOs, and operating executives planning the next twelve months of AI spend, three implications follow directly.
The ratio matters more than the total. Gartner's 4x finding is not an argument to spend more. It is an argument to rebalance. Leaders are not writing bigger checks at every line item. They are shifting proportion toward foundations. Plenty of mid-size organizations can match the leaders' spending mix on a smaller budget.
Foundational spend is not glamorous and is rarely the loudest ask. The business case for a new model tier writes itself. The business case for replatforming a metadata layer does not. Budget discipline now means funding the unsexy work first and treating it as the condition for model-layer ROI, not a competitor to it. Moving from manual exports and brittle ETL to production-grade data pipeline architecture is usually the first measurable step.
Foundations unlock the use cases the next wave of AI will require. Agentic workflows, autonomous decision loops, and cross-system automation all depend on trustworthy, well-governed data. Without that substrate, the 2026 and 2027 class of agent-based products simply will not work inside your environment.
How to Close the Investment Gap
Rebalancing is a multi-quarter effort. The practical sequence looks like this.
- Benchmark your current allocation. Split your AI-related spend into model and tool layer, data and infrastructure layer, governance and compliance layer, and people and change layer. Most organizations we work with discover the first category is 60 to 80 percent of spend and the others are residuals. Gartner's finding suggests leaders look closer to even.
- Audit data readiness for the next three use cases on the roadmap, not the last one. A common mistake is fixing data for a pilot that already shipped. Instead, pull the three planned use cases forward and assess whether pipelines, metadata, and access controls actually support them. If not, scope that work first.
- Invest in governance as enablement, not overhead. Gartner's 23 percent figure for GenAI security and governance confidence is a warning. Governance that blocks work is theater. Governance that lets teams deploy safely, quickly, and with audit trails is infrastructure. The practical mechanics live in a governance framework for growing companies.
- Budget AI-ready people as part of the foundation. Training, role redesign, and change management belong in the data and analytics line, not in HR's discretionary pool. If people cannot interpret AI outputs, challenge them, and incorporate them into workflows, the model spend will not translate into outcomes.
- Set a target mix and review quarterly. Publish an internal target for the percentage of AI-related spend going to foundations. Review it every quarter alongside program metrics. The number itself is less important than having leadership actually track it.
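The benchmarking and target-mix steps above reduce to simple arithmetic, and it can help to make that concrete. The sketch below computes each layer's share of total AI-related spend and flags layers that fall short of an internal target mix. The layer names, dollar figures, and target split are illustrative assumptions, not Gartner's numbers.

```python
# Illustrative sketch of the benchmark-and-review steps: compute each
# layer's share of AI-related spend and compare it against a published
# internal target mix. All figures below are hypothetical examples.

def spend_mix(spend):
    """Return each layer's share of total spend as a fraction of 1.0."""
    total = sum(spend.values())
    return {layer: amount / total for layer, amount in spend.items()}

def gaps_vs_target(actual, target):
    """Percentage-point gap per layer (positive = underfunded vs target)."""
    return {layer: target[layer] - actual[layer] for layer in target}

# Hypothetical quarterly spend (in $k), split into the four layers
# named in the article.
current = {
    "models_and_tools": 700,
    "data_and_infrastructure": 150,
    "governance_and_compliance": 75,
    "people_and_change": 75,
}

# Hypothetical internal target: a more even mix, per the rebalancing
# argument. The exact split is a leadership decision, not a benchmark.
target = {
    "models_and_tools": 0.40,
    "data_and_infrastructure": 0.30,
    "governance_and_compliance": 0.15,
    "people_and_change": 0.15,
}

mix = spend_mix(current)
for layer, gap in gaps_vs_target(mix, target).items():
    flag = "UNDERFUNDED" if gap > 0.05 else "ok"
    print(f"{layer:26s} actual {mix[layer]:5.1%}  target {target[layer]:5.1%}  {flag}")
```

Run quarterly, a report like this turns the "set a target mix and review it" step from a slide into a recurring number leadership actually sees.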
What Not to Do
Do not treat this as a one-time data cleanup. Foundational spend is recurring. Data pipelines drift, governance needs refresh, and workforce readiness decays as tools evolve. The 4x finding is a run-rate pattern, not a project.
Do not collapse all foundation spend into the data team. Governance, change management, and workforce enablement cut across IT, legal, HR, and the business. Treating foundations as a single department's line item is how they get underfunded.
Do not wait for an AI strategy refresh to start rebalancing. Gartner's 2030 horizon sounds distant, but the organizations pulling ahead right now are already spending on foundations. Every quarter you delay the rebalance widens the gap against competitors who started earlier. For SMBs and mid-market companies weighing where to begin, the phased approach to AI implementation is a practical starting point.
Key Takeaways
- Gartner's April 16, 2026 research found organizations with successful AI initiatives invest up to 4x more of revenue in data quality, governance, AI-ready people, and change management than laggards.
- The finding came from a survey of 353 D&A and AI leaders conducted November through December 2025.
- Only 39 percent of technology leaders are confident their current AI investments will improve financial performance.
- Only 23 percent of IT leaders surveyed in Q2 2025 were very confident in GenAI security and governance.
- The practical lesson is to rebalance AI budgets toward foundations, not to spend more in absolute terms. Model-layer ROI depends on data, governance, and workforce readiness being funded first.
Not sure where data foundations fit in your AI roadmap? Book a discovery call and we will help you figure that out, no strings attached.