
The 17% Problem: Why AI stalls without data readiness

Cat Carroll

Head of Legal Operations, iManage

There's a stat from the iManage Knowledge Work 2026 Benchmark Report that should stop every legal leader in their tracks: 85% of professional services firms are piloting or implementing AI, but only 17% have embedded it into daily operations. The gap between AI ambition and AI readiness is real — and in legal, it's mostly a data problem.

Legal leaders aren't stuck because AI technology is immature. They're stuck because their data isn't ready.

According to the iManage Knowledge Work 2026 Benchmark Report, only 17% of organizations have operationalized AI in a way that delivers real, scalable impact. Most legal departments have spent years executing and storing documents under inconsistent standards: counterparties filed under different names, executed agreements mixed with drafts, contracts scattered across multiple systems. When AI is deployed on top of this disorder, it only amplifies the friction and risk.

The real barrier to AI success

During a recent webinar discussing these findings, I was struck by three structural problems that emerged repeatedly from panelist perspectives and audience feedback:

  • Inconsistent metadata: Counterparties filed under legal entity names, trade names, abbreviations, and subsidiaries. A human would apply judgment. AI treats them as distinct entities, breaking critical document relationships.
  • Poor document classification: Executed agreements mixed with drafts, amendments, and signature pages with no clear relationships. The AI surfaces all of them with equal authority.
  • Fragmented repositories: Contracts scattered across document management systems, CRM platforms, procurement tools, and shared drives make it difficult for AI to get the full picture. An AI analyzing 90% of active agreements will still produce systematically wrong portfolio-level insights if it's missing 10% of critical data.
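To make the first problem concrete, here is a minimal sketch of why inconsistent counterparty naming breaks document relationships, and how normalization collapses the variants. The suffix list and rules are illustrative assumptions for this example, not an iManage feature:

```python
import re

# Illustrative list of legal-entity suffixes to strip (an assumption for this sketch)
LEGAL_SUFFIXES = {"inc", "incorporated", "llc", "llp", "ltd", "limited",
                  "corp", "corporation", "co", "plc", "gmbh"}

def normalize_counterparty(name: str) -> str:
    """Lowercase, strip punctuation, and drop trailing legal-entity suffixes."""
    tokens = re.sub(r"[^\w\s]", " ", name.lower()).split()
    while tokens and tokens[-1] in LEGAL_SUFFIXES:
        tokens.pop()
    return " ".join(tokens)

# Without normalization, an AI sees three distinct entities;
# with it, all three variants collapse to one canonical key.
variants = ["Acme Corp.", "ACME Corporation", "Acme, Inc."]
keys = {normalize_counterparty(v) for v in variants}
# keys == {"acme"}
```

In a real deployment this logic would live in the intake workflow, so the canonical name is enforced at filing time rather than repaired afterward.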

The work that comes first

Based on the report, the legal leaders who have successfully moved beyond experimentation do three things before deploying AI:

  1. Data normalization: Establish consistent standards through templates and embedded metadata requirements. When the system won't accept a document without a counterparty name in a specified format, consistency becomes automatic.
  2. Taxonomy development: Build a classification system that distinguishes document type from document status. This also forces resolution of legacy ambiguities — e.g., what's the difference between a master services agreement and a professional services agreement? It matters for AI accuracy.
  3. Historical remediation: Apply current standards to legacy documents, tiered by risk. Prioritize active contracts above material dollar thresholds, auto-renewal provisions, and high-exposure jurisdictions. The goal isn't perfection. It's ensuring the high-stakes portion is trustworthy before AI draws conclusions from it.
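The risk-tiered prioritization in step 3 can be sketched as a simple triage rule. The dollar threshold, jurisdiction list, and tier boundaries below are assumptions for illustration, not prescribed values:

```python
from dataclasses import dataclass

# Assumed thresholds for this sketch; real values depend on the portfolio
MATERIAL_VALUE_THRESHOLD = 1_000_000
HIGH_EXPOSURE_JURISDICTIONS = {"US-CA", "US-NY", "UK"}

@dataclass
class Contract:
    name: str
    active: bool
    value: int
    auto_renewal: bool
    jurisdiction: str

def remediation_tier(c: Contract) -> int:
    """Lower tier = remediate first. Inactive contracts go last."""
    if not c.active:
        return 3
    if c.value >= MATERIAL_VALUE_THRESHOLD or c.auto_renewal:
        return 1  # material value or auto-renewal: highest stakes
    if c.jurisdiction in HIGH_EXPOSURE_JURISDICTIONS:
        return 2
    return 3  # everything else can wait

contracts = [
    Contract("MSA-042", True, 2_500_000, False, "US-TX"),
    Contract("NDA-007", True, 10_000, False, "DE"),
    Contract("SVC-019", True, 50_000, True, "UK"),
]
queue = sorted(contracts, key=remediation_tier)
# queue order: MSA-042 and SVC-019 (tier 1), then NDA-007
```

The point of the tiering is exactly what the report describes: the high-stakes slice of the portfolio gets trustworthy metadata first, so AI conclusions drawn from it are reliable even before the long tail is cleaned up.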

Why this matters

This pre-work can be difficult to sell internally. The timeline is months, and the output is better metadata, not a flashy new capability.

But here's why it matters: Without a strong knowledge foundation, AI doesn't just underperform — it produces outputs that carry the authority of systematic analysis without the reliability to back it up. And this isn't an iManage-only finding. A recent Harvard Business Review Analytic Services study found that just 7% of enterprises consider their data completely ready for AI, while 73% say they still struggle with AI data preparation. The gap is structural, not technical. 

In legal work, that has concrete consequences: missed renewals, untracked auto-renewal provisions, liability caps that reflect inaccurate views of the portfolio. And the cost of deferring this work doesn't stay flat — it compounds. Every document executed without consistent metadata is another entry in the remediation backlog.

The path forward

The 17% who have operationalized AI successfully did not skip the foundation work. They invested in knowledge work maturity first — building the governed, trustworthy information environment that AI depends on.

Before committing budget to AI tools, ask yourself: Do you have a single authoritative repository? Is counterparty naming normalized and enforced in your workflow? Do you have a system-enforced taxonomy? What percentage of active contracts have complete metadata?
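The last of those questions can be made measurable with a quick audit. This is a hedged sketch assuming contracts are exported as rows with a handful of required fields; the field names are hypothetical, not a prescribed schema:

```python
# Hypothetical required-metadata schema for the audit (an assumption of this sketch)
REQUIRED_FIELDS = ("counterparty", "effective_date", "expiry_date",
                   "document_type", "status")

def metadata_completeness(contracts: list[dict]) -> float:
    """Percentage of active contracts with every required field populated."""
    active = [c for c in contracts if c.get("status") == "active"]
    if not active:
        return 0.0
    complete = sum(1 for c in active
                   if all(c.get(f) for f in REQUIRED_FIELDS))
    return 100.0 * complete / len(active)

sample = [
    {"counterparty": "acme", "effective_date": "2024-01-01",
     "expiry_date": "2026-01-01", "document_type": "MSA", "status": "active"},
    {"counterparty": "", "effective_date": "2023-05-01",
     "expiry_date": None, "document_type": "NDA", "status": "active"},
]
pct = metadata_completeness(sample)
# pct == 50.0
```

Running a check like this against an export from each repository gives a concrete baseline number to track as remediation progresses.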

If you can't answer these questions confidently, your team may struggle to operationalize AI right now — and that's okay. It's fixable. But it has to come before the tool.

Want the full picture? Jump straight into our webinar The 17% Problem: Why most AI investments aren't delivering. You can watch it on demand here. And for the underlying research, the iManage Knowledge Work 2026 Benchmark Report is worth your time. It benchmarks where your organization stands on the maturity curve that separates AI experimentation from AI impact.

Cat Carroll

Head of Legal Operations

Cat Carroll is Head of Legal Operations at iManage. She's passionate about legal tech and process optimization, and loves being part of the amazing legal ops community. When not thinking about all things DMS, AI, or CLM, you can find her spending time with her two adorable dogs.

"
