There is a particular kind of confidence that surrounds AI adoption, and everybody wants it. Whether for marketing, sales, or legal, the budgets go up, and press releases of newly deployed AI-powered tools go out. But somewhere between the launch event and the eighteen-month check-in, the same story quietly starts to unfold.
It is not a new story. But in LegalTech, where the market has expanded faster than most teams’ ability to evaluate it, the pressure to act before the strategy is clear has never been higher.
A track record that speaks loud and clear
Let us go back to the 1990s, when ERP systems were going to unify the enterprise. Organizations committed serious money to these projects and deployed broadly, only to discover that the technology alone could not carry what a weak implementation strategy refused to support. The CRM era followed the same logic in the early 2000s.
Digital transformation repeated it across the 2010s, often with a third-party consulting firm and a timeline nobody wants to revisit.
McKinsey found that roughly 70% of large-scale transformations fail to meet their stated objectives, and in most cases the tools were not the problem. The organizational conditions for success were simply never built.
For ERP and CRM alike, the investment logic was the same each time, and so was the gap between the announcement energy and the outcomes. AI is no different.
With AI adoption the story is the same, just louder
These patterns repeat with each new wave of technology. By the time teams realize little has actually changed, the investment is already locked in.
Thinking about adoption without impact
AI adoption runs the same cycle with even larger budgets. The expectations are just as high, if not higher, because they build on more market noise than operational clarity.
Here is the hard part: the features and the technology often shine brighter than the actual answer to one's needs. For legal professionals, the situation is mixed: on one side, teams can be blinded by the overpromises of AI, while at the same time showing strong resistance to change. For General Counsel, who tend to lead such transformation processes, the challenge is to find a middle ground between excitement, expectations, and reality (which means addressing real needs).
The gap between rolling something out and getting something back is rarely a technology problem; more often, it is the result of letting expectations override logic.
Letting the pressure set expectations
Legal feels this more acutely than most functions. AI adoption comes up in every board review now, and with other business units embracing the AI frenzy faster and louder, the pressure grows. In addition, the LegalTech market has expanded considerably, giving legal teams more options than ever. Before you know it, a tool has been chosen and deadlines are set for deployment, but the strategy behind it is nowhere to be found. The instinct is understandable: move fast, move wide, deploy something, show momentum. But that instinct is exactly what the historical record warns against.
It is the same organizational discipline failure that wrote every previous post-mortem. AI tools can only perform as well as the environment they are deployed into. There must be a plan, clean data, structured records, and specific use cases, but many organizations are not yet building that environment for legal teams.
Key learnings from the pattern
Rupali Patel Shah, Head of Legal Solutions at DiliTrust, puts it directly in a recent article:
The problem has never been the tool itself; it has been everything the organization chose not to build around it.
Patel Shah mapped out a four-step failure sequence:
While not unique to AI, this sequence comes almost naturally to organizations and teams. It is the result of everything mentioned above: letting pressure define the importance of the tool instead of focusing on real impact. As a result, teams respond to external forces rather than to the problem.
Breaking the pattern
Starting with the right question: The Watson MD Anderson case
There are many examples in modern tech history that illustrate this failure sequence. In 2013, the MD Anderson Cancer Center announced a partnership with IBM to deploy Watson, with a stated mission to “eradicate cancer.” The ambition and budget were real: over $62 million invested across six years.
The project was cancelled before it ever reached clinical use. What was missing was the answer to one question: what, specifically, are we trying to do better, and how will we know when we have done it?
An independent audit found the system could not perform reliably for its intended purpose. So what was the problem: the tool, or the absence of a precise, measurable challenge?
Bold missions do not replace problem statements
That is the pattern: the tool is chosen for its reputation and hype, and teams work under that pressure, not necessarily because it is the right fit. The investment is made before the baseline exists. And when results do not follow, the technology absorbs the blame that belongs to the decision-making process.
Nothing about the current wave of AI adoption suggests organizations have learned this lesson. The pressure, the announcements, the energy, and the excitement are the same, but in too many cases the question that should come first is never asked. Before opting for a LegalTech solution, ask your team:
Breaking the pattern does not require a new methodology. It requires a different starting point: asking the right question, setting the right expected answer, and ultimately paving the way for intentional AI adoption.
Recognize the pattern, ask the right questions and plan
GCs who see this pattern clearly are better positioned than those reacting to the noise. It is about moving with a clarity that reactive organizations rarely manage: define the problem before funding the solution, invest in people and processes with the same weight as the platform, and measure from day one. Building this foundation is not a one-time exercise; it requires a modern governance approach that can hold over time.
The organizations that get AI adoption right will not be the ones that moved first. They will be the ones that recognized where they were in the sequence and made a different call.
The tech is not your enemy, and it does not have to be the hard part; history has told us as much.
