By Rupali Patel Shah, Head of Legal Solutions, DiliTrust
We have been here before.
The ERP rollout that was going to unify the entire organization. The CRM deployment that was going to transform the sales process. The digital transformation initiative that consumed three years, two consulting firms, and a budget that no one likes to talk about anymore.
Each time, the story follows the same arc: a significant spend, a confident announcement, and then crickets, until the post-mortem when leadership blames the technology. Spoiler: it is not the technology. No technology can be held accountable, so how can it reasonably be at fault? It is almost always the failure to implement well, to invest proportionally, and to bring people along through the change.
Right now, all anyone wants to talk about is AI and the future of work, and the conversation has an urgency to it that is hard to ignore. AI is genuinely powerful technology, but it is still new, still evolving, and in many ways we are still discovering what it is capable of. Think of Jack-Jack from The Incredibles: clearly powerful, possibly of the mythical proportions LinkedIn keeps proclaiming, but still figuring out the full range of what those powers actually are. It is absolutely transformative technology; we just do not fully know how yet. And organizations are struggling with that ambiguity, many of them having been practically bullied into investing in the future of work before they have a clear picture of what they are trying to achieve or how they will measure it.
The challenge of adopting transformative technology is not new, though. Not for companies, not for industries, and not for society. We have navigated seismic technological shifts before, and the approach that leads to failure has remained remarkably consistent across all of them.
A colleague recently shared the observation that history may not repeat itself, but it rhymes, and it has stuck with me because of how precisely it describes what we are watching unfold right now across legal departments, operations teams, and boardrooms. The four-step formula for failure is playing out on new technology in real time, and when things go wrong, it will not be because the AI was insufficiently capable. It will be because the organization never built the process around it, never invested meaningfully in incremental change, and never communicated clearly about what success was supposed to look like. The problem has never been the tool itself; it has been everything the organization chose not to build around it.
The four-step formula for failure
The pattern is worth naming plainly, because recognizing it is the first step toward breaking it.
1. Overinvest in the tool. The budget flows to the license, the vendor relationship, and the announcement, while the implementation plan becomes an afterthought and the change management budget shrinks to a rounding error. Organizations often commit significant capital before they fully understand the total cost of ownership, which is a classic budgeting mistake that quickly calcifies into a sunk-cost trap. The reasoning becomes circular in a way that is difficult to escape: the tool was so expensive that replacing it feels unthinkable, even when the evidence for doing so is clear.
2. Underinvest in people and process. Companies behave as though the technology has already replaced human effort before it has, and they assume, incorrectly, that process will naturally follow the tool once it is deployed. It will not. A tool is an object; it cannot be held accountable for outcomes. Accountability requires people, and responsibility requires structure. Without someone genuinely responsible for the outcome, there is no reliable way to know whether anything meaningful is getting done at all.
3. Expect transformation. Organizations do get transformation, just not the kind they had in mind. Real transformation takes time, discipline, and a concrete definition of what success actually looks like, none of which can be purchased alongside a software license. When transformation is the stated goal but not the operational plan, what tends to emerge instead is disruption: workflows interrupted, teams frustrated, and leaders searching for someone to hold responsible.
4. Get disappointment. Adoption stalls, ROI proves elusive, and someone loses their job, usually the person who championed the tool rather than the person who approved the budget.
This cycle has played out across every major wave of technology for decades, and with AI, organizations are repeating it with higher stakes, bigger budgets, and expectations so inflated they would make even the most optimistic CTO uncomfortable. A hammer cannot build a house without a blueprint, a skilled hand, and a clear sense of what is being constructed, and AI is no different in that regard. The tool is more complex, the promises are grander, and the failure modes are harder to detect until they are already entrenched.
The transformation trap
The impulse to move broadly and ambitiously is understandable. AI feels like a generational shift, and no organization wants to be the one that hesitated while competitors moved forward. So the instinct is to deploy widely, announce boldly, and call it transformation.
The problem with defining success as transformation is that transformation, without an operational definition, is unmeasurable, and when something is unmeasurable it cannot be defended or iterated upon. It becomes a declaration rather than a strategy.
I have seen this play out more than once: a company deploys AI tools across multiple departments simultaneously, with different platforms, no shared data governance framework, and no agreed-upon baseline for what good looks like. Eighteen months later, when leadership asks whether it is working, no one in the room can answer that question with confidence, not because the tools underperformed but because the organization never established what performance was supposed to look like before they started.
The organizations most eager to transform are often the ones least equipped to do it, because real transformation requires discipline alongside ambition, and the willingness to slow down before scaling up. That particular quality of patience is genuinely difficult to cultivate when the board is raising AI in every quarterly review and competitors seem to be sprinting forward. But the organizations that skip this step do not avoid the hard work; they simply defer it to a much more expensive and public moment.
So what is the right way to evaluate, implement, and deploy technology across an enterprise? It begins with replacing the four-step formula for failure with something grounded in how change actually works.
The four-step formula for success
Each of the following steps is a direct response to the failure it replaces. None of them are complicated, but all of them require intention before they require investment.
1. Define the problem before you fund the solution. Before any conversation about platforms, vendors, or budgets, the organization needs a precise and specific answer to one question: what, exactly, are we trying to do better? Improving efficiency is a category, not a problem statement. The answer needs to be specific enough to measure, time-bound enough to hold someone accountable for it, and narrow enough that a focused solution could realistically address it. If that answer is not available in concrete terms, the organization is not ready to purchase a tool.
2. Invest in people and process with the same seriousness as the platform. The way a budget is structured sends a signal about what the organization believes matters, and people interpret that signal accurately. When training, change management, and process redesign receive only a fraction of the attention and resources directed at the technology itself, the people responsible for adoption will respond in kind. Meaningful implementation means committing real resources to helping people understand not just how to use the tool but why the workflow is changing and what they are responsible for within it. People and process are not supporting elements of a technology rollout; they are the substance of it.
3. Set milestones, not missions. Replacing “transform the organization” with a sequence of specific and measurable checkpoints creates something that transformation theater cannot: an honest signal about whether the approach is working, early enough to make adjustments before the investment becomes the argument for continuing regardless of results. What does success look like in 90 days? In six months? Which metric moves, by how much, and by when?
4. Build accountability into the structure from day one. Every implementation needs one owner, someone accountable for outcomes rather than just deployment, someone who will still be in the room at the six-month check-in with data and an honest read on what is and is not working. Accountability is not punitive in this context; it is what separates an implementation that learns and adapts from one that quietly deteriorates while everyone avoids the conversation.
These four steps are not revolutionary. They are operational discipline applied to a new context, which is precisely the point. AI may be a new technology, but the management challenge it presents is one organizations have faced before.
The case for starting small
With that foundation in place, the implementation strategy itself becomes clearer, and it starts smaller than most organizations are comfortable admitting.
Starting small has a reputation problem. It reads as timidity, as a lack of vision, or as the position of organizations not yet ready to commit. In practice, starting small is how you learn what works before you invest in scaling what does not, and it is the only approach that has a consistent track record of producing results that hold up over time.
In practice, it looks like one defined use case, not “AI for legal operations”, which is a category rather than a problem, but something specific: we want to reduce contract review turnaround time by 40% for a particular repeatable contract type, within this team, by this date. That level of specificity surfaces every important question before a dollar is committed.
It looks like one team, close enough to the problem to give honest operational feedback and senior enough to drive real adoption rather than polite compliance. It looks like one success metric, something measurable and time-bound that actually tells you something true about whether the approach is working. And it looks like one owner, for the reasons already outlined.
The discipline of that specificity forces the questions that matter most before the contract is signed: What problem are we solving? Who will use this, and how will their workflow change? What does success look like, and what does it feel like to the people doing the work? What data does this require, and is that data clean, governed, and accessible? If those questions do not have solid answers before the contract is signed, the organization is not purchasing a solution so much as funding a future problem.
Intentionality is the strategy
Intentional does not mean slow; it means deliberate, which is a distinction worth holding onto. Moving quickly and moving with clarity are entirely compatible with each other, while moving broadly and ambiguously tends to produce results that are difficult to interpret and even harder to defend.
Intentional implementation also means investing proportionally, which is a conversation most organizations sidestep. When the budget skews heavily toward the technology and lightly toward training, change management, and process alignment, the implementation is already compromised, not because the technology is insufficient but because the investment structure signals what leadership believes matters. When training and process are treated as line items rather than commitments, the people responsible for adoption treat them accordingly.
Before any AI deployment, three questions deserve real answers, not aspirational ones or slide-deck ones, but operational and honest ones: What, specifically, are we trying to do better? Who is responsible if it does not work? How will we know whether it is working? Vague answers to those questions produce vague outcomes.
The compounding effect of getting it right
When organizations resist the pressure to do everything at once and commit to getting one thing genuinely right, something predictable happens: the next use case is easier to execute, and the one after that easier still.
The reason is not that the technology has improved. It may be identical. The reason is that the organization has built something more durable than a tool: the operational knowledge of how to implement well, a governance framework that people actually use, a change management approach that does not exist only as a document, and the kind of cultural trust that comes from demonstrating, concretely, that this works when the right conditions are created.
That compounding effect is where the real AI dividend lives, not in the launch announcement or the features list, but in the patient and unglamorous work of getting the fundamentals right and building from there. It is available to any organization willing to resist the pull of transformation theater long enough to do that work.
Better decisions. Sustainable outcomes.
Every organization is making a choice right now, whether or not they are framing it in those terms. Some are building the foundation for AI that compounds over time, delivering defensible value and making their people genuinely more effective. Others are moving through the same four-step cycle that has produced the same disappointed post-mortem across every major wave of technology, directing blame at a tool that was never the actual source of the problem.
The technology has never been the hard part. The hard part is the discipline to define the problem before deploying the solution, to invest in people and process with the same seriousness as the platform, to measure what matters honestly, and to start with one thing, do it well, and earn the right to scale.
Starting small and being intentional about defining the problem before deploying the solution is not a constraint on ambition. It is the path that actually leads to transformation.