
17 April 2026
How AI Will Define the Next Era of M&A
Three weeks into an integration, the business lead accountable for extracting value from an acquisition is staring at a synergy model that no longer reflects reality. Two business units are fighting over which technology platform will survive. The acquired company's data is in worse shape than diligence suggested. And the integration thesis that looked compelling at signing is quietly eroding as the market shifts away from the assumptions underwritten at the time.
None of these problems are analytical. All of them, however, are potentially fatal to the deal's value. The synergy model can be rebuilt in an afternoon, but the platform decision cannot: it is not a question of architecture, it is a question of power, identity, and whose team gets cut. The data remediation can be scoped and costed, but the trust required to get the acquired company’s best engineers to stay cannot be scoped at all. The eroding thesis requires someone with the standing, the judgment, and the nerve to walk into a board meeting and say: the number we underwrote is wrong, and here is what we should do about it.
This is the tension at the centre of M&A transformation right now. The analytical work that used to consume the first ninety days of any integration can now be compressed from weeks to hours. AI handles it competently and, in many cases, better than a team of consultants working from a playbook. However, the work that actually determines whether a deal delivers its value has never been analytical. It has always been about judgment, alignment, and the willingness to make consequential decisions under genuine uncertainty.
The End of Analytical Scarcity
Historically, M&A transformation has been organised around a specific form of scarcity: analytical throughput. A complex carve-out required weeks of effort to map Transitional Service Agreements (TSAs), model the standalone cost base, design target operating models to determine stranded costs, and build implementation plans granular enough to execute against (workstreams, activities, tasks). This work required smart people, long hours, structured methodologies, and ultimately steering committee and board packs for sign-off. The scarcity of analytical capacity justified large teams, extended timelines, and significant fees.
That scarcity is gone. A synergy model that took two weeks to build can now be drafted and stress-tested across multiple scenarios in a single working session. An integration risk assessment that required three weeks of interviews, workshops, and consolidation can be generated as a structured first draft in hours, leaving the experienced operator to refine, challenge, and contextualise rather than build from zero. A TSA framework that went through six drafts as scope shifted can be regenerated with each scope change, because the cost of regeneration has collapsed to nearly nothing.
The tools that do this are not proprietary. They are not locked behind enterprise contracts or specialist vendors. Claude, ChatGPT, Gemini, and other foundational models are available to anyone with a browser. Deal frameworks, integration playbooks, scenario models, risk taxonomies: all of it can be produced at a quality level that would have been considered excellent consulting output five years ago, by anyone who knows how to structure a prompt and evaluate the result.
The M&A transformation market priced analytical throughput as though it were rare and difficult. It was. Now it is neither, and the firms and practitioners still pricing their services on analytical production (the volume of slides, the thickness of the integration plan, the number of workstreams on the Gantt chart) are selling a commodity at a premium, in a market that is beginning to notice.
Where the Compression Happens
Consider the moments in a deal lifecycle where time destroys value. Not in the abstract, but specifically: the three-week gap between identifying a synergy risk and having a quantified options paper ready for the steering committee. The two weeks it takes to assemble the first integration risk assessment into something that translates operational findings into commercial outcomes. The five drafts of a TSA framework as scope shifts with each new discovery about shared infrastructure. The two days every month that the best people on the integration team spend assembling a board report that is already stale by the time it reaches the boardroom.
AI compresses every one of these to a structured first draft that an experienced operator can pressure-test, refine, and make decision-ready in hours rather than days. The quality of the first draft is high enough that the human work shifts from construction to judgment: not “build the model” but “challenge the model’s assumptions.”
AI also does things that human teams have never been able to do at scale, regardless of headcount. Continuously monitoring execution data against the integration plan and surfacing emerging variances in real time, before they become crises. Stress-testing the synergy model against four or five macro scenarios simultaneously and showing the steering committee not just the base case but the distribution of outcomes. These are current capabilities, and they change the quality of decision-making available to leadership teams, not just the speed at which a PowerPoint deck gets assembled.
Compressing the analytical work is table stakes. What matters is what you do with the time and capacity that compression frees up.
Human Judgment Defines Success

The hardest problems in any integration, separation, or joint venture design are not problems that better analysis solves. They are problems that require a specific kind of human capability that AI does not possess.
Who decides which technology platform survives a merger? On paper, this is a technical question. In practice, it is a decision shaped by human and political realities, including:
- which business unit's leadership has more political capital
- which platform’s engineering team is more likely to walk if their system gets deprecated
- which CTO has the board’s ear
- which choice signals to the acquired company’s employees that they are valued or expendable
AI can model the total cost of ownership of both platforms. It cannot navigate the room where the decision actually gets made.
The same is true for talent retention. How do you keep the acquired company’s best people when reporting lines are unclear, equity is being restructured, and the former CEO has just exited? The answer does not come from analysis. It comes from trust built through direct conversation. It requires judgment, including the ability to read what sits beneath surface language:
- whether a stated concern about role clarity is really about compensation
- whether it is about status, influence, or loss of identity
- or whether the individual has already decided to leave
It also requires knowing when a retention package genuinely solves the problem and when it merely delays an inevitable departure.
Then there is the most difficult moment of all. When the deal thesis starts to erode, when the market moves, when a key customer relationship turns out to be weaker than diligence suggested, when the cost base is structured differently than the model assumed: who has the courage and the organisational standing to escalate that to the board? Not by presenting a revised model, which AI can generate in minutes, but by standing in front of a group of directors who approved the deal at a specific price and telling them that the value they underwrote is not going to materialise in the form they expected. That is a judgment call that carries career risk. No model makes it for you.
The human capability in M&A transformation (leading through ambiguity, building coalitions, reading rooms, and deciding when to push and when to pause) is what ultimately determines whether a deal delivers. Everything else is infrastructure.
Two Paths for the Advisory Market
This is where the M&A advisory market is heading: a split between practitioners who use AI to compress the analytical work and then apply experienced judgment to the decisions that matter, and practitioners who use AI to produce more analytical output, faster, without upgrading the judgment layer at all.
The first group treats AI as a capability amplifier. Every hour saved on model construction is an hour redirected to stakeholder alignment, thesis pressure-testing, or the difficult conversations that integration leadership requires. The analytical compression is not the product. It is the means by which experienced operators get to the judgment-intensive work faster and spend more time on it. The client gets better decisions, made sooner, with greater confidence in the underlying analysis.
The second group treats AI as a cost reduction tool. The same playbook, the same waterfall plans, the same steering committee decks, just produced with fewer people in less time. The output looks similar and the speed improves, but the quality of the decisions embedded in the output does not, because nobody upgraded the judgment that informs those decisions. The team generates AI-produced analyses without the experienced eye to recognise when those analyses are misleading, insufficiently nuanced, or disconnected from the organisational reality of the deal.
A synergy model rebuilt in two hours is worthless if nobody in the room has the pattern recognition to know that the assumptions are optimistic by 30% based on how these kinds of integrations actually play out. An integration plan generated overnight is dangerous if the workstream sequencing ignores the political dependencies that will stall execution in month three. Faster production of the wrong plan does not create value. It destroys it with greater efficiency.
The market will sort these two groups. Clients are rational. When one advisory team consistently delivers better outcomes, not faster slides, but better decisions that lead to better results, the differentiation becomes visible in ways that are difficult to unsee. The CFO notices. The PE operating partner who sits across four portfolio companies notices, and the reallocation of advisory spend follows.
An Emerging Operating Model
In my experience, nobody has yet fully evolved the operating model for AI-augmented M&A transformation, although the contours are becoming clearer with each engagement as the underlying models and AI-enabled processes are built. The framework that is emerging maps the deal lifecycle to a deliberate split between AI-accelerated work and human-led judgment.
In the assessment phase, AI rapidly maps the option space, surfaces patterns from comparable transactions, and stress-tests the assumptions that underpin the integration thesis. Work that used to consume two weeks of a senior team’s capacity becomes a two-hour synthesis that the team can interrogate rather than build. The human role shifts from “create the analysis” to “challenge the analysis and decide what it means for this specific deal.”
In the structuring phase, AI organises information, builds decision trees, and models scenarios at a speed that allows real iteration. Bespoke integration plans, TSA frameworks, and governance models can be drafted, challenged, and redrafted in the time it used to take to produce a single version. The human role is to apply commercial judgment to the structure, not “what does the framework say” but “what will actually work given the politics, the talent risk, and the client’s appetite for change.”
In the decision phase, AI generates options and humans choose between them. The decisions that determine whether a deal delivers (which synergies to pursue and which to defer, how aggressively to integrate versus preserve, when to escalate a deteriorating thesis) are judgment calls that require a practitioner who has seen enough deals to recognise the patterns and enough organisational reality to know which patterns apply here.
In the mobilisation phase, humans lead. The work of translating a strategy into implementation, of building alignment across two organisations, and of sustaining momentum through the inevitable setbacks is leadership work. AI contributes administrative compression that frees leaders to spend more time leading and less time assembling status reports, but the mobilisation itself is entirely human.
In the adaptation phase, AI-powered feedback loops continuously monitor whether execution remains aligned to strategy. Variance detection, milestone tracking, early warning indicators: all of this can be automated in ways that give leadership better visibility than any monthly reporting cycle ever provided. The response to what the data reveals (the decision to course-correct, to accelerate, to pause, to escalate) remains human.
Across all of it, governance matters. Every output that reaches a steering committee or a board should be synthesised by a human who owns the recommendation. Every consequential decision should have a named owner. The speed that AI provides is valuable precisely because it gives decision-makers more time to think, not less reason to.
The Compounding Advantage
The practitioners who figure this out first will compound the advantage, and the gap will widen with each successive deal.
Every engagement where AI compresses the analytical work and experienced judgment is applied to the decisions will create a deposit of encoded expertise: reusable processes and frameworks refined against live conditions, prompt architectures calibrated to specific deal types, pattern libraries built from actual outcomes rather than theoretical playbooks. The next deal starts from a higher baseline. The one after that starts higher still. The gap between a team that has run this operating model across twenty transactions and a team encountering it for the first time is not incremental. It is structural.
The inverse loop is equally powerful. Teams that continue to sell analytical throughput as their primary value proposition will find the market repricing that throughput downward. Fewer engagements at lower fees mean fewer reps for their people. Fewer reps mean slower development of the judgment that the market is increasingly willing to pay a premium for. The talent that recognises the shift leaves for firms where it can develop faster.
Client-side pressure accelerates both loops. CFOs compare advisory spend across deals. When one advisory relationship consistently delivers better outcomes at comparable or lower cost, the question of why the other relationships are not performing at the same level gets asked. That question, asked at enough companies, cascades through the market faster than most advisory firms expect.
The Timeline
The analytical compression is already here. The foundational models are already capable enough. The question is how quickly the practitioners who combine AI with genuine judgment separate themselves from those who use AI to produce more of the same. How long it takes for implementation knowledge to be codified remains an open question.
The deals closing in the next twelve months will be the first where the gap becomes visible at scale. Not because the AI improved, it was good enough six months ago, but because enough practitioners have now had enough reps with the new operating model to produce results that are qualitatively different from what the old model delivers. The clients who see those results will not unsee them.
The M&A practitioners who will lead in the next cycle are not the ones with the most sophisticated AI strategy deck. They are the ones who have been doing the work to design the models, compressing the analytical production, redirecting the capacity toward judgment-intensive problems, building the client relationships that come from consistently better outcomes, and encoding what they learn into infrastructure that makes the next deal better than the last.
We are actively exploring what this looks like in live deals, pressure‑testing where AI genuinely improves outcomes and where experienced human judgment must remain decisive. We are keen to exchange perspectives with clients and fellow practitioners on what they are seeing in practice and what execution advantage really looks like in today’s M&A environment.