
Last month, JTC's shareholders voted 99.9% in favour of Permira's £2.3 billion cash offer—roughly 26.2x adjusted EBITDA. JTC stated explicitly that going private would enable "greater investment in technology and artificial intelligence."
Permira isn't paying £2.3 billion for client relationships. It's paying for the operational infrastructure to scale artificial intelligence ("AI") across fiduciary services.
That signal should concern every trust company CEO who hasn't yet put AI governance on a board agenda.
In my previous PCD articles, I argued that operations beat tax and that independent trustees face an adapt-or-consolidate moment. The responses confirmed what I see across jurisdictions: the industry knows change is coming. What it hasn't grasped is that AI governance is becoming a fiduciary obligation, not just an operational choice.
The gap that's opening
While most independent trustees are still debating whether to adopt AI, the scaled players have already deployed it.
Vistra launched GENI in March 2025—the first global AI compliance advisor, covering 300+ jurisdictions. GENI doesn't just answer compliance questions. In agentic mode, it proactively scans client data—director passport expiries, regulatory deadlines, entity status changes—and escalates before humans know there's an issue. Vistra's 9,000 experts train GENI continuously. Their institutional knowledge doesn't retire when they do.
IQ-EQ achieved ISO 42001 certification in December 2025—the first major trust company to meet the international AI management standard. Their Cosmos platform now delivers cashflow forecasting, one-click reporting, and automated portfolio connections. This wasn't a compliance exercise. It was infrastructure for scale.
Ocorian partnered with Landytech to deploy Sesame across their operations. The result: 80% reduction in report preparation time.
These aren't pilot programs. These are deployed, operational, revenue-generating systems.
A boutique trust company cannot match Vistra's investment. But the gap isn't primarily about technology spending. It's about whether you've documented your answer to a question that's coming regardless of your size.

The question that's coming
ISO 42001, the first international AI Management System standard, creates the benchmark. When a regulator, litigant, or next-generation beneficiary asks whether your AI governance is adequate, this is the standard they'll reference, just as ISO 27001 became the reference for information security.
MAS published AI risk management guidelines in November 2025. UAE regulators issued joint enabling technology guidelines in November 2021. The EU AI Act phases in through 2026. Colorado's AI Act takes effect in June 2026. In the US, Marchand v. Barnhill established personal director liability when boards fail to implement oversight systems for known risks.
CIMA, BVI FSC, JFSC, and FINMA haven't issued AI-specific guidance yet. But every one of these regulators already requires adequate systems of control. When competitors are certifying to international standards and your board hasn't considered the question, silence stops being caution.
Your clients are probably already asking—or they will very soon:
“What is your AI governance policy?”
Trust companies without a documented answer won't lose on fees or on better AI. They'll lose on silence.
The knowledge that's walking out the door
Here is what concerns me more than regulation.
Capgemini projects that 48% of relationship managers will retire by 2040. I see this playing out now across Switzerland and the Channel Islands. The partners who built client relationships over decades carry institutional knowledge that exists nowhere in the firm's systems.
They know the settlor's daughter prefers WhatsApp. That the family's second generation disagrees on investment philosophy. That Jersey's regulator informally signals enforcement priorities months before publishing guidance.
None of this is documented. When these professionals leave, their judgment leaves with them.
Vistra understood this. GENI learns from their experts—and the AI retains that knowledge permanently. The trust companies that capture institutional judgment systematically will hold an advantage that no competitor can replicate through hiring alone.
For a five-person trust company, this isn't about building an AI platform. It's about asking: if your most senior officer left tomorrow, what decisions would the firm no longer know how to make?
“AI governance is becoming a fiduciary obligation, not just an operational choice”
The Fiduciary AI Governance Framework
Based on what I'm seeing work across jurisdictions, boards that want to address this systematically can structure the conversation around three questions:
1. Decision Documentation: Can you reconstruct your reasoning?
When you approve a distribution, decline an investment, or exercise discretion, do you record what you considered, what alternatives existed, and what risks you weighed?
Not the standard resolution citing the trust deed clause. A structured record that answers the question regulators will ask: "When did you know, and what did you do?"
This costs nothing to implement. It requires only discipline.
2. Knowledge Capture: Where does judgment live?
Identify the five decisions your most experienced officer makes better than anyone else on the team. Ask whether that judgment exists anywhere in your systems—documented criteria, decision trees, recorded reasoning.
If it doesn't, you have a single point of failure disguised as a competitive strength.
3. Governance Position: What has the board documented?
AI governance belongs on the board agenda as a fiduciary discussion, not an IT briefing. The question isn't whether to deploy AI tomorrow. The question is whether the board has considered AI's implications for the firm's operations, risks, and obligations.
Document that the question was considered. Reference ISO 42001 as the emerging benchmark. Note what the firm decided and why.
The process matters more than the conclusion. But the process needs to be on the record.
What this looks like in practice
A trust company doesn't need Vistra's budget to have a governance position. It needs documented answers to three questions:
- How do we record discretionary decisions? (Decision Documentation)
- How do we preserve institutional knowledge? (Knowledge Capture)
- What has the board determined about AI governance? (Governance Position)
An ISO 42001 gap assessment typically runs €5,000–€15,000. But the framework above can be implemented with internal resources. The investment isn't primarily financial. It's the discipline of putting the question on the agenda and recording what you decided.
The firms that will thrive
Within 18 months, I expect at least one offshore regulator to issue AI governance guidance referencing ISO 42001. The trust companies that wait for that guidance will be 18 months behind those that acted on the standard directly.
But this isn't ultimately about regulatory compliance. It's about institutional resilience.
The firms that document their reasoning will be able to defend their decisions. The firms that capture institutional knowledge will retain their competitive advantage through succession.
The firms that put AI governance on the board agenda will have an answer when the question arrives.
Permira paid £2.3 billion for AI-ready infrastructure. Vistra deployed global AI compliance. IQ-EQ certified to international standards. Ocorian cut reporting time by 80%.
If a regulator, a litigant, or a next-generation beneficiary asks your board "What steps did you take to govern AI in your fiduciary operations?"—what is your documented answer?
The firms answering that question now won't need to defend it later.
Frédéric Sanz
Frédéric Sanz is the Founder of FiduciaCorp and author of AI Unleashed for Trustees & Family Offices. With 20+ years managing structures across 12 jurisdictions, he advises trust companies on AI governance and operational transformation.
