Firms across the industry are under intense pressure to “do something with AI.” Clients are relentlessly asking about it. Competitors are marketing AI-powered capabilities. Boards want to understand the risks and opportunities. Despite the momentum, many firms remain paralyzed, unsure how to adopt it. Even firms that have adopted AI struggle to turn it into measurable business value.
Recent industry surveys echo this gap. EY reports that only a small share of firms have deployed generative AI at scale, despite strong expectations for broader adoption by 2027.[1] Meanwhile, 57% of investment-advisor firms cited AI as a top compliance concern at the start of 2025.[2]
Before selecting and investing in AI, executives should step back and ask five essential questions.
1. What problem are we actually trying to solve?
The most common misstep is starting with the technology instead of the business objective. In the financial services industry, firms may wish to increase advisor productivity, enhance the client experience or scale without increasing headcount. AI may be able to help in specific areas to support these goals, such as reducing manual reconciliation and reporting tasks, strengthening compliance workflows or improving client personalization.
Analysts emphasize that firms create value when they focus on a small set of high-impact use cases rather than broad experimentation.[3]
A simple test:
Would this initiative still matter if it didn’t use AI?
If not, it’s an experiment — not a strategy.
2. Is our foundation ready for AI?
AI is only as good as the data beneath it. Firms typically manage data from multiple custodians, alternative assets, illiquid investments, legacy onboarding systems and bespoke family office structures. Without strong data quality and governance, AI outputs will be unreliable.
A recent digital transformation report highlighted data strategy, governance, architecture and quality as critical prerequisites for AI readiness in financial services.[4]
Executives should consider:
Where does our data live, and who owns it?
Is it accurate, complete and timely?
Can we explain what data will feed the AI system and how it’s controlled?
Do we have documentation and auditability aligned with our regulatory requirements?
A fragmented data ecosystem makes AI riskier than helpful.
3. How are we managing AI risk and regulatory expectations?
This industry is fiduciary-driven, compliance-heavy and closely regulated. AI introduces new risks: biased outputs, inaccurate insights, misinformed recommendations, data leakage or client-facing errors.
In a recent study, 77% of firms cited data quality, transparency or training bias as the biggest barrier to responsible AI usage.[5] Executives should ask:
Who owns AI risk — compliance, operations, technology or a combined governance body?
Do we have clear guardrails for how AI can be used in client workflows?
How will we monitor, detect and address errors or harmful outputs?
Are we prepared for evolving SEC and FINRA scrutiny around AI-driven processes, vendor oversight and model risk?
Without a firm governance foundation, AI can amplify risk instead of reducing it.
4. Are our people and workflows ready for AI-driven change?
AI doesn’t just change technology; it changes how investment professionals, financial advisors, client-service teams, analysts and operations staff work. Many firms want or expect employees to use emerging AI tools, yet most do not have formal AI policies or training in place.[6]
Executives should think through:
How will workflows change in practice (e.g., meeting prep, prospecting, portfolio reviews, reporting)?
What training will staff need to interpret and validate AI outputs safely?
How will incentives encourage adoption rather than drive resistance?
Where do humans remain essential in the loop — reviewing, verifying, approving?
AI succeeds only when people trust it and understand how it supports their role.
5. How will we measure success and avoid “AI theater”?
Announcing new AI pilots is easy. Scaling them into real value is harder. A recent industry report notes that test-and-learn velocity is high, but very few firms have moved AI deployments beyond the pilot stage.[7]
Executives should define:
What specific outcomes matter — reduced cycle time, fewer errors, improved advisor capacity, stronger client retention?
Which KPIs will we measure from day one?
How will we decide which experiments to scale and which to retire?
Do we have an ROI timeline that matches the complexity of our business?
AI should be measured with the same rigor as any transformation initiative.
Bringing it together
AI can transform client service, operations, compliance and investment workflows, but only if firms start with the right foundation. Executives who succeed are those who:
Anchor AI to clear, measurable business outcomes
Build a dependable and governed data foundation
Elevate AI risk to an executive and board-level priority
Prepare their teams for new workflows
Use disciplined measurement to scale what actually works
With these fundamentals in place, firms are far better positioned to capitalize on AI as the technology matures.
If your firm is exploring how AI can enhance strategy, operations or client experience, connect with us to learn how Addepar can help.
References
1. Unlocking Strategic Advantage: Generative AI in Wealth and Asset Management, EY, 2025.
2. Survey: Artificial Intelligence (AI) Identified as Top Compliance Concern Among Investment Adviser Firms, ACA Group, 2025.
3. Demystifying AI for Wealth Management: From Talk to Action, SolomonEdwards, 2025.
4. Financial Services: Data Management Strategies for AI Readiness, Ankura, 2025.
5. Using Generative AI to Power Growth for Wealth Managers, Accenture, 2025.
6. AI Adoption Set to Rise Amongst Advisors, ISS Market Intelligence, 2025.
7. Artificial Intelligence Accelerates Transformation in Wealth, Broadridge, 2021.