AI and Advisor Trust: Using Technology Without Losing the Human Relationship
- Charlie Van Derven
- Apr 2
- 7 min read
Artificial intelligence is making advisors faster. That part is obvious. Meeting notes can be summarized in seconds. Follow-up emails can be drafted before the coffee gets cold. Marketing ideas that used to take an afternoon can appear in a few prompts.
That kind of efficiency is hard to ignore. Still, speed has never been the product in this business. Trust is.
Clients do not stay with an advisor because the workflow is impressive. They stay because someone helps them make sense of uncertainty. They stay because a real person remembers what matters to them, explains complicated things clearly, and shows up with calm judgment when life gets messy. The SEC continues to treat the adviser-client relationship as fiduciary in nature, and that duty applies across the full relationship, not just during portfolio construction.
That’s why AI creates both opportunity and tension for advisory firms. Used well, it can remove friction, improve consistency, and give advisors more time to focus on clients. Used carelessly, it can make communication feel generic, introduce compliance risk, and slowly chip away at the human confidence that strong relationships depend on.
The real question is not whether firms should use AI. That debate is over. The real question is whether firms can use it in a way that makes clients feel more supported, not less seen.
The Relationship Is Still the Business
Wealth management is a relationship business dressed in technical language. Planning matters. Investment discipline matters. Operational efficiency matters. None of those things land the way they should unless the client trusts the person delivering them.
A client who is worried about retirement, caring for aging parents, selling a business, or navigating a divorce is not looking for a highly optimized machine. That client is looking for judgment. Clarity. Reassurance. Perspective. No software tool can sit with the emotional weight of those moments and respond with true human understanding.
That does not make AI the enemy. It simply puts it in the right seat. AI should support the advisor, not replace the relationship. The firms that win with this technology will not be the ones that automate the most client touchpoints. They will be the ones that use automation to create more room for meaningful human ones.
That distinction matters more than most marketing copy admits. A polished system can impress people once. Feeling understood keeps them around.
Where AI Can Add Real Value
There is no reason to fear practical, supervised use of AI. In fact, some of the most useful applications are refreshingly unglamorous. Internal meeting summaries, note organization, workflow routing, task creation, document classification, and first-draft content support can all improve efficiency without interfering with the advisor’s duty to apply professional judgment. FINRA has noted that firms are increasingly using GenAI for internal efficiency, with summarization and information extraction standing out as especially common use cases.
That’s where the best opportunity often lives. A cleaner back office means a calmer front office. A faster internal process means more time to return calls, prepare for meetings, and personalize communication. Advisors do not need more digital noise. Advisors need more time to think.
Clients notice that difference. They may never ask what software is being used behind the scenes, yet they absolutely notice when follow-up is sharper, details are remembered, and service feels more consistent.
A practical truth sits underneath all of this: technology becomes valuable when it gives the advisor more space to be human.
The Risk Is Not AI Itself
Most compliance problems do not start with some dramatic act of recklessness. They usually start with convenience wearing a friendly face.
A staff member pastes client information into a public tool to save time on an email draft. A marketing team member publishes AI-assisted copy that sounds polished but includes an unsupported implication. A firm rolls out a chatbot that answers too confidently, even when the answer is wrong. None of those choices feel huge in the moment. All of them can become huge quickly.
FINRA has been very clear that its rules are technology neutral. Member firms using GenAI are still subject to the same regulatory obligations that apply when using any other tool, including obligations related to supervision, communications, recordkeeping, and fair dealing. FINRA’s 2026 oversight guidance also points firms to model integrity, reliability, accuracy, governance, testing, monitoring, and human-in-the-loop review.
That should be the governing mindset for firms. AI may assist with the work. It does not inherit the responsibility.
Supervision remains a human obligation. Approval remains a human obligation. Suitability, fiduciary care, disclosure, recordkeeping, and oversight remain human obligations. A clever prompt does not change that. A slick vendor demo definitely does not change that.
Trust Grows When Firms Are Honest About the Tool
Some advisors worry that acknowledging AI use will make the firm seem less personal. In many cases, the opposite is true.
Clients are not shocked that firms use modern tools. Most assume it. What they want to know is whether the advisor is still thinking, still reviewing, still protecting their information, and still accountable for the final outcome. Transparency, when handled thoughtfully, tends to build confidence rather than reduce it.
That does not mean every workflow needs a long disclosure paragraph. It means the firm’s posture should be honest and clear. Technology may support efficiency. Humans remain responsible for advice, communications, and decisions. That message is simple, credible, and reassuring.
The SEC’s fiduciary interpretation is useful here because it reinforces a principles-based duty across the full advisory relationship. Once that standard is the anchor, the AI conversation becomes much clearer. Any use of technology should strengthen the firm’s ability to serve the client’s best interest, not weaken it.
Clients can live with modern tools. Clients struggle when those tools create distance, confusion, or the sense that no one is really at the wheel.
Marketing Is a Powerful Use Case and a Fast Way to Get in Trouble
AI can be tremendously helpful in marketing. It can generate topic ideas, tighten rough drafts, improve structure, help repurpose approved content, and reduce the time it takes to move from blank page to polished article. For busy firms, that sounds like a gift.
It can also become a compliance headache in a hurry.
The SEC’s marketing rule still governs advisor advertising, and the SEC’s Division of Examinations issued additional observations in December 2025 focusing on disclosure requirements, oversight and compliance practices for testimonials and endorsements, and due diligence and disclosure requirements for third-party ratings. The SEC’s marketing FAQs also continue to remind advisers that written compliance policies and procedures must be reasonably designed and updated to reflect the rule, including related record retention requirements.
That means AI-generated content cannot be treated as harmless draft material that somehow escapes review. A blog post, email campaign, landing page, social post, webinar invitation, or video script can all create regulatory risk if claims are exaggerated, implications are misleading, disclosures are omitted, or performance references are handled incorrectly.
There is also a practical brand issue. Readers can tell when content sounds manufactured. Generic content may fill a calendar, but it rarely builds authority. Advisors who want stronger impressions and more clicks do not need louder content. They need sharper insight, clearer positioning, and a point of view that sounds like it came from a professional who actually understands the audience.
AI can help shape the draft. The advisor’s voice still needs to carry the message.
Communication Is Where Clients Feel the Difference
A technically correct email is not the same thing as a trusted communication.
That gap matters most when clients feel anxious. Market volatility, a health scare, a liquidity event, an inheritance, or a family transition can make even routine conversations emotionally loaded. In those moments, clients do not need generic reassurance wrapped in elegant grammar. They need calm, context, and care.
AI can absolutely assist with drafting routine follow-ups or organizing key talking points. Sensitive communication should still be reviewed with real empathy and sound judgment. A client should never feel like a difficult moment was handed off to autocomplete.
There is a little levity in this, too. No one wants the financial equivalent of a sympathy card written by an appliance manual. Clients know the difference between polished language and genuine attention. Most of the time, they can feel it in a sentence or two.
That is why firms should not ask whether AI can write the message. The better question is whether the final message sounds like someone who knows the client actually meant it.
Operations Can Be Automated. Accountability Cannot.
Many firms will get the fastest return from AI in operations, not advice delivery. That is a good thing. Internal workflows are often full of repetitive tasks that consume time without adding relationship value.
Still, operational use is not risk free. The SEC’s 2025 Examination Priorities noted continued focus on adviser policies and procedures, fiduciary obligations when outsourcing certain functions, and how firms protect against loss or misuse of client records and information that may occur from the use of third-party AI models and tools. The same priorities also highlighted vendor oversight and cybersecurity concerns tied to third-party products and services.
That should push firms toward disciplined implementation. Approved tools matter. Data handling rules matter. Vendor diligence matters. Testing matters. Documented review matters. Internal policies should actually match how the tool is being used in real life, not how the firm wishes it were being used.
A surprising number of firms can explain their AI ambitions better than they can explain their AI controls. That is usually a sign that the rollout is moving faster than the governance.
A Better Standard for AI in Advisory Firms
The most effective AI strategy for advisory firms is not complicated. It is disciplined.
Use AI where it improves efficiency without replacing human judgment. Keep humans responsible for anything client-facing. Do not enter sensitive client information into unapproved tools. Review marketing and communication outputs before they are published or sent. Maintain records where required. Train staff in plain language. Revisit the process as the technology and use cases evolve.
None of that sounds flashy. Trust rarely is.
That may be the most important lesson in this entire conversation. Advisors do not need to chase novelty to stay relevant. Advisors need to show clients that innovation is being adopted with care, restraint, and a clear respect for the relationship.
Trust Is Still the Advantage
AI is not the story clients will remember. They will remember whether their advisor was present, thoughtful, responsive, and steady when it counted.
That is the opportunity in front of firms right now. Use the technology to remove friction. Use it to create capacity. Use it to make the client experience smoother and more consistent. Just do not confuse efficiency with connection.
In a business where trust is the real differentiator, the best use of AI is the one that makes the human relationship stronger. Everything else is just software.