The AI Bull in a Compliance China Shop

The Securities and Exchange Commission’s recent withdrawal of the proposed Conflicts of Interest and Predictive Data Analytics Rule—often called the “A.I. Rule”—means that, for now, compliance managers must fend for themselves in mitigating the risks posed by the increasing use of and reliance on this emerging technology. Artificial intelligence tools have existed in various forms for decades, but their complexity and capabilities have surged in the past several years. Artificial intelligence is a broad category of technology that encompasses machine learning, deep learning, and—more recently—generative A.I. Generative A.I., such as ChatGPT and DALL-E, has seen rapid expansion thanks to advances in computing power, the availability of large datasets, and other technological breakthroughs. These developments have made A.I. tools increasingly accessible and valuable for both retail and institutional users.

For registered investment advisors (RIAs), implementing A.I. tools can boost efficiency and productivity, but firms and compliance personnel must also consider the associated risks and limitations of this rapidly evolving technology. For example, A.I. tools—especially free versions—are known to occasionally “hallucinate,” generating answers that are irrelevant or inaccurate relative to the user’s query. Additionally, as discussed during the A.I. Roundtables hosted by the SEC earlier this year, the use of A.I. tools exacerbates other regulatory risks for RIAs, including concerns related to fiduciary duties and conflicts of interest, cybersecurity and data protection, and recordkeeping requirements.

In 2023, the SEC sought to further address fiduciary duty concerns related to artificial intelligence by proposing the Conflicts of Interest and Predictive Data Analytics Rule. Had the rule been adopted as proposed, it would have required advisors to “eliminate or neutralize” the conflicts of interest associated with the use of this emerging technology and to adopt policies and procedures designed to prevent violations of the rule. After the proposed rule was withdrawn in June 2025, Commissioner Mark Uyeda clarified the Commission’s “wait-and-see” approach, choosing to engage in outreach with industry and academic leaders to better understand A.I. before considering new direct regulations. The SEC continues to emphasize compliance with existing regulatory obligations and, in its Division of Examinations 2025 examination priorities, identified A.I. as a risk area, particularly regarding digital advisory services, trading, recordkeeping, and marketing disclosures. As the regulatory landscape evolves, Chief Compliance Officers and other compliance personnel should consider the following best practices before implementing A.I. tools:

  • Conduct an “A.I. inventory” to assess how various departments are currently using A.I. in daily operations.

  • Adopt an “A.I. appropriate use” policy and provide associated training on technology risks.

  • Perform thorough due diligence on A.I. tools, including ensuring data isolation, auditing inputs and outputs, and establishing oversight committees for monitoring (see the sketch after this list for one way to log inputs and outputs).

  • Disclose A.I. usage transparently in Form ADV Part 2A and other relevant filings.

  • Ensure marketing materials referencing A.I. are clear and accurate to avoid “AI-washing.”

  • Analyze potential conflicts of interest, implement controls to mitigate them, and regularly test adopted A.I. tools for compliance.
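
One way to operationalize the input/output auditing noted above is to route every A.I. query through a single logging checkpoint so that prompts and responses are captured as firm records. The Python sketch below is illustrative only: the names audited_query and ai_audit_log.jsonl are hypothetical, and model_call stands in for whichever approved client a firm actually uses.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Append-only log; retention should follow the firm's books-and-records policy.
AUDIT_LOG = Path("ai_audit_log.jsonl")

def audited_query(model_call, user_id: str, prompt: str) -> str:
    """Send a prompt through the firm's approved A.I. client and record
    who asked, what was asked, and what came back."""
    response = model_call(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
        # A digest lets reviewers spot records altered after the fact.
        "integrity_hash": hashlib.sha256((prompt + response).encode("utf-8")).hexdigest(),
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response

# Example with a stand-in model; a real deployment would pass the approved client.
if __name__ == "__main__":
    answer = audited_query(lambda p: "stubbed response", "jdoe", "Summarize our A.I. use policy")
```

A checkpoint like this also supports the recordkeeping and examination concerns discussed above, since it produces a reviewable trail without depending on any particular vendor’s tooling.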

RIAs have taken notice of the efficiencies that A.I. tools can provide, and some have been eager to adopt them. Before firms begin integrating A.I. into their investment advisory operations, compliance officers should carefully assess the associated risks and ensure their firm is following best practices for operations, due diligence, and disclosure.
