BastionAI Blog

Insights on AI compliance, regulatory technology, and the future of finance.

Featured Post

The Compliance Bill for AI in Asset Management Is Coming Due

As AI adoption accelerates across asset management, firms face an unprecedented compliance challenge. The regulatory framework is evolving rapidly, and the cost of non-compliance is rising.

FINRA Just Made AI Compliance Mandatory — Here’s What RIAs Need to Know

The 2026 Oversight Report explicitly names GenAI chatbot communications as a supervisory priority. Every RIA and broker-dealer now has a documented regulatory obligation for AI supervision.

The Regulatory Shift

FINRA’s 2026 Annual Regulatory Oversight Report marks a turning point for firms using artificial intelligence. For the first time, the regulator explicitly requires the retention of GenAI chatbot communications and mandates supervision of AI-generated client content under Rule 3110.

This is not guidance or a suggestion. It is a documented supervisory obligation that applies to every FINRA-regulated firm deploying AI tools in client-facing or advisory capacities.

What Rule 3110 Now Demands

Rule 3110 has always required firms to establish and maintain a system to supervise the activities of each associated person. The 2026 Report extends this obligation squarely to AI-generated content. Firms must now demonstrate they can supervise AI outputs with the same rigor applied to human communications; retain complete records of AI-generated interactions; produce audit trails that show what AI content was reviewed, approved, or blocked; and implement real-time or near-real-time supervision of AI chatbot communications.
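As a concrete illustration of what such an audit trail can look like, here is a minimal hash-chained, append-only log in Python. This is a sketch, not a production recordkeeping system, and the record fields are assumptions rather than anything FINRA prescribes — the point is simply that chaining each record to the hash of the one before it makes after-the-fact edits detectable:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log of AI interactions (illustrative).

    Each record embeds the hash of the previous record, so altering any
    stored record after the fact breaks the chain and fails verify().
    """

    def __init__(self):
        self.records = []

    def append(self, event_type, content, decision):
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        record = {
            "ts": time.time(),       # when the AI interaction occurred
            "event": event_type,     # e.g. "prompt" or "response" (assumed labels)
            "content": content,      # the AI input or output text
            "decision": decision,    # e.g. "approved" or "blocked" (assumed labels)
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)
        return record

    def verify(self):
        """Return True only if no record has been altered since it was written."""
        prev_hash = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != r["hash"]:
                return False
            prev_hash = r["hash"]
        return True
```

Production systems would add write-once storage and key-based signing, but even this toy version shows why a hash chain supports the "immutable" part of the requirement: any tampered record makes verification fail.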

The Compliance Gap

Most RIAs and broker-dealers have adopted AI tools faster than their compliance infrastructure can support. A recent industry survey found that while 70% of financial professionals are using AI tools, virtually none have real-time input/output scanning or immutable audit logs for AI-generated content.

This gap between adoption and supervision is precisely what FINRA is targeting. Firms that cannot demonstrate active implementation of AI supervision controls face increasing examination scrutiny and potential enforcement action.

What Firms Should Do Now

The path forward requires more than updating a compliance manual. FINRA is looking for operational evidence of supervision: real-time scanning of AI inputs and outputs, automated policy enforcement on AI-generated content, immutable audit logs that capture every AI interaction, and documented procedures for escalation when AI content triggers compliance flags.
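To make "real-time scanning of AI inputs and outputs with automated policy enforcement" concrete, here is a toy rule-based scanner in Python. The patterns and actions are illustrative assumptions only; a real deployment would draw on a firm-specific rulebook and far richer analysis than regex matching:

```python
import re

# Illustrative policy rules mapping a pattern to an action. These example
# patterns are assumptions, not any regulator's actual rule set.
POLICY_RULES = [
    (re.compile(r"\bguaranteed return", re.I), "block"),
    (re.compile(r"\bwill outperform\b", re.I), "block"),
    (re.compile(r"\b(past performance|risk)\b", re.I), "flag"),
]

def scan_output(text):
    """Scan one AI-generated message against the policy rules.

    Returns ("block", rule), ("flag", rule), or ("approve", None).
    A "block" match takes precedence over any "flag" match.
    """
    result, matched = "approve", None
    for pattern, action in POLICY_RULES:
        if pattern.search(text):
            if action == "block":
                return "block", pattern.pattern
            result, matched = "flag", pattern.pattern
    return result, matched
```

Wired in front of a chatbot, a scanner like this produces exactly the artifacts the section describes: a decision per message that can be logged, escalated, or used to stop non-compliant content before it reaches a client.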

The firms that move first to implement these controls will not only satisfy regulatory requirements but gain a competitive advantage as AI compliance becomes table stakes in financial services.

Analysis based on FINRA’s 2026 Annual Regulatory Oversight Report, published December 2025. Read the full report at finra.org.

The SEC’s AI-Washing Crackdown: Four Enforcement Actions and Counting

The SEC has brought four enforcement actions in the past year against firms that misrepresented their use of artificial intelligence. The message is clear: AI claims without substance will be punished.

A Pattern Emerges

Over the past twelve months, the Securities and Exchange Commission has pursued enforcement actions against four separate firms for what the market has come to call “AI-washing” — the practice of overstating or fabricating AI capabilities in marketing materials, client communications, or regulatory filings.

These actions signal a clear enforcement priority. The SEC is not waiting for comprehensive AI legislation to act. It is using existing anti-fraud and marketing rules to hold firms accountable for AI-related misrepresentations right now.

What Constitutes AI-Washing

The enforcement actions have targeted several categories of misrepresentation. Firms have been cited for claiming AI-driven investment processes that were actually manual, marketing AI capabilities that did not exist in production, overstating the role of AI in portfolio construction or risk management, and using AI terminology in advertising without substantiation.

The SEC has applied the same standards it uses for any misleading marketing claim under the Advisers Act and the Marketing Rule. AI is not exempt from truth-in-advertising obligations simply because the technology is novel.

The Broader Chilling Effect

These enforcement actions have created a legitimate fear of AI deployment among compliance-conscious firms. The irony is significant: firms that want to use AI responsibly are hesitant to deploy it, while firms that make unsupported AI claims face enforcement risk.

This dynamic creates an opportunity for firms that can demonstrate verifiable, audited AI usage. The ability to prove — through immutable logs and real-time scanning — that AI outputs are compliant, substantiated, and properly supervised becomes a competitive differentiator.

Safely Deployable AI

The SEC’s enforcement posture points to a clear standard: firms must be able to demonstrate what their AI actually does, show that AI-generated content is supervised and compliant, produce audit evidence of AI oversight during examinations, and ensure marketing claims about AI capabilities are substantiated.

Firms that build these controls into their AI infrastructure from day one will navigate the enforcement landscape with confidence. Those that do not are operating on borrowed time.

Analysis based on SEC enforcement actions and Division of Examinations priorities through Q1 2026. For current SEC guidance, visit sec.gov.

SEC Exam Priorities Shift: AI Policies Alone Won’t Cut It Anymore

The SEC’s 2026 examination priorities make clear that having an AI policy on paper is no longer sufficient. Examiners want to see active implementation and operational evidence of AI oversight.

From Policy to Proof

The SEC’s Division of Examinations has drawn a line in the sand with its 2026 priorities: the era of checkbox AI compliance is over. For the first time, the examination framework explicitly distinguishes between firms that have AI policies and firms that can demonstrate those policies are actively implemented.

This shift reflects a maturation in regulatory thinking. In the early days of AI adoption, regulators accepted that firms were developing governance frameworks. Now, with AI tools embedded in daily operations across the industry, the expectation has moved from “do you have a policy?” to “show me it’s working.”

What Examiners Will Ask For

Based on the 2026 priorities, examination teams are expected to request evidence of real-time or automated AI supervision controls; logs demonstrating that AI outputs are being monitored and reviewed; documentation of policy enforcement actions, including instances where AI content was flagged, modified, or blocked; and proof that compliance teams have visibility into AI-generated communications before they reach clients.

Firms that can produce this evidence from an automated system will have a fundamentally different examination experience than those scrambling to reconstruct AI oversight from scattered records.

The Implementation Gap

Many firms updated their compliance manuals in 2024 and 2025 to reference AI governance. These updates typically included acceptable use policies for AI tools, lists of approved AI platforms, and general statements about supervisory obligations for AI-generated content.

While necessary, these policy documents do not satisfy the 2026 examination standard. The SEC wants to see the controls in action: automated scanning, real-time enforcement, immutable audit trails, and documented escalation procedures with timestamps.
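The "documented escalation procedures with timestamps" piece can be sketched as a small data structure that stamps every supervisory action the moment it happens. The field names and status values below are illustrative assumptions, not a regulator-prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Escalation:
    """One escalation opened when AI content triggers a compliance flag."""
    content: str                # the AI output that triggered the flag
    rule: str                   # which compliance rule fired
    flagged_at: str             # UTC timestamp, captured automatically
    status: str = "open"        # assumed lifecycle: open -> under_review -> resolved
    history: list = field(default_factory=list)  # (timestamp, action, actor) tuples

def escalate(content, rule):
    """Open a timestamped escalation for flagged AI content."""
    ts = datetime.now(timezone.utc).isoformat()
    esc = Escalation(content=content, rule=rule, flagged_at=ts)
    esc.history.append((ts, "flagged", rule))
    return esc

def advance(esc, new_status, reviewer):
    """Record a supervisory action with its own timestamp."""
    ts = datetime.now(timezone.utc).isoformat()
    esc.status = new_status
    esc.history.append((ts, new_status, reviewer))
    return esc
```

Because every state change appends a timestamped entry to the history, the record itself becomes the examination evidence: who acted, what they did, and exactly when.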

Building Exam-Ready Infrastructure

The firms best positioned for 2026 examinations are those investing in operational compliance infrastructure, not just documentation. This means deploying systems that scan every AI input and output in real time, enforcing compliance rules automatically based on regulatory frameworks, generating immutable records of every AI interaction and policy decision, and providing supervisory dashboards that demonstrate active oversight.

The cost of building this infrastructure now is a fraction of the cost of an adverse examination finding or enforcement action later.

Analysis based on SEC Division of Examinations 2026 Examination Priorities, published December 2025. For current SEC examination guidance, visit sec.gov/exams.

Recent Posts

What U.S. Financial Regulators Expect From Firms Using AI

The SEC, CFTC, and FINRA have yet to issue AI-specific regulations, but their existing guidance makes one thing clear: firms deploying artificial intelligence in financial services must treat compliance as a first-order concern, not an afterthought. Here is what every market participant should know about the current regulatory landscape.

No New Rules — But the Old Ones Still Apply

Despite growing attention to AI across the financial industry, none of the three major U.S. financial regulators have introduced rules written specifically for AI. Instead, each agency has taken a technology-neutral approach, reminding firms that obligations around supervision, recordkeeping, disclosure, and customer protection apply to AI just as they do to any other tool. The message is consistent: if you adopt AI, you are still responsible for what it does.

The SEC: Fiduciary Duty Meets Automation

The SEC has made AI a priority area for examinations. The Division of Examinations has flagged digital advisory services, automated trading, fraud detection, and regulatory technology as areas of focus. Firms using AI in these functions should expect examiners to ask whether adequate policies and procedures are in place to supervise those systems. A recent enforcement action reinforced this point — failure to address known vulnerabilities in automated trading models was treated as a breach of fiduciary duty of care. The SEC has also emphasized that AI-related disclosures may be necessary in risk factor sections and management discussion sections of public filings. Equally important, the agency has pursued enforcement actions against firms that overstated AI capabilities in their marketing — a practice regulators have termed “AI washing.”

FINRA: Supervision at Every Level

FINRA’s guidance underscores that its technology-neutral rules apply fully to AI. Member firms are expected to supervise AI usage at both the enterprise and individual levels, maintain robust technology governance, and assess risks related to accuracy, bias, and data provenance. FINRA’s 2025 oversight report also highlights AI-driven cybersecurity threats and the risks of relying on third-party AI vendors, urging firms to implement strong cyber programs to counter increasingly sophisticated attacks.

The CFTC: Cautious Engagement

The CFTC took a measured step in late 2024 by releasing a nonbinding staff advisory on AI use in derivatives markets. The advisory reminds regulated entities to update their policies and procedures and to exercise particular caution around risk management, recordkeeping, and customer protection. It also encourages ongoing dialogue with CFTC staff about emerging AI use cases. Notably, this advisory was rooted in the Biden-era executive order on AI, which the current administration has since revoked. A new executive order directs agencies to develop plans that prioritize American AI leadership, so the regulatory posture may shift in the months ahead.

What Firms Should Do Now

Regardless of how the political winds shift, the core compliance obligations remain. Firms using AI should take stock of every AI tool in use across the organization, maintain a formal inventory, and implement standard risk-management processes for each one. Preventing employees from accessing unapproved or unmonitored AI tools is equally critical — many publicly accessible AI platforms train on user data, creating uncontrollable privacy and cybersecurity risks. Building a defensible AI governance program today is the best protection against regulatory uncertainty tomorrow.
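A formal AI tool inventory of the kind described above can start very simply. The record fields below are assumptions drawn from common risk-management practice, not a mandated schema — what matters is that every tool in use is captured with its data exposure and approval status:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in a firm's AI tool inventory (illustrative fields)."""
    name: str                 # e.g. a vendor chatbot or internal summarizer
    vendor: str
    use_case: str             # where the tool touches clients or advice
    data_shared: str          # what firm or client data the tool sees
    trains_on_inputs: bool    # the key privacy risk with public AI platforms
    approved: bool            # has compliance signed off?
    owner: str                # person accountable for ongoing review

def unapproved_tools(inventory):
    """Return the names of tools in use that compliance has not approved."""
    return [t.name for t in inventory if not t.approved]
```

Even this minimal structure lets a compliance team answer the first question any examiner will ask: which AI tools are in use, and which of them nobody has vetted.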

This analysis draws on regulatory developments discussed by Sidley Austin LLP. For the full legal analysis, see their February 2025 update.

The SEC Is Already Using Existing Law to Police AI — Here’s How

While the SEC’s proposed AI-specific rules remain stalled, the agency isn’t waiting. Through examination sweeps, enforcement actions, and existing regulatory frameworks, the SEC is actively scrutinizing how financial firms develop, deploy, and disclose their use of artificial intelligence.

Proposed Rules Are Stuck — But the SEC Is Moving Anyway

The SEC proposed ambitious rules in 2023 that would require broker-dealers and investment advisers to identify and eliminate conflicts of interest created by predictive data analytics, including AI. Those rules drew sharp industry criticism for their breadth — covering technology well beyond AI, applying to institutional and even prospective investors, and requiring firms to document conflicts arising from systems whose outputs are, by the SEC Chairman’s own admission, often unexplainable. With no final vote scheduled, the rules remain in limbo. But the SEC has moved forward on other fronts.

AI Examination Sweeps Are Underway

The Division of Examinations has launched targeted sweeps focused on how investment advisers develop and use AI models. Firms have been asked to describe their models and techniques, identify their data sources and providers, and produce internal reports of any incidents where AI use raised regulatory, ethical, or legal concerns. Examiners are also requesting copies of AI-specific compliance policies, contingency plans for system failures, client profile documents used by AI systems, and all marketing materials that reference AI capabilities.

Enforcement Is Watching for “AI Washing”

The Division of Enforcement has confirmed active investigations into AI-related misrepresentations. The SEC has made clear that firms overstating what their AI can do — a practice the agency calls “AI washing” — face the same scrutiny as any other misleading disclosure. This extends beyond broker-dealers and advisers to public issuers and anyone making AI-related claims in the market.

Existing Law Already Covers AI Risks

Even without new rules, the current regulatory framework reaches AI in several important ways. AI models trained on datasets the firm lacks authority to use could implicate insider trading laws. Outputs that drive investment recommendations may trigger fiduciary duties and Regulation Best Interest obligations. And firms making public statements about their AI capabilities must ensure those disclosures remain accurate over time — including as models “drift” from their original training. The SEC also expects firms to maintain compliance policies that specifically address AI risks, safeguard client data processed by AI systems, and monitor for cybersecurity vulnerabilities that third-party AI tools may introduce.

Why Firms Can’t Afford to Wait

The regulatory posture is clear: whether or not AI-specific rules are finalized, the SEC views existing securities law as sufficient to address the risks AI creates. Firms that treat AI governance as a future problem are exposing themselves to enforcement risk today. Building defensible AI policies, documenting model behavior, and ensuring accurate disclosures are no longer optional — they’re the baseline the SEC already expects.

This analysis draws on regulatory developments discussed by Skadden, Arps, Slate, Meagher & Flom LLP. For the full legal analysis, see their February 2024 publication.
