
AI Regulations for Financial Services: CFTC and FDIC


Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial system range from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity threats. The impacts of AI on consumers, banks, nonbank financial institutions, and the stability of the financial system are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the related risk and regulatory trends, not only in the US but around the world, wherever their firms are likely to have investment and trading operations.

CFTC

The Commodity Futures Trading Commission (“CFTC”), which regulates derivatives market activity rather than particular technologies, issued a Request for Comment in January 2024 on current and potential uses and risks of AI in CFTC-regulated derivatives markets. After receiving significant industry feedback, the CFTC’s Technology Advisory Committee issued a report in May 2024 on Responsible Artificial Intelligence in Financial Markets.

The report recommended that the agency develop a sector-specific AI Risk Management Framework. The report also called for the CFTC to engage with industry and develop firm-level governance standards for AI systems. The same report urged the agency to create an inventory of existing regulations related to AI and use it to identify potential risks and opportunities for rulemaking. It then encouraged a “stick” approach to enforcement, urging that penalties for AI-related misconduct be high enough to deter entities from viewing the potential rewards as outweighing the risks.

As of the fourth quarter of 2024, no specific AI-related rules or regulations have been proposed or enacted by the CFTC.

FDIC


The Federal Deposit Insurance Corporation (FDIC), the primary federal regulator for insured state-chartered banks that are not members of the Federal Reserve System, was the lead bank regulator when, in June 2021, it issued a Request for Information (RFI) seeking comments and information on the use of AI by the financial institutions it regulated. In addition to the FDIC, the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, the Consumer Financial Protection Bureau, and the National Credit Union Administration distributed the same RFI to the financial institutions they regulated. Together, the federal regulators publicly sought to better understand:

  • the use of AI by financial institutions;
  • appropriate governance, risk management, and controls over AI;
  • challenges in developing, adopting, and managing AI;
  • whether any clarification would be helpful.

At the time, the agencies noted they supported responsible innovation by financial institutions, as the use of AI had the potential to augment decision-making and enhance services available to consumers and businesses. They also noted that, as with any activity or process in which a bank engages, identifying and managing risks are key.

After the results of the RFI were collected, the FDIC created FDITech, a tech lab focused on all areas of technology including, but not limited to, AI. However, in 2024 the FDIC reduced FDITech’s public-facing role.

Although the FDIC has not issued AI-specific regulations, it oversees the use of AI by the financial institutions it regulates in a number of ways, including:

  • Compliance with existing laws
    • Banks must use AI in compliance with existing laws, including consumer protection and safety-and-soundness requirements.
  • Model risk management
    • Banks should review the FDIC’s Supervisory Guidance on Model Risk Management, which outlines the agency’s approach to quantitative models, including those using AI.
  • Explainable AI
    • AI systems that are part of banks’ risk management models must be explainable.
  • Reporting
    • Reporting lines and formats should be structured to ensure communication is risk appropriate.
  • Risk assessment
    • A documented risk assessment should be carried out when relying on third-party services.
  • Senior management
    • Senior management must have sufficient technical expertise and be responsible for all significant business decisions.

As noted in a January 2024 industry conference by the FDIC Chairperson, “It doesn’t matter what label you put on it and what the underlying technique is. Financial institutions and banks understand what model risk management is and how they’re expected to conduct it. If they began to use newer techniques of artificial intelligence, including language learning models, then they need to make sure that those comply with model risk management expectations.”


Carl Aridas

Carl is certified in the Scaled Agile Framework (SAFe), a Scrum Master, and a Six Sigma Green Belt project manager with more than 25 years of experience in financial services, overseeing large-scale development of global, multi-currency accounting, regulatory reporting, and financial reporting software platforms. He has hands-on experience completing, reviewing, and filing Federal Reserve, FFIEC, and IRS reports, including Call Reports, Y9C reports, 2900 reports, TIC reports, and arbitrage rebate reports.
