

AI Regulations for Financial Services: OCC


Artificial intelligence (AI) is poised to affect every aspect of the world economy and to play a significant role in the global financial system, leading financial regulators around the world to take steps to address its impact on their areas of responsibility. The economic risks AI poses to the financial system range from consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity threats. The impacts of AI on consumers, banks, nonbank financial institutions, and the stability of the financial system are all concerns that regulators must investigate and potentially address.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisory firms, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the risk and regulatory trends surrounding it, not only in the U.S. but around the world, wherever their firms are likely to have investment and trading operations.

OCC

In December 2023, the Office of the Comptroller of the Currency (OCC) classified AI as an emerging risk to the banking industry in an industry report. The OCC noted at the time that advances in computing capacity, increased data availability, and improvements in analytical techniques have significantly expanded opportunities for banks to leverage AI for risk management and operational purposes.

The use of AI in banking has grown tremendously over the last few years, with applications including:

  • Customer chatbots
• Customer chatbots streamline operations by reducing the need for extensive call center staffing. They mitigate the risk of customer service representatives providing incorrect information and help ensure compliance with regulatory disclosures, ultimately enhancing the overall customer experience while reducing costs.
  • Fraud detection
• AI-driven fraud detection reduces the time required to address stolen debit and credit cards, thereby minimizing losses resulting from identity theft.
  • Credit scoring
    • Credit scoring AI enhances credit accessibility for deserving customers who might otherwise be overlooked by traditional credit algorithms. By continuously improving and adapting over time, AI-driven credit scoring ensures a fairer assessment and broader availability of credit.

However, offsetting these benefits, the OCC cautioned that risks can still arise, such as:

  • Lack of explainability
  • Reliance on large volumes of data
  • Potential bias
  • Privacy concerns
  • Third-party risk
  • Cybersecurity risks
  • Consumer protection concerns

The OCC report emphasized the importance of banks identifying, measuring, monitoring, and controlling the risks associated with AI, applying the same standards they would to any other technology.

While existing guidance may not explicitly address AI, the OCC maintains that safety and soundness standards and compliance requirements remain applicable. The supervision risk management principles, outlined in the OCC issuances, provide a solid framework for banks implementing AI to operate safely, soundly, and fairly.

On June 6, 2024, Acting Comptroller of the Currency Michael J. Hsu addressed the 2024 Conference on Artificial Intelligence (AI) and Financial Stability, providing critical insights into the OCC’s thinking on AI. Hsu discussed the systemic risk implications of AI in banking and finance using a “tool or weapon” framing.

In his speech, Hsu emphasized that the rapid adoption of technology during periods of change, without corresponding adjustment in controls, allows risks to grow undetected until they culminate in financial crises. Learning from history, he referenced the lack of regulatory controls in derivatives and financial engineering before the 2008 financial crisis, and more recently, the unregulated growth of cryptocurrencies leading to the “Crypto Winter” of 2022.

To avoid repeating that rather dire history, Hsu advocated that regulators and the industry proactively identify points where growth and development should pause to ensure responsible innovation and build trust. He argued that well-designed checkpoints could help balance the need for innovation with the safeguards necessary to prevent runaway growth.

 

[Graphic: Risk management control gates]

The evolution of electronic trading provides a valuable case study to consider. Traditionally, trading was manual. Market making eventually transitioned to phone-based systems, with computers providing real-time information, valuations and forecasts for traders. In time, computers took on a more active role, not only providing information but also assisting and guiding traders’ actions, supporting faster execution and more complex strategies. Eventually, algorithms took over entirely, automatically buying and selling securities according to pre-determined instructions without the need for human intervention.

Using the evolution of electronic trading as a reference, Hsu outlined three phases in its history:

  1. Inputs: Computers provided information for human traders to consider.
  2. Co-pilots: Software supported and enabled traders to operate more efficiently and swiftly.
  3. Agents: Computers executed trades autonomously based on algorithms programmed by software developers.

Hsu highlighted that each phase requires different risk management strategies and controls. For example, mitigating the risk of flash crashes, which are exacerbated by algorithmic trading, demands more sophisticated controls than those needed when traders simply receive information on a computer screen and execute trades manually.

Artificial Intelligence (AI) is following a similar evolutionary path: initially producing inputs for human decision-making, then acting as a co-pilot to enhance human actions, and finally becoming an agent that makes decisions independently on behalf of humans. As AI progresses from an input provider to a co-pilot and ultimately to an autonomous agent, the risks and potential negative consequences of weak controls increase significantly.

For banks interested in adopting AI, establishing clear and effective gates between each phase can help ensure that innovations are beneficial rather than harmful. Before advancing to the next phase of development, banks should ensure that proper controls are in place and accountability is clearly established for the new phase being entered.

Since Acting Comptroller Hsu’s remarks, in early October 2024 the OCC began soliciting academic research papers on the use of artificial intelligence in banking and finance, with submissions due by December 15, 2024.


Carl Aridas

Carl is certified in the Scaled Agile Framework (SAFe), a Scrum Master, and a Six Sigma Green Belt project manager with more than 25 years of experience in financial services overseeing the development of large-scale global, multi-currency accounting, regulatory reporting, and financial reporting software platforms. He has hands-on experience completing, reviewing, and filing Federal Reserve, FFIEC, and IRS reports, including Call Reports, Y9C reports, 2900 reports, TIC reports, and arbitrage rebate reports.
