
AI Regulations for Financial Services: US Treasury Department

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address its impact on their areas of responsibility. The economic risks of AI to the financial system include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the stability of the financial system are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to provide financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, with the knowledge of the current status of AI regulation and the risk and regulatory trends surrounding it, not only in the US but around the world where their firms are likely to have investment and trading operations.

In November 2022, the Treasury Department explored opportunities and risks related to the use of AI in its report assessing the impact of new entrant non-bank firms on competition in consumer finance markets, for which the department conducted extensive outreach. Among other findings, that report found that innovations in AI were powering many non-bank firms’ capabilities and product and service offerings. Readers may recall that this was the period when embedded finance and embedded financial services were widely deemed to be the future of banking in America. The same 2022 report noted that firms’ use of AI may help expand the provision of financial products and services to consumers, particularly in the credit space.

The report also found that, in deploying AI models and tools, firms use a greater amount and variety of data than in the past, leading to an unprecedented demand for consumer data, which presents new data privacy and surveillance risks. Additionally, the report identified concerns related to bias and discrimination in the use of AI in financial services, including challenges with explainability. Explainability is the ability to understand a model’s output and decisions, or how the model establishes relationships based on its inputs. These challenges include the burden on AI developers and users of ensuring compliance with fair lending requirements; the potential for models to perpetuate discrimination by using and learning from data that reflect and reinforce historical biases; and the potential for AI tools to expand firms’ capabilities to inappropriately target specific individuals or communities (e.g., low- to moderate-income communities, communities of color, women, and rural, tribal, or disadvantaged communities).

The report concluded that new entrant non-bank firms and the AI innovations they were utilizing may be able to help improve financial services, but that further steps should be considered to monitor and address risks to consumers, foster market integrity, and help ensure the safety and soundness of the financial system.

The following year, in December 2023, the US Treasury Department issued an RFI that sought input to inform its development of a national financial inclusion strategy; that RFI included questions related to the use of technologies such as AI in the provision of consumer financial services.

In March 2024, the Department of the Treasury’s Office of Cybersecurity and Critical Infrastructure Protection issued a report entitled “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector,” in response to requirements of Executive Order 14110, the 2023 executive order on AI. The report identified opportunities and challenges that AI presents to the security and resiliency of the financial services sector, and it outlined a series of next steps to address AI-related operational risk, cybersecurity, and fraud challenges. Perficient’s Financial Services Risk and Regulatory Center of Excellence consultants noted while reading that report that the “Next Steps: Challenges & Opportunities” chapter contains a small section observing that “Regulation of AI in Financial Services Remains an Open Question.”

Two months later, in May 2024, the US Treasury Department issued its 2024 National Strategy for Combating Terrorist and Other Illicit Financing (National Illicit Finance Strategy), noting that innovations in AI, including machine learning and large language models such as generative AI, have significant potential to strengthen anti-money laundering/countering the financing of terrorism (AML/CFT) compliance by helping financial institutions analyze large amounts of data and more effectively identify illicit finance patterns, risks, trends, and typologies. One of the objectives identified in the National Illicit Finance Strategy is industry outreach to improve Treasury’s understanding of how financial institutions are using AI to comply with applicable AML/CFT requirements.

In June 2024, the US Treasury issued a Request for Information (RFI) on the uses, opportunities, and risks presented by developments and applications of artificial intelligence within the financial sector. The Treasury noted a particular desire to gather information from a broad set of stakeholders in the financial services ecosystem, including those providing, facilitating, and receiving financial products and services, as well as consumer and small business advocates, academics, nonprofits, and others.

The Treasury Department noted that AI provides opportunities for financial institutions to improve efficiency, reduce costs, strengthen risk controls, and expand impacted entities’ access to financial products and services. At the same time, the use of AI in financial services can pose a variety of risks for impacted entities, depending on its application. Treasury was interested in perspectives on the actual and potential benefits and opportunities of AI in financial services for financial institutions and impacted entities, as well as views on the optimal methods to mitigate risks. In particular, the Treasury Department expressed interest in perspectives on bias and potential discrimination, on privacy risks, and on the extent to which impacted entities are protected from, and informed about, potential harms from financial institutions’ use of AI in financial services.

Written comments and information were requested on or before August 12, 2024, but the results had not been published as of this writing.


Carl Aridas

Carl is certified in the Scaled Agile Framework (SAFe), a Scrum Master, and a Six Sigma Green Belt project manager with more than 25 years of experience in financial services overseeing the large-scale development of global, multi-currency accounting, regulatory reporting, and financial reporting software platforms. He has hands-on experience completing, reviewing, and filing Federal Reserve, FFIEC, and IRS reports, including Call Reports, Y9C reports, 2900 reports, TIC reports, and arbitrage rebate reports.
