Regulatory Compliance Articles / Blogs / Perficient

AI Regulations for Financial Services: Hong Kong (November 21, 2024)

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial system include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to track the status and trajectory of AI regulation, not only in the US but around the world, wherever their firms are likely to have investment and trading operations.

In the summer of 2024, the Hong Kong Monetary Authority (“HKMA”) issued multiple guidance documents to financial services firms covering their use of artificial intelligence, both in customer-facing applications and in anti-money laundering and countering the financing of terrorism (“AML/CTF”). Specifically, the HKMA issued:

  1. Guiding principles, issued on August 19, 2024, on the use of generative artificial intelligence (“GenAI”) in customer-facing applications (the “GenAI Guidelines”). The GenAI Guidelines build on a previous HKMA circular, “Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions,” dated November 5, 2019 (the “2019 BDAI Guiding Principles”), and provide specific guidance to financial services firms on the use of GenAI; and
  2. An AML/CTF circular issued by the HKMA on September 9, 2024, that requires financial services firms with operations in Hong Kong to:
    1. undertake a study to consider the feasibility of using artificial intelligence in tackling AML/CTF, and
    2. submit the feasibility study and an implementation plan to the HKMA by the end of March 2025.

Leveraging the 2019 BDAI Guiding Principles as a foundation, the GenAI Guidelines adopt the same core principles of governance and accountability, fairness, transparency and disclosure, and data privacy and protection, but introduce additional requirements to address the specific challenges presented by GenAI.

Core Principles and Requirements under the GenAI Guidelines

Governance and Accountability

The board and senior management of financial services firms should remain accountable for all GenAI-driven decisions and processes and should thoroughly consider the potential impact of GenAI applications on customers through an appropriate committee that sits within the firm’s governance framework. The board and senior management should ensure the following:

  • A clearly defined scope of customer-facing GenAI applications, to avoid GenAI usage in unintended areas;
  • Proper policies, procedures, and related control measures for responsible GenAI use in customer-facing applications; and
  • Proper validation of GenAI models, including a “human-in-the-loop” approach in early stages, i.e., having a human retain control in the decision-making process, to ensure the model-generated outputs are accurate and not misleading (a minimal sketch of such a gate appears at the end of this section).

Fairness

Financial services firms are responsible for ensuring that GenAI models produce objective, consistent, ethical, and fair outcomes for customers. This includes:

  • Ensuring that model-generated outputs do not lead to unfair outcomes for customers. As part of this, firms are expected to consider different approaches that may be deployed in GenAI models, such as anonymizing certain data categories, using comprehensive and fair datasets, and making adjustments to remove bias during validation and review; and
  • During the early deployment stage, providing customers with an option to opt out of GenAI use and to request human intervention on GenAI-generated decisions as far as practicable. If an “opt-out” option is unavailable, authorized institutions should provide channels for customers to request review of GenAI-generated decisions.

Transparency and Disclosure

Financial services firms should:

  • Provide appropriate transparency to customers regarding GenAI applications;
  • Disclose the use of GenAI to customers; and
  • Communicate the use, purpose, and limitations of GenAI models to enhance customer understanding.

Data Privacy and Protection

Financial services firms should:

  • Implement effective protection measures for customer data; and
  • Where personal data are collected and processed by GenAI applications, comply with the Personal Data (Privacy) Ordinance, including the relevant recommendations and good practices issued by the Office of the Privacy Commissioner for Personal Data, such as the:
    1. “Guidance on the Ethical Development and Use of Artificial Intelligence” issued on August 18, 2021, and
    2. “Artificial Intelligence: Model Personal Data Protection Framework” issued on June 11, 2024.
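
The GenAI Guidelines do not prescribe how a “human-in-the-loop” control should be implemented. Purely as an illustration, the sketch below shows one way a firm might gate model-generated responses behind human review during early deployment; the field names, confidence measure, and threshold are our own assumptions, not HKMA requirements.

```python
from dataclasses import dataclass

@dataclass
class GenAIOutput:
    customer_id: str
    text: str
    confidence: float  # model-reported confidence in [0, 1] (illustrative)

def route_output(output: GenAIOutput, confidence_floor: float = 0.9) -> str:
    """Keep a human in the decision path: low-confidence responses are
    queued for a reviewer to approve, edit, or reject before delivery."""
    if output.confidence < confidence_floor:
        return "human_review"
    return "deliver"

# Example: this response would be queued for human review.
print(route_output(GenAIOutput("c-001", "Your loan is approved.", 0.62)))
```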

Consistent with the HKMA’s recognition of the potential use of GenAI in consumer protection in the GenAI Guidelines, the September 9, 2024 AML/CTF circular (the “HKMA Circular”) also indicates that the HKMA recognizes the considerable benefits that may come from the deployment of AI in improving AML/CTF. In particular, the HKMA Circular notes that AI-powered systems “take into account a broad range of contextual information focusing not only on individual transactions, but also the active risk profile and past transaction patterns of customers…These systems have proved to be more effective and efficient than conventional rules-based transaction monitoring systems commonly used by covered firms.”

Given this, the HKMA has indicated that financial services firms with operations in Hong Kong should:

  • give due consideration to adopting AI in their AML/CTF monitoring systems to enable them to stay effective and efficient; and
  • undertake a feasibility study in relation to the adoption of AI in their AML/CTF monitoring systems and, based on the outcome of that study, formulate an implementation plan.

The feasibility study and implementation plan should be signed off at the board level and submitted to the HKMA by March 31, 2025.
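
To make the circular’s contrast concrete, here is a minimal, hypothetical sketch of the difference between a fixed-threshold rule and a contextual score that weighs a customer’s own transaction history and risk profile. The z-score formulation and every threshold below are illustrative assumptions, not HKMA guidance.

```python
from statistics import mean, stdev

def rules_based_alert(amount: float, threshold: float = 10_000.0) -> bool:
    # Conventional approach: flag any transaction over a fixed threshold.
    return amount >= threshold

def contextual_alert(amount: float, history: list[float],
                     customer_risk: float, cutoff: float = 3.0) -> bool:
    # Contextual approach: flag deviation from this customer's own
    # pattern, scaled by the customer's standing risk rating in [0, 1].
    if len(history) < 2:
        return rules_based_alert(amount)  # fall back on thin history
    z = (amount - mean(history)) / (stdev(history) or 1.0)
    return z * (0.5 + customer_risk) >= cutoff

# A $9,500 transfer evades the fixed rule but stands out for a customer
# who normally moves a few hundred dollars at a time.
print(rules_based_alert(9_500.0))                             # False
print(contextual_alert(9_500.0, [250.0, 300.0, 275.0], 0.7))  # True
```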

1033 Open Banking Mandate: Blueprint for Success (November 21, 2024)

The Consumer Financial Protection Bureau (CFPB) recently issued its final Personal Financial Data Rights rule (12 CFR Part 1033) supporting open banking. Under this rule, banks, credit unions, credit card issuers, and other financial service providers must enhance consumer access to personal financial data.

The first compliance deadline of April 1, 2026, impacts the largest organizations.

  • The rule demands action from all non-depository firms (e.g., institutions that issue credit cards, hold transaction accounts, issue devices to access an account, or provide other types of payment facilitation products or services). The compliance deadline, however, depends on the firm’s total receipts from calendar years 2023 and 2024.
    • April 1, 2026: $10B+ total receipts in either calendar year
    • April 1, 2027: <$10B total receipts in both calendar years
  • The rule also impacts depository institutions that hold at least $850 million in total assets. Compliance deadlines follow a staggered rollout based on total assets; a sketch encoding these tiers appears after this list.
    • April 1, 2026: $250B+ total assets
    • April 1, 2027: $10B to <$250B total assets 
    • April 1, 2028: $3B to <$10B total assets
    • April 1, 2029: $1.5B to <$3B total assets
    • April 1, 2030: $850M to <$1.5B total assets
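
As referenced above, a firm might encode the staggered deadlines for internal scoping analysis roughly as follows; the actual coverage tests (how receipts and assets are measured, exemptions, and so on) come from the rule text, not this function.

```python
from datetime import date

def rule_1033_deadline(depository: bool, total_assets: float = 0.0,
                       max_annual_receipts: float = 0.0) -> date | None:
    """Map an institution to its compliance date under the tiers above.
    Amounts in US dollars; returns None for depositories below $850M."""
    M, B = 1_000_000, 1_000_000_000
    if not depository:
        # Non-depository firms tier on total receipts in CY2023/CY2024.
        return date(2026, 4, 1) if max_annual_receipts >= 10 * B else date(2027, 4, 1)
    if total_assets < 850 * M:
        return None
    tiers = [(250 * B, date(2026, 4, 1)), (10 * B, date(2027, 4, 1)),
             (3 * B, date(2028, 4, 1)), (1.5 * B, date(2029, 4, 1))]
    for floor, deadline in tiers:
        if total_assets >= floor:
            return deadline
    return date(2030, 4, 1)  # $850M to <$1.5B

print(rule_1033_deadline(depository=True, total_assets=12 * 1_000_000_000))
# 2027-04-01
```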

Accelerating the shift to open banking with 1033 

Open banking changes how financial data is shared and accessed, giving customers more control of their information. The 1033 Personal Financial Data Rights rule ensures that:

  • Personal financial data is made available to consumers and agents at no charge
  • Data is exchanged through a safe, secure, and reliable digital interface
  • Consumers aren’t surprised with hidden or unexpected charges when accessing their personal financial data
  • Consumers can walk away from bad financial services and products
  • Safeguards protect consumers and financial firms from surveillance, data misuse, and risky data practices

Open banking is going to do for the banking industry what the introduction of the iPhone did for cell phones.

CFPB 1033 open banking requires financial firms to ease personal financial data access for consumers 

The CFPB first proposed the rule in the Federal Register on October 31, 2023, accepted public comments on the regulation through December 29, 2023, and then issued its final rule on November 18, 2024. This effort carries out the personal financial data rights established by the Consumer Financial Protection Act of 2010 (CFPA).

The final rule “requires banks, credit unions, and other financial service providers to make consumers’ data available upon request to consumers and authorized third parties in a secure and reliable manner; defines obligations for third parties accessing consumers’ data, including important privacy protections; and promotes fair, open, and inclusive industry standards.”

The implications of the CFPB’s regulation on open banking will be enormous for consumers, banks, and data providers.

Impact on consumers 

Without open banking, consumers struggle to switch between bank deposit and lending offerings. For example, switching checking accounts to one with a better interest rate involves resetting direct deposits and recurring bill-paying, printing new checks, and obtaining a new ATM card. Mistakes resulting in overdrafts are costly, both financially and to one’s credit score and reputation.   

As a result, larger banks have a much smaller net interest margin, as shown in the chart below:

[Chart: net interest margin by bank asset size]

In addition, the stickiness of deposits causes a considerable lag between when a bank raises deposit rates and when deposit balances increase proportionately. 

As open banking, mandated by Rule 1033, takes effect, consumers will be able to:

  • Switch credit cards within seconds while retaining terms and rewards of their current account
  • Transfer deposits and multiple years of transaction history into a new checking account  

Impact on data providers 

Data providers, including digital wallet providers, will be able to move on from “screen scraping” and instead provide API-driven real-time balances, transaction history, and reward balances to their retail customers. Of course, providing this “new and improved” service will require rewriting front ends and processing engines to provide the necessary data in a timely manner.
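
Rule 1033 does not dictate a wire format, but the shift from screen scraping to APIs implies structured, machine-readable payloads. The shape below is purely hypothetical, illustrating the kind of real-time balance, transaction, and rewards data a provider might expose; none of the field names come from the rule or any industry standard.

```python
# Hypothetical API payload for a consumer-permissioned data request;
# every field name here is illustrative, not standardized.
account_snapshot = {
    "account_id": "acct-7f3a",                     # opaque identifier
    "as_of": "2024-11-21T14:30:00Z",               # real-time, not batch
    "available_balance": {"amount": "2514.22", "currency": "USD"},
    "transactions": [
        {"posted": "2024-11-20", "amount": "-42.17",
         "description": "COFFEE CO 1123"},
    ],
    "rewards": {"points": 10450},
}
```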

Impact on banks 

Banks and their affiliates must look toward building an open, larger ecosystem as part of continued digital transformation efforts.

While challenging, this work is necessary for banks that aim to grow revenue through collaboration and cooperation. Ultimately, banks that don’t satisfy their borrowers or depositors will be hard-pressed to compete in the ever-challenging financial landscape.

Navigate 1033 open banking compliance deadlines with confidence 

We encourage leaders to identify mandates’ silver lining opportunities. After all, to remain competitive and compliant, financial services firms must innovate in ways that add business value, meet consumers’ evolving expectations, and build trust. Achieving transformative outcomes and experiences requires a digital strategy that not only satisfies mandates but also aligns the enterprise around a shared vision and actionable KPIs, ultimately keeping customers at the heart of progress.

A holistic approach could include:

  • Strategy + Transformation: current-state assessment, future-state roadmap, change management
  • Platforms + Technology: pragmatically scalable, composable architecture and automations to accelerate progress
  • Data + Intelligence: well-governed “golden source of truth” data and secure integrations/orchestration
  • Innovation + Product Development: engineering and design for what’s now, new, and next
  • Customer Experience + Digital Marketing: human-centered, journey-based engagement
  • Optimized Delivery: Agile methodologies, deep domain expertise, and scalable global teams

Our financial services experts continuously monitor the regulatory landscape and deliver pragmatic, scalable solutions that meet the mandate and more. Discover why we’ve been trusted by 18 of the top 20 banks, 16 of the 20 largest wealth and asset management firms, and are regularly recognized by leading analyst firms.

Ready to explore your firm’s compliance with Rule 1033? Contact us to discuss your specific risk and regulatory challenges.  

AI Regulations for Financial Services: Japan (November 19, 2024)

Japan has yet to pass a law or regulation specifically directed at regulating the use of AI at financial services firms. For now, the Japanese government and regulators are taking an indirect approach, supporting a policy goal of prioritizing innovation while minimizing foreseeable harms.

On April 19, 2024, the Japanese government published new “AI Guidelines for Business Version 1.0” (the “Guidelines”). While not legally binding, the Guidelines are expected to support and induce voluntary efforts by developers, providers, and business users of AI systems through compliance with generally recognized AI principles and are similar to the EU regulations discussed previously in that they propose a risk-based approach.

As noted on page 26 of the English version of the Guidelines, the Guidelines promote “agile governance” where “multiple stakeholders continuously and rapidly run a cycle consisting of environment and risk analysis, goal setting, system design, operation and then evaluation in various governance systems in companies, regulations, infrastructure, markets, social codes and the like”.

In addition to the Guidelines, an AI Strategy Council, a government advisory body, was established to consider approaches for maximizing the potential of AI while minimizing the potential risks to the financial system. On May 22, 2024, the Council submitted draft discussion points concerning the advisability and potential scope of any future regulation.

Finally, a working group in the Japanese Parliament has proposed the first specific Japanese regulation of AI, “the Basic Act on the Advancement of Responsible AI,” which takes a hard-law approach to regulating certain generative AI foundation models. If passed as-is, the Japanese government would designate the AI systems and developers that are subject to regulation; impose obligations on them with respect to the vetting, operation, and output of the systems; and require periodic reports concerning AI systems.

The proposed obligations would provide a general framework, while industry groups for financial services firms would work with the Japanese Financial Services Agency (“JFSA”) to establish the specific standards by which firms would comply. It is further thought that the government would have the authority to monitor AI developers and impose fines and penalties for violations of the reporting obligations and/or compliance with the substance of the law.

AI Regulations for Financial Services: South Korea and the UK (November 14, 2024)

South Korea


In South Korea, efforts to enact an AI framework act have been underway since 2020. Nine different bills have been proposed, but none has been passed into law. While the Personal Information Protection Act (PIPA) includes provisions related to AI, such as the exercise of data subjects’ rights concerning automated decision-making, comprehensive AI legislation has yet to be enacted.

United Kingdom

Bank executives with London trading desks in Canary Wharf must remember that, post-Brexit, the United Kingdom is not in the EU. The Financial Conduct Authority (FCA) regulates artificial intelligence (AI) in the UK by focusing on identifying and mitigating risks rather than prohibiting specific technologies. UK regulators’ approach to AI is based on five principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Like Japan and South Korea, the UK has yet to pass a law or regulation specifically directed at regulating the use of AI at financial services firms.

AI Regulations for Financial Services: European Union (November 12, 2024)

EU Regulations

European Union lawmakers signed the Artificial Intelligence (“AI”) Act in June 2024. The AI Act, the first binding horizontal regulation of AI worldwide, sets a common framework for the use and supply of AI systems, including by financial institutions operating in the European Union.

The new act classifies AI systems under a ‘risk-based approach,’ with different requirements and obligations for each tier. In our opinion, the risk-based system will be very familiar to bankers who remember the original rollout of the asset-based classification system required under the Basel risk-based capital requirements of the early 1990s. Some AI systems presenting ‘unacceptable’ risks are prohibited outright, regardless of controls. A wide range of ‘high-risk’ AI systems that can have a detrimental impact on people’s health, safety, or fundamental rights are permitted but subject to a set of requirements and obligations to gain access to the EU market. AI systems posing limited risks because of their lack of transparency are subject to information and transparency requirements, while AI systems presenting what are classified as ‘minimal risks’ are not subject to further obligations.

The regulation also lays down specific rules for General Purpose AI (GPAI) models and more stringent requirements for GPAI models with “high-impact capabilities” that could pose a systemic risk and have a significant impact on the EU marketplace. The AI Act was published in the EU’s Official Journal on July 12, 2024, and entered into force on August 1, 2024.

[Figure: the EU AI Act’s risk-based approach to classifying AI systems]

The EU AI act adopts a risk-based approach and classifies AI systems into several risk categories, with different degrees of regulation applying.

Prohibited AI practices

The final text prohibits a wider range of AI practices than originally proposed by the Commission because of their harmful impact:

  • AI systems using subliminal or manipulative or deceptive techniques to distort people’s or a group of people’s behavior and impair informed decision making, leading to significant harm;
  • AI systems exploiting vulnerabilities due to age, disability, or social or economic situations, causing significant harm;
  • Biometric categorization systems inferring race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation;
  • AI systems evaluating or classifying individuals or groups based on social behavior or personal characteristics, leading to detrimental or disproportionate treatment in unrelated contexts or unjustified or disproportionate to their behavior;
  • AI systems assessing the risk of individuals committing criminal offences based solely on profiling or personality traits;
  • AI systems creating or expanding facial recognition databases through untargeted scraping from the Internet or CCTV footage; and
  • AI systems inferring emotions in workplaces or educational institutions.

High-risk AI systems

The AI act identifies a number of use cases in which AI systems are to be considered high-risk because they can potentially create an adverse impact on people’s health, safety or their fundamental rights.

  • The risk classification is based on the intended purpose of the AI system. The function performed by the AI system and the specific purpose and modalities for which the system is used are key to determining whether an AI system is high-risk. High-risk AI systems can be safety components of products covered by sectoral EU law (e.g. medical devices) or AI systems that, as a matter of principle, are classified as high risk when they are used in specific areas listed in Annex III of the regulation. The Commission is tasked with maintaining an EU database for the high-risk AI systems listed in that annex.
  • A new test has been enshrined at the Parliament’s request (‘filter provision’), according to which AI systems will not be considered high risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. However, an AI system will always be considered high risk if the AI system performs profiling of natural persons.
  • Providers of such high-risk AI systems will have to run a conformity assessment procedure before their products can be sold and used in the EU. They will need to comply with a range of requirements including testing, data training and cybersecurity and, in some cases, will have to conduct a fundamental rights impact assessment to ensure their systems comply with EU law. The conformity assessment should be conducted either based on internal control (self-assessment) or with the involvement of a notified body (e.g. biometrics). Compliance with European harmonized standards to be developed will grant high-risk AI systems providers a presumption of conformity. After such AI systems are placed in the market, providers must implement post-market monitoring and take corrective actions if necessary.

Transparency risk

Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception, irrespective of whether they qualify as high-risk AI systems or not. Such systems are subject to information and transparency requirements. Users must be made aware that they interact with chatbots. Deployers of AI systems that generate or manipulate image, audio or video content (i.e. deep fakes), must disclose that the content has been artificially generated or manipulated except in very limited cases (e.g. when it is used to prevent criminal offences). Providers of AI systems that generate large quantities of synthetic content must implement sufficiently reliable, interoperable, effective and robust techniques and methods (such as watermarks) to enable marking and detection that the output has been generated or manipulated by an AI system and not a human. Employers who deploy AI systems in the workplace must inform the workers and their representatives.

Minimal risks

Systems presenting minimal risk for people (e.g. spam filters) are not subject to further obligations beyond currently applicable legislation (e.g. GDPR).
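
Summarizing the four tiers above as data can be a useful starting point for an internal AI inventory exercise. The mapping below merely condenses the obligations described in this post; the statutory tests in the AI Act itself, not this sketch, are controlling.

```python
# Condensed view of the AI Act's risk tiers (illustrative, not exhaustive).
RISK_TIER_OBLIGATIONS: dict[str, list[str]] = {
    "unacceptable": ["prohibited outright, regardless of controls"],
    "high": ["conformity assessment before entering the EU market",
             "requirements on testing, training data, and cybersecurity",
             "fundamental rights impact assessment in some cases",
             "post-market monitoring and corrective actions"],
    "transparency": ["inform users they are interacting with AI",
                     "label artificially generated or manipulated content"],
    "minimal": ["no new obligations beyond existing law (e.g., GDPR)"],
}

def obligations_for(tier: str) -> list[str]:
    return RISK_TIER_OBLIGATIONS[tier]

print(obligations_for("transparency"))
```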

General-purpose AI (GPAI)

The regulation provides specific rules for general purpose AI models and for general-purpose AI models that pose systemic risks.

GPAI system transparency requirements

Providers of all GPAI models will have to draw up and maintain up-to-date technical documentation and make information and documentation available to downstream providers of AI systems. All providers of GPAI models must implement a policy to respect EU copyright law, including through state-of-the-art technologies (e.g., watermarking), to honor the lawful text- and data-mining exceptions envisaged under the Copyright Directive. In addition, providers must draw up and make publicly available a sufficiently detailed summary of the content used in training their GPAI models, according to a template provided by the AI Office. Providers headquartered outside the EU, including financial institutions acting as GPAI providers, will have to appoint an authorized representative in the EU. However, AI models made accessible under a free and open-source license will be exempt from some of these obligations (e.g., disclosure of technical documentation), given that such models have, in principle, positive effects on research, innovation, and competition.

Systemic-risk GPAI obligations

GPAI models with ‘high-impact capabilities’ could pose a systemic risk and have a significant impact due to their reach and their actual or reasonably foreseeable negative effects (on public health, safety, public security, fundamental rights, or the society as a whole). GPAI providers must therefore notify the European Commission if their model is trained using total computing power exceeding 10^25 floating-point operations (FLOPs). When this threshold is met, the presumption will be that the model is a GPAI model posing systemic risks. In addition to the requirements on transparency and copyright protection falling on all GPAI models, providers of systemic-risk GPAI models are required to constantly assess and mitigate the risks they pose and to ensure cybersecurity protection. That requires keeping track of, documenting, and reporting to regulators serious incidents and implementing corrective measures.
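
Note that the 10^25 figure refers to cumulative training compute, not a rate. A back-of-the-envelope check makes the scale tangible; the trigger and presumption come from the act, while the hardware arithmetic below is only an illustration.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute

def presumed_systemic_risk(training_flops: float) -> bool:
    # Above the threshold, the provider must notify the European
    # Commission and the model is presumed to pose systemic risk.
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Scale illustration: a cluster sustaining 1e18 FLOP/s (one exaFLOP/s)
# crosses 1e25 total after about 1e7 seconds, i.e. roughly 116 days.
print(presumed_systemic_risk(1e18 * 86_400 * 120))  # True
```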

Codes of practice and presumption of conformity

GPAI model providers will be able to rely on codes of practice to demonstrate compliance with the obligations set under the act. By means of implementing acts, the Commission may decide to approve a code of practice and give it a general validity within the EU, or alternatively, provide common rules for implementing the relevant obligations. Compliance with a European standard grants GPAI providers the presumption of conformity. Providers of GPAI models with systemic risks who do not adhere to an approved code of practice will be required to demonstrate adequate alternative means of compliance.

AI Regulations for Financial Services: Federal Reserve (November 8, 2024)

Federal Reserve

The largest of the federal banking agencies, the Federal Reserve has had four of its regional Federal Reserve Banks (Atlanta, Boston, New York, and San Francisco) set up offices to study financial innovation, including AI. These efforts are intended to focus on how regulators can use AI to assist in regulating financial institutions, as well as on better understanding how banks are using AI in their activities.

While the FRB has not adopted AI-specific regulations, it has named a Chief AI Officer, approved an AI policy, and established a risk-based review of AI programs and activities, the findings of which will be published and shared with the public.

As noted by the Federal Reserve Board’s Chief Artificial Intelligence Officer (“CAIO”), Anderson Monken, the FRB is committed to an artificial intelligence (AI) program for FRB (“Board”) staff that:

  • Promotes the responsible use of AI and enables AI-related innovation
  • Mitigates risks associated with AI use through robust governance and strong risk management practices
  • Complies with all applicable federal requirements related to AI use by federal agencies

As noted in the Federal Reserve System Compliance Plan for OMB Memorandum M-24-10, the Board recognizes the value of a comprehensive enterprise risk-management approach to ensure safe and responsible AI innovation.

Determining Which AI Use Is Presumed to Be Safety- or Rights-Impacting

The Board has implemented its enterprise-wide AI policy and corresponding review process to determine which current or planned AI use cases are determined to be safety- or rights-impacting.

  • Review process. Each current or planned AI use case undergoes a thorough review and assessment by the CAIO and the AI Program team to determine whether the use case meets the definition of safety- or rights-impacting AI as defined in section 6 of OMB M-24-10.
  • Criteria for assessment. FRB assessment criteria are based on the definitions of safety- and rights-impacting AI and examples of AI presumed to be safety- or rights-impacting in OMB M-24-10 section 6 and Appendix I, respectively. These criteria include whether the AI output would serve as a principal basis for a decision or action and real-world considerations of potential harm to protected or otherwise critical populations, entities, and resources.
  • Supplementary criteria. The Board may incorporate additional review criteria to assess safety and rights-impacting AI considerations in response to internal or external developments.

Implementation of Risk-Management Practices and Termination of Noncompliant AI

  • AI policy and review process. The FRB’s AI policy and review process prohibit any use of AI considered to be safety- or rights-impacting without the CAIO’s approval, waiver of one or more risk-management practices, or approved OMB extension, to meet risk-management requirements. All safety- or rights-impacting AI use cases undergo a comprehensive risk impact assessment including validation of all risk-management practices defined in OMB M-24-10 section 5(iv).
  • Enforceability and penalties. Unauthorized or improper use of AI may result in loss of, or limitations on, the use of Board IT resources and in disciplinary or other action, which could include separation from employment.
  • Technical controls. The Board has technical controls in place to deter, detect, and remediate policy violations. These controls include the ability to terminate instances of non-compliant AI on Board IT resources.
  • Communications and training. The Board’s AI Program team publishes and manages the AI policy through a regularly updated intranet site. The site provides guidance on the AI policy, the process for submitting a use case, and the criteria for determining the permissibility of a use case. The site also offers non-technical and technical AI training materials, a list of best practices for the responsible use of AI, and answers to policy FAQs.

Minimum Risk-Management Practices for Safety- or Rights-Impacting Uses

  • The Board is implementing a comprehensive environment of controls to encompass the risk management practices required by OMB M-24-10. The CAIO and AI Program team are responsible for ensuring that these controls are designed and operating effectively to provide sufficient assurance that the Board can mitigate risks from non-compliant AI uses.
  • Impact assessment. Every AI use case that is presumed to be safety- or rights-impacting undergoes a comprehensive risk impact assessment, which includes a review of controls and processes meeting or exceeding the minimum risk-management practices defined in OMB M-24-10 sections 5(c)(iv) and 5(c)(v). The review process assesses the quality and appropriateness of AI use cases, all data considered for those use cases, purpose of use, and potential harms to health, safety, privacy, security, rights, and opportunities as noted in the Board’s criteria for assessment. Considerations for resourcing, security controls, testing, and validation plans are also reviewed.
  • Determination process. The CAIO, in conjunction with the AI Program team and, as appropriate, senior Board officials, will review whether the AI use case, along with its impact assessment, satisfies the definitions of safety- or rights-impacting in section 6 of OMB M-24-10. The CAIO shall determine whether the AI use case matches the definition of safety- or rights-impacting after considering the conditions and context of the use case and whether the AI is serving as the principal basis for a decision or action.
  • Waiver process. In limited circumstances, waivers of minimum risk-management practices may be granted in accordance with OMB M-24-10 section 5(c)(iii). The AI Program will develop criteria to guide consistent decision making for the CAIO to waive risk-management practices, ensuring that waivers are granted only when necessary. Any decisions to grant or revoke a waiver will require documentation of the scope, justification, and supporting evidence. The AI Program team will establish procedures for issuing, denying, and revoking waivers, with oversight by the CAIO and the AI Enablement Working Group.
  • Documentation and validation. The CAIO is responsible for documenting and validating that current and planned risk-management practices for all safety- and rights-impacting AI use cases are designed and operating effectively. The AI Program team maintains detailed records of all use cases and extension, waiver, and determination decisions to support consistent reviews, enable effective compliance and reporting, and promote transparency and accountability (a hypothetical sketch of such a record appears after this list).
  • Publication and annual certification of waiver and determination actions. All materials related to a waiver or determination action will be reported to OMB within 30 days. An annual certification process of the ongoing validity of waivers and determinations will be conducted by the CAIO, the AI Program team, and the owners of relevant AI use cases. The AI Program team will develop procedures for certifying all waivers and determinations. A summary of the outcome of the annual certification process, detailing individual waivers and determinations along with justification, will be shared with OMB and the public in accordance with OMB M-24-10 section 5(a)(ii). If there are no active determinations or waivers, that information will be shared with the public and reported to OMB.
  • Implementation and oversight. The AI Program team has a dedicated workstream with responsibility for the implementation and oversight of risk-management practices. The workstream includes members specializing in relevant mission and compliance functions, including technology, security, privacy, legal, data, and enterprise risk management, and represents a diversity of enterprise perspectives. The group is responsible for promoting consistent and comprehensive AI risk management through the use case review and impact assessment processes. This workstream is also responsible for maintaining a register of enterprise AI risks and associated mitigations to promote active management and accountability across the FRB.
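
The compliance plan describes determinations, waivers, and annual recertification as record-driven processes. As a purely hypothetical sketch, with field names of our own invention rather than anything from the Board’s actual system, a registry entry for one use case might look like:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    """Hypothetical registry entry mirroring the review steps described
    above: assessment, CAIO determination, optional waivers, and an
    annual recertification date."""
    use_case: str
    safety_or_rights_impacting: bool
    determination_date: date | None = None      # CAIO sign-off
    waived_practices: list[str] = field(default_factory=list)
    next_certification: date | None = None      # annual revalidation

    def may_operate(self) -> bool:
        # Safety- or rights-impacting uses need a determination on file,
        # or documented waivers, before operating.
        if not self.safety_or_rights_impacting:
            return True
        return self.determination_date is not None or bool(self.waived_practices)

record = AIUseCaseRecord("document triage assistant", True)
print(record.may_operate())  # False until a determination is recorded
```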

AI Regulations for Financial Services: SEC (November 6, 2024)

SEC

The Securities and Exchange Commission (SEC) issued a proposed rule in July 2023 to address conflicts of interest associated with broker-dealers’ and investment advisers’ use of predictive data analytics (“PDA”) and similar technologies, including AI.

The proposed rules would require that broker-dealers and investment advisers:

  • evaluate the PDA and similar technologies they use, and identify and eliminate or neutralize any related conflicts of interest that could place the firm’s interests ahead of those of its customers or clients;
  • adopt, implement, and maintain written policies and procedures to come into compliance with the proposed rules; and
  • comply with recordkeeping requirements by maintaining records of evaluations done on PDA, including when the technology was implemented and materially modified, the date(s) of testing, and any actual or potential conflicts of interest identified.

Other efforts by the SEC suggest that it is not content to wait until final rules governing AI are in effect before addressing problems and risks it perceives related to AI technologies. Around the time the SEC proposed its rules on PDA, the SEC’s Division of Examinations launched an AI-related sweep, asking firms how they are using AI and requesting that they provide, among other things, a description of their models and techniques, the source and providers of their data, and internal reports of any incidents where AI use raised any regulatory, ethical, or legal issues. The SEC has also requested copies of the firms’ AI compliance policies and procedures, contingency plans in case of AI system failure or inaccuracies, a sample of the firms’ client profile documents used by AI systems to understand clients’ risk tolerance and investment objectives, and all disclosure and marketing documents to clients that disclose the firm’s use of AI. In addition, the SEC’s Division of Enforcement has reported that it has AI-focused investigations underway.

Broker-dealers and investment advisory firms utilize AI in various ways, such as to forecast the price movements of certain investment products, program robo-advisers to assist in automated planning and investment services, address basic client questions via virtual assistants, aid in risk management, and bolster compliance efforts by enhancing surveillance capabilities.

SEC Chairman Gensler foresees potential systemic risks because the use of AI in the financial sector eventually may be driven by a small handful of foundational models, creating a “monoculture” in which many market participants rely on the same dataset or model. In that event, Chairman Gensler posits, various AI models become more likely to produce similar outputs, making it more likely that those relying on those outputs will make similar financial decisions, concentrating risk. It is reminiscent of the program trading of the 1980s that fed the 1987 stock market crash now known as Black Monday: if all the programs say to sell, who is left to buy?
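
A toy simulation, and emphatically not a market model, makes the monoculture concern tangible: the fewer the shared foundation models, the more firms act on identical signals.

```python
import random

def orders(n_firms: int, n_shared_models: int) -> list[str]:
    """Each firm trades on the signal of whichever shared model it uses."""
    signal = {m: random.choice(["buy", "sell"]) for m in range(n_shared_models)}
    return [signal[random.randrange(n_shared_models)] for _ in range(n_firms)]

random.seed(1987)
# With a single shared model, all 100 firms place the same order --
# if the common signal is "sell", no one is left on the other side.
print(set(orders(100, n_shared_models=1)))   # one identical decision
print(set(orders(100, n_shared_models=10)))  # decisions diversify
```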

AI Regulations for Financial Services: CFTC and FDIC (November 4, 2024)

CFTC

The Commodity Futures Trading Commission (“CFTC”), which regulates derivatives market activity rather than particular technologies, issued a Request for Comment in January 2024 on current and potential uses and risks of AI in CFTC-regulated derivatives markets. After receiving significant industry feedback, the CFTC’s Technology Advisory Committee issued a report in May 2024 on Responsible Artificial Intelligence in Financial Markets.

The report recommended that the agency develop a sector-specific AI Risk Management Framework. The report also called for the CFTC to engage with industry and develop firm-level governance standards for AI systems. The same report urged the agency to create an inventory of existing regulations related to AI and use it to identify potential risks and opportunities for rulemaking, and then encouraged a “stick” approach to regulation, urging that penalties for AI-related misconduct be set high enough to deter entities from viewing the potential rewards as outweighing the risks.

As of the fourth quarter of 2024, no specific AI-related rules or regulations have been proposed or enacted by the CFTC.

FDIC


The Federal Deposit Insurance Corporation (FDIC), which is the primary federal regulator for insured state-chartered banks that are not members of the Federal Reserve, was the lead bank regulator when in June 2021 it issued a Request for Information seeking comments and information on the use of AI by financial institutions it regulated. In addition to the FDIC, the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, the Consumer Financial Protection Bureau, and the National Credit Union Administration also distributed the same RFI to the financial institutions they regulated. Together, the federal regulators publicly sought to better understand:

  • the use of AI by financial institutions;
  • appropriate governance, risk management, and controls over AI;
  • challenges in developing, adopting, and managing AI; and
  • whether any clarification would be helpful.

At the time, the agencies noted they supported responsible innovation by financial institutions, as the use of AI had the potential to augment decision-making and enhance services available to consumers and businesses. They also noted that, as with any activity or process in which a bank engages, identifying and managing the associated risks are key.

After the results of the RFI were collected, the FDIC created FDITech, a tech lab focused on all areas of technology including, but not limited to, AI. In 2024, however, the FDIC reduced FDITech’s public-facing role.

Although the FDIC has not issued specific AI regulations, the FDIC regulates the use of AI by financial institutions it regulates in a number of ways, including:

  • Compliance with existing laws
    • Banks must use AI in compliance with existing laws, including consumer protection, safety, and soundness.
  • Model risk management
    • Banks should review the FDIC’s Supervisory Guidance on Model Risk Management, which outlines the agency’s approach to quantitative models, including those using AI.
  • Explainable AI
    • AI systems that are part of banks’ risk management models must be explainable.
  • Reporting
    • Reporting lines and formats should be structured to ensure communication is risk appropriate.
  • Risk assessment
    • A documented risk assessment should be carried out when relying on third-party services.
  • Senior management
    • Senior management must have sufficient technical expertise and be responsible for all significant business decisions.

As the FDIC Chairman noted at an industry conference in January 2024, “It doesn’t matter what label you put on it and what the underlying technique is. Financial institutions and banks understand what model risk management is and how they’re expected to conduct it. If they began to use newer techniques of artificial intelligence, including language learning models, then they need to make sure that those comply with model risk management expectations.”

AI Regulations for Financial Services: US Treasury Department (November 1, 2024)

In November 2022, the Treasury Department explored opportunities and risks related to the use of AI in its report assessing the impact of new entrant non-bank firms on competition in consumer finance markets, for which the department conducted extensive outreach. Among other findings, that report found that innovations in AI were powering many non-bank firms’ capabilities and product and service offerings. The reader is urged to think back to when embedded finance and embedded financial services were deemed to be the future of banking in America.

The same 2022 report noted that firms’ use of AI may help expand the provision of financial products and services to consumers, particularly in the credit space. The report also found that, in deploying AI models and tools, firms use a greater amount and variety of data than in the past, leading to an unprecedented demand for consumer data, which presents new data privacy and surveillance risks.

Additionally, the report identified concerns related to bias and discrimination in the use of AI in financial services, including challenges with explainability. Explainability is the ability to understand a model’s output and decisions, or how the model establishes relationships based on the model input. The lack of it makes it harder for AI developers and users to ensure compliance with fair lending requirements. The report also flagged the potential for models to perpetuate discrimination by using and learning from data that reflect and reinforce historical biases, and the potential for AI tools to expand firms’ capabilities to inappropriately target specific individuals or communities (i.e. low- to moderate-income communities, communities of color, women, rural, tribal, or disadvantaged communities).

The report concluded that new entrant non-bank firms and the AI innovations they were utilizing in financial services may be able to help improve financial services, but that further steps should be considered to monitor and address risks to consumers, foster market integrity, and help ensure the safety and soundness of the financial system.
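
To make the explainability point above concrete, here is a deliberately simple leave-one-out probe: perturb one input at a time and watch the score move. Real fair-lending analysis uses far more careful attribution methods; the toy model and its weights below are invented purely for illustration.

```python
def leave_one_out(model, features: dict) -> dict:
    """Attribute a score to inputs by zeroing one feature at a time."""
    base = model(features)
    return {name: round(base - model({**features, name: 0.0}), 6)
            for name in features}

# Toy credit-scoring model; the weights are made up.
score = lambda f: (0.4 * f["income"] + 0.3 * f["on_time_payments"]
                   - 0.2 * f["utilization"])

print(leave_one_out(score, {"income": 1.0, "on_time_payments": 1.0,
                            "utilization": 0.5}))
# {'income': 0.4, 'on_time_payments': 0.3, 'utilization': -0.1}
```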

The following year, in December 2023, the US Treasury Department issued a Request for Information (RFI) seeking input to inform its development of a national financial inclusion strategy; that RFI included questions related to the use of technologies such as AI in the provision of consumer financial services.

In March 2024, the Department of the Treasury’s Office of Cybersecurity and Critical Infrastructure Protection issued a report, “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector,” in response to requirements of Executive Order 14110, the 2023 executive order on AI. The report identified opportunities and challenges that AI presents to the security and resiliency of the financial services sector and outlined a series of next steps to address AI-related operational risk, cybersecurity, and fraud challenges. Perficient’s Financial Services Risk and Regulatory Center of Excellence consultants noted while reading the report that the “Next Steps: Challenges & Opportunities” chapter contains a small section observing that “Regulation of AI in Financial Services Remains an Open Question.”

Two months later, in May 2024, the US Treasury Department issued its 2024 National Strategy for Combating Terrorist and Other Illicit Financing (National Illicit Finance Strategy), noting that innovations in AI, including machine learning and large language models such as generative AI, have significant potential to strengthen anti-money laundering/countering the financing of terrorism (AML/CFT) compliance by helping financial institutions analyze large amounts of data and more effectively identify illicit finance patterns, risks, trends, and typologies. One of the objectives identified in the National Illicit Finance Strategy is industry outreach to improve Treasury’s understanding of how financial institutions are using AI to comply with applicable AML/CFT requirements.

In June 2024, the US Treasury issued a Request for Information (RFI) on the uses, opportunities, and risks presented by developments and applications of artificial intelligence within the financial sector. The Treasury expressed a particular desire to gather information from a broad set of stakeholders in the financial services ecosystem, including those providing, facilitating, and receiving financial products and services, as well as consumer and small business advocates, academics, nonprofits, and others.

The Treasury Department noted that AI provides opportunities for financial institutions to improve efficiency, reduce costs, strengthen risk controls, and expand impacted entities’ access to financial products and services. At the same time, the use of AI in financial services can pose a variety of risks for impacted entities, depending on its application. Treasury was interested in perspectives on the actual and potential benefits and opportunities that the use of AI in financial services offers financial institutions and impacted entities, as well as views on the optimal methods to mitigate risks. In particular, the Treasury Department expressed interest in perspectives on bias and potential discrimination, on privacy risks, and on the extent to which impacted entities are protected from, and informed about, the potential harms arising from financial institutions’ use of AI in financial services.

Written comments and information were requested on or before August 12, 2024; the results had not been published as of this writing.

FDIC Extends Timeline to Comply with New Digital Signage Requirements https://blogs.perficient.com/2024/10/31/fdic-extends-timeline-to-comply-with-new-digital-signage-requirements/ https://blogs.perficient.com/2024/10/31/fdic-extends-timeline-to-comply-with-new-digital-signage-requirements/#respond Thu, 31 Oct 2024 15:15:24 +0000 https://blogs.perficient.com/?p=371320

The Federal Deposit Insurance Corporation (“FDIC”) recently announced that it is giving financial institutions additional time to put new processes and systems in place by extending the compliance date for the new FDIC signage and advertising rule (Part 328, subpart A) from January 1, 2025, to May 1, 2025.

The final rule established a new black and navy-blue FDIC official digital sign shown below.

[Image: FDIC official digital sign]

Banks will be required to display the FDIC official digital sign near the name of the bank on all bank websites and mobile applications. Banks also will be required to display the FDIC official digital sign on certain automated teller machines.

The extension applies to the provisions requiring:

  1. the use of the FDIC official sign, official digital sign, and other signs differentiating deposits and non-deposit products across all banking channels, including physical premises, automated teller machines (ATMs) and digital channels, and
  2. the establishment and maintenance of written policies and procedures to achieve compliance with Part 328.

What Perficient has been seeing in the industry is that, since an insured depository institution’s (IDI’s) mobile apps and websites must be altered for the new FDIC signage anyway, bank executives are taking the opportunity to kill two birds with one stone: revamping their websites and mobile apps to improve the user experience and enhance security while meeting the new FDIC signage requirements.

Contact us to discuss your specific risk and regulatory challenges. Our financial services expertise, blended with our digital leadership across platforms and business needs, equips financial institutions of all sizes to solve complex challenges and compliantly drive growth.

AI Regulations for Financial Services: FinCEN https://blogs.perficient.com/2024/10/30/ai-regulations-for-financial-services-fincen/ https://blogs.perficient.com/2024/10/30/ai-regulations-for-financial-services-fincen/#respond Wed, 30 Oct 2024 14:05:03 +0000 https://blogs.perficient.com/?p=370909

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge to understand the status of AI regulation and the risk and regulatory trends of AI regulation, not only in the US but around the world where their firms are likely to have investment and trading operations.

FinCEN

In 2018, Treasury’s Financial Crimes Enforcement Network (FinCEN) and the federal banking agencies (FDIC, Federal Reserve, OCC, and NCUA) issued a Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing, which encouraged banks to use existing tools or adopt new technologies, including AI, to identify and report money laundering, terrorist financing, and other illicit financial activity.

Pursuant to requirements and authorities outlined in the Anti-Money Laundering Act of 2020 (the AML Act), FinCEN is also taking several steps to create the necessary regulatory and examination environment to support AML/CFT-related innovation that can enhance the effectiveness and efficiency of the Bank Secrecy Act (BSA). In particular, Section 6209 of the AML Act requires the Secretary of the Treasury to issue a rule specifying standards for testing technology and related technology internal processes designed to facilitate effective compliance with the BSA by financial institutions, and these standards may include an emphasis on innovative approaches to compliance, such as the use of machine learning.

In April 2021, FinCEN, together with the FDIC, Federal Reserve, NCUA, and OCC, issued a statement and a separate Request for Information on model risk management. As part of the regulatory process, FinCEN may consider how financial institutions are currently using innovative approaches to compliance, such as machine learning and AI, and the potential benefits and risks of specifying standards for those technologies.

In February 2023, FinCEN hosted a FinCEN Exchange that brought together law enforcement, financial institutions, and other private sector and government entities to discuss how AI is used for monitoring and detecting illicit financial activity. FinCEN also regularly engages financial institutions on the topic through the Bank Secrecy Act Advisory Group (BSAAG) Subcommittee on Innovation and Technology and the BSAAG Subcommittee on Information Security and Confidentiality.

AI Regulations for Financial Services: CFPB https://blogs.perficient.com/2024/10/28/ai-regulations-for-financial-services-cfpb/ https://blogs.perficient.com/2024/10/28/ai-regulations-for-financial-services-cfpb/#respond Mon, 28 Oct 2024 14:30:12 +0000 https://blogs.perficient.com/?p=370894

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge to understand the status of AI regulation and the risk and regulatory trends of AI regulation, not only in the US but around the world where their firms are likely to have investment and trading operations.

CFPB

On June 24, 2024, the Consumer Financial Protection Bureau (CFPB) approved a new rule to address the current and future applications of complex algorithms and artificial intelligence used to estimate the value of a home.

As the CFPB noted, an accurate home valuation is critical when buying or selling a home. Mortgage lenders use this collateral valuation to determine how much they will lend on a property, and on popular real estate websites many people even track their own home’s estimated value as generated by these AI-driven appraisal tools.

The CFPB rule requires companies that use these algorithmic appraisal tools to do the following (a minimal sketch of one possible safeguard appears after this list):

  1. put safeguards into place to ensure a high level of confidence in the home value estimates;
  2. protect against the manipulation of data;
  3. avoid conflicts of interest; and
  4. comply with applicable nondiscrimination laws.
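The rule’s text does not spell out how a “high level of confidence” must be demonstrated. Purely as a hypothetical illustration of the first safeguard, the sketch below compares valuations from several independent models and routes a property to manual review when the ensemble disagrees too much; the function, threshold, and figures are assumptions for illustration, not requirements from the CFPB rule.

```python
# Hypothetical safeguard: flag AVM estimates whose model ensemble disagrees
# too much to support a confident valuation. The threshold is illustrative,
# not drawn from the CFPB rule.
from statistics import mean, pstdev

def avm_confidence_check(estimates: list[float], max_rel_spread: float = 0.10) -> dict:
    """Return the point estimate and whether it clears a simple confidence gate.

    estimates: valuations for one property from several independent models.
    max_rel_spread: maximum allowed std-dev-to-mean ratio before manual review.
    """
    point = mean(estimates)
    rel_spread = pstdev(estimates) / point
    return {
        "estimate": round(point, 2),
        "relative_spread": round(rel_spread, 4),
        "needs_manual_review": rel_spread > max_rel_spread,
    }

# A tight ensemble passes; a divergent one is routed to a human appraiser.
print(avm_confidence_check([412_000, 405_500, 418_250]))
print(avm_confidence_check([412_000, 298_000, 505_000]))
```

A gate like this is one plausible way to document that low-confidence valuations never reach a lending decision unreviewed; the other three requirements (data integrity, conflicts of interest, nondiscrimination) call for organizational controls rather than a single check.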

In addition to its own rule, the CFPB highlighted to the OCC, in the latter’s 2024 Request for Information discussed below, a number of CFPB publications and guidance documents regarding consumer protection issues that may be implicated by the use of AI, including:

  • Chatbots. Chatbots and other automated customer service technologies built on large language models may do the following (see the guardrail sketch after this list):
      • provide inaccurate information and increase the risk of unfair, deceptive, and abusive acts and practices in violation of the Consumer Financial Protection Act (CFPA);
      • fail to recognize when consumers invoke statutory rights under Regulation E and Regulation Z; and
      • raise privacy and security risks, resulting in increased compliance risk for institutions.
  • Fair lending and adverse action. Lenders are prohibited from discriminating and must provide consumers with information regarding any adverse action taken against them, as required by the Equal Credit Opportunity Act (ECOA). The CFPB noted that courts have already held that an institution’s decision to use AI as an automated decision-making tool can itself be a policy that produces bias under the disparate impact theory of liability.
  • Fraud screening. The CFPB’s comment stresses that fraud screening tools, such as those offered by third-party vendors that provide fraud risk services, must be offered in compliance with ECOA and the CFPA. In addition, because such screening is often used to assess creditworthiness by determining who gets offered or approved for a financial product or at a special rate, institutions that compile and provide such information are likely subject to the requirements of the Fair Credit Reporting Act.
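The CFPB does not prescribe specific controls for chatbots. As one hypothetical guardrail addressing the second chatbot risk above, the sketch below screens incoming messages for language suggesting a consumer may be invoking Regulation E or Regulation Z rights and escalates those conversations to a human agent; the trigger phrases are illustrative assumptions, not a compliant taxonomy.

```python
# Hypothetical guardrail: escalate chats that may invoke Regulation E / Z
# rights so a human handles them instead of a language model.
# The phrases below are illustrative, not a compliant taxonomy.
import re

ESCALATION_PATTERNS = {
    "Regulation E (error resolution)": re.compile(
        r"unauthorized (charge|transaction)|dispute .*transfer|money (missing|stolen)",
        re.IGNORECASE),
    "Regulation Z (billing error)": re.compile(
        r"billing error|dispute .*credit card|wrong (amount|charge) on my statement",
        re.IGNORECASE),
}

def route_message(message: str) -> str:
    """Route a chat message either to automated handling or a human agent."""
    for rights, pattern in ESCALATION_PATTERNS.items():
        if pattern.search(message):
            return f"ESCALATE to human agent: possible {rights} claim"
    return "OK for automated handling"

print(route_message("There is an unauthorized charge on my debit card."))
print(route_message("What are your branch hours?"))
```

A production system would need far more robust intent detection, logging, and legal review; the point of the sketch is simply that statutory-rights triggers can be checked before, not after, a model generates a reply.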