Insurance Articles / Blogs / Perficient | Expert Digital Insights
https://blogs.perficient.com/category/industries/insurance/

AI Regulations for Financial Services: Hong Kong
https://blogs.perficient.com/2024/11/21/ai-regulations-for-financial-services-hong-kong/ Thu, 21 Nov 2024

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies or investment management firms, the knowledge they need to understand the status of AI regulation and the risk and regulatory trends of AI regulation, not only in the US but around the world where their firms are likely to have investment and trading operations.

In the summer of 2024, the Hong Kong Monetary Authority (“HKMA”) issued multiple guidance documents to financial services firms covering their use of artificial intelligence in both customer-facing applications as well as anti-money laundering and detecting and countering terrorist financing (“AML/CTF”). Specifically, the HKMA issued:

  1. Guiding principles, issued by the HKMA on August 19, 2024, on the use of generative artificial intelligence (“GenAI”) in customer-facing applications (the “GenAI Guidelines”). The GenAI Guidelines build on a previous HKMA circular, “Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions,” dated November 5, 2019 (the “2019 BDAI Guiding Principles”), and provide specific guidance to financial services firms on the use of GenAI; and
  2. An AML/CTF circular issued by the HKMA on September 9, 2024, that requires financial services firms with operations in Hong Kong to:
    1. undertake a study to consider the feasibility of using artificial intelligence in tackling AML/CTF, and
    2. submit the feasibility study and an implementation plan to the HKMA by the end of March 2025.

Leveraging the 2019 BDAI Guiding Principles as a foundation, the GenAI Guidelines adopt the same core principles of governance and accountability, fairness, transparency and disclosure, and data privacy and protection, but introduce additional requirements to address the specific challenges presented by GenAI.

Core Principles and Requirements under the GenAI Guidelines

Governance and Accountability: The board and senior management of financial services firms should remain accountable for all GenAI-driven decisions and processes and should thoroughly consider the potential impact of GenAI applications on customers through an appropriate committee within the firm’s governance framework. The board and senior management should ensure the following:

  • Clearly defined scope of customer-facing GenAI applications to avoid GenAI usage in unintended areas;
  • Proper policies and procedures and related control measures for responsible GenAI use in customer-facing applications; and
  • Proper validation of GenAI models, including a “human-in-the-loop” approach in early stages, i.e. having a human retain control in the decision-making process, to ensure the model-generated outputs are accurate and not misleading.
Fairness: Financial services firms are responsible for ensuring that GenAI models produce objective, consistent, ethical, and fair outcomes for customers. This includes:

  • Ensuring that model-generated outputs do not lead to unfair outcomes for customers. As part of this, firms are expected to consider different approaches that may be deployed in GenAI models, such as:
    1. anonymizing certain data categories;
    2. using comprehensive and fair datasets; and
    3. making adjustments to remove bias during validation and review; and
  • Providing customers, during the early deployment stage, with an option to opt out of GenAI use and to request human intervention on GenAI-generated decisions as far as practicable. If an opt-out option is unavailable, authorized institutions should provide channels for customers to request review of GenAI-generated decisions.
Transparency and Disclosure: Financial services firms should:

  • Provide appropriate transparency to customers regarding GenAI applications; and
  • Disclose the use of GenAI to customers; and
  • Communicate the use, purpose, and limitations of GenAI models to enhance customer understanding.
Data Privacy and Protection: Financial services firms should:

  • Implement effective protection measures for customer data; and
  • Where personal data are collected and processed by GenAI applications, comply with the Personal Data (Privacy) Ordinance, including the relevant recommendations and good practices issued by the Office of the Privacy Commissioner for Personal Data, such as the:
    1. “Guidance on the Ethical Development and Use of Artificial Intelligence,” issued on August 18, 2021; and
    2. “Artificial Intelligence: Model Personal Data Protection Framework,” issued on June 11, 2024.

Consistent with the GenAI Guidelines’ recognition of the potential use of GenAI in consumer protection, the AML/CTF circular also indicates that the HKMA recognizes the considerable benefits that may come from deploying AI to improve AML/CTF. In particular, the circular notes that AI-powered systems “take into account a broad range of contextual information focusing not only on individual transactions, but also the active risk profile and past transaction patterns of customers…These systems have proved to be more effective and efficient than conventional rules-based transaction monitoring systems commonly used by covered firms.”

Given this, the HKMA has indicated that financial services firms with operations in Hong Kong should:

  • give due consideration to adopting AI in their AML/CTF monitoring systems to enable them to stay effective and efficient; and
  • undertake a feasibility study on the adoption of AI in their AML/CTF monitoring systems and, based on the outcome of that study, formulate an implementation plan.

The feasibility study and implementation plan should be signed off at the board level and submitted to the HKMA by March 31, 2025.

AI-Powered Prior Authorization: A New Era with Salesforce Health Cloud
https://blogs.perficient.com/2024/11/20/ai-powered-prior-authorization-a-new-era-with-salesforce-health-cloud/ Wed, 20 Nov 2024

In the ever-evolving healthcare industry, efficiency and patient care are crucial. Streamlined processes ensure that patients receive timely and appropriate care, reducing the risk of complications and improving overall health outcomes. At the same time, a strong focus on patient care fosters trust and satisfaction, which are essential for successful treatment and recovery. 

Recognizing these imperatives, a leading health organization and Salesforce have embarked on a groundbreaking partnership to streamline the prior authorization process. This collaboration aims to address one of the most significant pain points in healthcare: the often cumbersome and time-consuming approval process for medical treatments and services. 

The Challenge of Prior Authorizations 

Prior authorizations are essential for ensuring that treatments are safe, evidence-based, and cost-effective. However, the traditional process is fraught with inefficiencies, often leading to delays in patient care. According to a survey by the American Medical Association, 78% of physicians reported that issues with prior authorizations can result in patients forgoing necessary treatments.

This is in part due to outdated and inefficient healthcare industry processes, where about two-thirds of prior authorization requests are submitted manually or partially manually, including by fax machine. Submissions that lack complete clinical information slow the process, and outdated electronic systems waste time and resources, leaving patients without answers and worried about their next steps in care. 

A Technological Solution 

Leveraging Salesforce Health Cloud, this partnership is set to transform the prior authorization process. Health Cloud integrates with existing electronic health records (EHRs) to gather relevant clinical data, enabling near real-time prior authorization decisions. 

The use of Health Level Seven (HL7) Fast Healthcare Interoperability Resources (FHIR) standards will streamline more than 20 different systems into one process that integrates with physicians’ current workflows.
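To make the FHIR-based exchange concrete, here is a minimal sketch of assembling a FHIR R4 Claim resource with `use` set to `"preauthorization"`, the pattern used in prior authorization exchanges such as the Da Vinci Prior Authorization Support guide. The endpoint URL, identifiers, and procedure code below are hypothetical placeholders, not details of the actual implementation described above.

```python
import json

def build_prior_auth_request(patient_id: str, provider_id: str, service_code: str) -> dict:
    """Assemble a simplified FHIR R4 Claim resource for a prior authorization request."""
    return {
        "resourceType": "Claim",
        "status": "active",
        "type": {"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/claim-type",
            "code": "professional"}]},
        "use": "preauthorization",  # distinguishes this from a billing claim
        "patient": {"reference": f"Patient/{patient_id}"},
        "provider": {"reference": f"Practitioner/{provider_id}"},
        "priority": {"coding": [{"code": "normal"}]},
        "item": [{
            "sequence": 1,
            "productOrService": {"coding": [{
                "system": "http://www.ama-assn.org/go/cpt",
                "code": service_code}]},
        }],
    }

# Hypothetical patient, provider, and CPT code values
claim = build_prior_auth_request("pat-001", "prov-042", "70551")
payload = json.dumps(claim, indent=2)

# In a real integration, the payload would be POSTed to the payer's FHIR
# endpoint (URL shown is a placeholder), e.g.:
# requests.post("https://payer.example.com/fhir/Claim/$submit", data=payload,
#               headers={"Content-Type": "application/fhir+json"})
```

In practice the request would also carry insurance, coverage, and supporting clinical information pulled from the EHR; the point here is only that a standards-based resource lets many disparate systems speak one format.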

The Role of AI 

AI plays a crucial role in this transformation. By automating data collection and analysis, AI can significantly speed up the approval process. This not only reduces the administrative burden on healthcare providers but also ensures that patients receive timely care. The AI system is designed to handle most requests quickly, with only a small number requiring further clinical consultation. 

When clinical consultation is needed, physicians will receive a message in near real-time detailing what is needed to complete the authorization and options to begin a peer-to-peer clinical consultation. This process, which currently can take several days, will be reduced to hours, depending on the requesting physician’s availability. 

Benefits for Patients and Providers 

Patients will receive updates on their authorization status through a member app, giving them greater clarity throughout the process. 

For providers, the streamlined process allows them to focus more on patient care rather than administrative tasks. Modifications or denials will always be made by a medical director or licensed clinician, ensuring that decisions are clinically sound. 

A Step Towards Digital Transformation 

This partnership is a testament to the power of digital transformation in healthcare. By adopting advanced technologies, the collaborators are setting a new standard for efficiency and patient care. This initiative not only addresses current challenges but also paves the way for future innovations in healthcare delivery. 

Ready to Transform Your Healthcare Organization? 

This prior authorization solution using Health Cloud and AI is a significant step toward a more efficient and patient-centric healthcare system. As we continue to navigate the complexities of healthcare, such partnerships highlight the potential of technology to drive meaningful change. 

At Perficient, we are excited to see how these advancements will shape the future of healthcare and are committed to supporting our clients in their digital transformation journeys. 

Whether you’re looking to enhance patient engagement, streamline operations, or leverage data for better decision-making, we’re here to guide you every step of the way. From initial strategy to implementation and ongoing support, Perficient is committed to helping you achieve your healthcare transformation goals. 

Don’t let outdated systems hold your organization back. Take the first step towards a more efficient, patient-centric future. Contact Perficient today to discover how we can help you harness the power of Salesforce and other leading technologies to revolutionize your healthcare delivery. 

Let’s work together to create healthier communities and better patient outcomes. Reach out now to start your transformation journey with Perficient. 

 

AI Regulations for Financial Services: Japan
https://blogs.perficient.com/2024/11/19/ai-regulations-for-financial-services-japan/ Tue, 19 Nov 2024

Japan has yet to pass a law or regulation specifically directed at regulating the use of AI at financial services firms. For now, the Japanese government and regulators are taking an indirect approach, supporting a policy goal of prioritizing innovation while minimizing foreseeable harms.

On April 19, 2024, the Japanese government published new “AI Guidelines for Business Version 1.0” (the “Guidelines”). While not legally binding, the Guidelines are expected to support and induce voluntary efforts by developers, providers, and business users of AI systems through compliance with generally recognized AI principles and are similar to the EU regulations discussed previously in that they propose a risk-based approach.

As noted on page 26 of the English version of the Guidelines, the Guidelines promote “agile governance” where “multiple stakeholders continuously and rapidly run a cycle consisting of environment and risk analysis, goal setting, system design, operation and then evaluation in various governance systems in companies, regulations, infrastructure, markets, social codes and the like”.

In addition to the Guidelines, an AI Strategy Council, a government advisory body, was established to consider approaches for maximizing the potential of AI while minimizing the potential risks to the financial system. On May 22, 2024, the Council submitted draft discussion points concerning the advisability and potential scope of any future regulation.

Finally, a working group in the Japanese Parliament has proposed the first specific Japanese regulation of AI, “the Basic Act on the Advancement of Responsible AI,” which proposes a hard law approach to regulate certain generative AI foundation models. If passed as-is, the Japanese government would designate the AI systems and developers that are subject to regulation; impose obligations on them with respect to the vetting, operation, and output of the systems; and require periodic reports concerning AI systems.

The proposed obligations would provide a general framework, while industry groups for financial services firms would work with the Japanese Financial Services Agency (“JFSA”) to establish the specific standards by which firms would comply. It is further thought that the government would have the authority to monitor AI developers and impose fines and penalties for violations of the reporting obligations and/or compliance with the substance of the law.

AI Regulations for Financial Services: South Korea and the UK
https://blogs.perficient.com/2024/11/14/ai-regulations-for-financial-services-south-korea-and-the-uk/ Thu, 14 Nov 2024

South Korea


In South Korea, efforts to enact an AI framework act have been underway since 2020. Nine different bills have been proposed, but none has been passed into law. While the Personal Information Protection Act (PIPA) includes provisions related to AI, such as data subjects’ rights concerning automated decision-making, comprehensive AI legislation has yet to be enacted.

United Kingdom

Bank executives with London trading desks in Canary Wharf should remember that, post-Brexit, the United Kingdom is not in the EU, so the EU AI Act does not apply there. The Financial Conduct Authority (FCA) regulates artificial intelligence (AI) in the UK by focusing on identifying and mitigating risks rather than prohibiting specific technologies. The approach of the FCA and the other UK regulators to AI is based on five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Like Japan and South Korea, the UK has yet to pass a law or regulation specifically directed at regulating the use of AI at financial services firms.

AI Regulations for Financial Services: European Union
https://blogs.perficient.com/2024/11/12/ai-regulations-for-financial-services-european-union/ Tue, 12 Nov 2024

EU Regulations

European Union lawmakers signed the Artificial Intelligence (“AI”) Act in June 2024. The AI Act, the first binding horizontal AI regulation worldwide, sets a common framework for the use and supply of AI systems by financial institutions in the European Union.

The new act classifies AI systems, with different requirements and obligations tailored to a ‘risk-based approach’. In our opinion, this risk-based system will be very familiar to bankers who remember the original rollout and asset-based classification system required by regulators under the original Basel risk-based capital requirements of the early 1990s. AI systems presenting ‘unacceptable’ risks are outright prohibited, regardless of controls. A wide range of ‘high-risk’ AI systems that can have a detrimental impact on people’s health, safety or fundamental rights are permitted, but subject to a set of requirements and obligations to gain access to the EU market. AI systems posing limited risks because of their lack of transparency are subject to information and transparency requirements, while AI systems presenting ‘minimal risks’ are not subject to further obligations.

The regulation also lays down specific rules for general-purpose AI (GPAI) models, with more stringent requirements for GPAI models with “high-impact capabilities” that could pose a systemic risk and have a significant impact on the EU marketplace. The AI Act was published in the EU’s Official Journal on July 12, 2024, and entered into force on August 1, 2024.


The EU AI Act adopts a risk-based approach, classifying AI systems into several risk categories with different degrees of regulation applying to each.

Prohibited AI practices

The final text prohibits a wider range of AI practices than originally proposed by the Commission because of their harmful impact:

  • AI systems using subliminal or manipulative or deceptive techniques to distort people’s or a group of people’s behavior and impair informed decision making, leading to significant harm;
  • AI systems exploiting vulnerabilities due to age, disability, or social or economic situations, causing significant harm;
  • Biometric categorization systems inferring race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation;
  • AI systems evaluating or classifying individuals or groups based on social behavior or personal characteristics, leading to detrimental or unfavorable treatment in unrelated contexts, or treatment that is unjustified or disproportionate to their behavior;
  • AI systems assessing the risk of individuals committing criminal offences based solely on profiling or personality traits;
  • AI systems creating or expanding facial recognition databases through untargeted scraping from the Internet or CCTV footage; and
  • AI systems inferring emotions in workplaces or educational institutions.

High-risk AI systems

The AI act identifies a number of use cases in which AI systems are to be considered high-risk because they can potentially create an adverse impact on people’s health, safety or their fundamental rights.

  • The risk classification is based on the intended purpose of the AI system. The function performed by the AI system and the specific purpose and modalities for which the system is used are key to determining whether an AI system is high-risk. High-risk AI systems can be safety components of products covered by sectoral EU law (e.g. medical devices) or AI systems that, as a matter of principle, are classified as high-risk when used in specific areas listed in Annex III of the regulation. The Commission is tasked with maintaining an EU database of the high-risk AI systems listed in that annex.
  • A new test has been enshrined at the Parliament’s request (‘filter provision’), according to which AI systems will not be considered high risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. However, an AI system will always be considered high risk if the AI system performs profiling of natural persons.
  • Providers of such high-risk AI systems will have to run a conformity assessment procedure before their products can be sold and used in the EU. They will need to comply with a range of requirements including testing, data training and cybersecurity and, in some cases, will have to conduct a fundamental rights impact assessment to ensure their systems comply with EU law. The conformity assessment should be conducted either based on internal control (self-assessment) or with the involvement of a notified body (e.g. biometrics). Compliance with European harmonized standards to be developed will grant high-risk AI systems providers a presumption of conformity. After such AI systems are placed in the market, providers must implement post-market monitoring and take corrective actions if necessary.

Transparency risk

Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception, irrespective of whether they qualify as high-risk AI systems or not. Such systems are subject to information and transparency requirements. Users must be made aware that they interact with chatbots. Deployers of AI systems that generate or manipulate image, audio or video content (i.e. deep fakes), must disclose that the content has been artificially generated or manipulated except in very limited cases (e.g. when it is used to prevent criminal offences). Providers of AI systems that generate large quantities of synthetic content must implement sufficiently reliable, interoperable, effective and robust techniques and methods (such as watermarks) to enable marking and detection that the output has been generated or manipulated by an AI system and not a human. Employers who deploy AI systems in the workplace must inform the workers and their representatives.

Minimal risks

Systems presenting minimal risk for people (e.g. spam filters) are not subject to further obligations beyond currently applicable legislation (e.g. GDPR).

General-purpose AI (GPAI)

The regulation provides specific rules for general purpose AI models and for general-purpose AI models that pose systemic risks.

GPAI system transparency requirements

All GPAI models will have to draw up and maintain up-to-date technical documentation and make information and documentation available to downstream providers of AI systems. All providers of GPAI models must implement a policy to respect EU copyright law, including using state-of-the-art technologies (e.g. watermarking) to identify and honor rights reservations under the text- and data-mining exceptions of the Copyright Directive. In addition, GPAI providers must draw up and make publicly available a sufficiently detailed summary of the content used to train their GPAI models, according to a template provided by the AI Office. Financial institutions headquartered outside the EU will have to appoint a representative in the EU. However, AI models made accessible under a free and open-source license will be exempt from some of the obligations (e.g., disclosure of technical documentation), given that they have, in principle, positive effects on research, innovation and competition.

Systemic-risk GPAI obligations

GPAI models with ‘high-impact capabilities’ could pose a systemic risk and have a significant impact due to their reach and their actual or reasonably foreseeable negative effects (on public health, safety, public security, fundamental rights, or society as a whole). GPAI providers must therefore notify the European Commission if their model is trained using a total computing power exceeding 10^25 FLOPs (i.e. cumulative floating-point operations over the course of training, not operations per second). When this threshold is met, the model is presumed to be a GPAI model posing systemic risks. In addition to the transparency and copyright requirements applying to all GPAI models, providers of systemic-risk GPAI models are required to continuously assess and mitigate the risks they pose and to ensure cybersecurity protection. That requires keeping track of, documenting, and reporting serious incidents to regulators and implementing corrective measures.
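The 10^25-FLOP threshold can be illustrated with a back-of-the-envelope calculation. The sketch below uses the common 6 × parameters × tokens approximation for total training compute, which is an estimating convention and not part of the Act itself, and the model sizes are illustrative assumptions rather than figures for any real system.

```python
# Per the EU AI Act, exceeding this cumulative training compute triggers a
# presumption that a GPAI model poses systemic risk.
EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # total training FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute using the common 6 * N * D estimate
    (N = parameter count, D = training tokens)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= EU_SYSTEMIC_RISK_THRESHOLD

# Illustrative: a 175B-parameter model trained on 300B tokens lands around
# 3.15e23 FLOPs, well under the 1e25 threshold.
flops = estimated_training_flops(175e9, 300e9)
print(f"{flops:.2e}")                        # 3.15e+23
print(presumed_systemic_risk(175e9, 300e9))  # False
```

Under this rough estimate, only models trained with roughly two orders of magnitude more compute than that example would trip the notification requirement.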

Codes of practice and presumption of conformity

GPAI model providers will be able to rely on codes of practice to demonstrate compliance with the obligations set under the act. By means of implementing acts, the Commission may decide to approve a code of practice and give it a general validity within the EU, or alternatively, provide common rules for implementing the relevant obligations. Compliance with a European standard grants GPAI providers the presumption of conformity. Providers of GPAI models with systemic risks who do not adhere to an approved code of practice will be required to demonstrate adequate alternative means of compliance.

AI Regulations for Financial Services: SEC
https://blogs.perficient.com/2024/11/06/ai-regulations-for-financial-services-sec/ Wed, 06 Nov 2024

SEC

The Securities and Exchange Commission (SEC) issued a proposed rule in July 2023 to address conflicts of interest associated with broker-dealers’ and investment advisers’ use of predictive data analytics (“PDA”) and similar technologies, including AI.

The proposed rules would require that broker-dealers and investment advisers:

  • evaluate the PDA and similar technologies the firms use, and identify and eliminate or neutralize any related conflicts of interest that could place the firm’s interests ahead of those of its customers or clients;
  • adopt, implement and maintain written policies and procedures to come into compliance with the proposed rules; and
  • comply with recordkeeping requirements by maintaining records of evaluations performed on PDA, including when the technology was implemented and materially modified, the date(s) of testing, and any actual or potential conflicts of interest identified.

Other efforts by the SEC suggest that it is not content to wait until final rules governing AI are in effect before addressing problems and risks it perceives related to AI technologies. Around the time the SEC proposed its rules on PDA, the SEC’s Division of Examinations launched an AI-related sweep, asking firms how they are using AI and requesting, among other things, a description of their models and techniques, the source and providers of their data, and internal reports of any incidents where AI use raised regulatory, ethical, or legal issues. The SEC has also requested copies of firms’ AI compliance policies and procedures, contingency plans in case of AI system failure or inaccuracies, a sample of the client profile documents used by AI systems to understand clients’ risk tolerance and investment objectives, and all disclosure and marketing documents that describe the firm’s use of AI to clients. In addition, the SEC’s Division of Enforcement has reported that it has AI-focused investigations underway.

Broker-dealers and investment advisory firms utilize AI in various ways, such as to forecast the price movements of certain investment products, program robo-advisers to assist in automated planning and investment services, address basic client questions via virtual assistants, aid in risk management, and bolster compliance efforts by enhancing surveillance capabilities.

SEC Chairman Gensler foresees potential systemic risks because the use of AI in the financial sector eventually may be driven by a small handful of foundational models, creating a “monoculture” in which many market participants rely on the same dataset or model. In that event, Chairman Gensler posits, various AI models become more likely to produce similar outputs, and those relying on the outputs become more likely to make similar financial decisions, concentrating risk. As with the program trading of the 1980s that contributed to the 1987 stock market crash now known as Black Monday: if all the programs say to sell, who is left to buy?

AI Regulations for Financial Services: US Treasury Department https://blogs.perficient.com/2024/11/01/ai-regulations-for-financial-services-us-treasury-department/ https://blogs.perficient.com/2024/11/01/ai-regulations-for-financial-services-us-treasury-department/#respond Fri, 01 Nov 2024 14:15:48 +0000 https://blogs.perficient.com/?p=370887

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the associated risk and regulatory trends, not only in the US but around the world where their firms are likely to have investment and trading operations.

In November 2022, the Treasury Department explored opportunities and risks related to the use of AI in its report assessing the impact of new entrant non-bank firms on competition in consumer finance markets, for which the department conducted extensive outreach. Among other findings, the report found that innovations in AI were powering many non-bank firms’ capabilities and product and service offerings. (Think back to when embedded finance and embedded financial services were deemed the future of banking in America.) The same 2022 report noted that firms’ use of AI may help expand the provision of financial products and services to consumers, particularly in the credit space. It also found that, in deploying AI models and tools, firms use a greater amount and variety of data than in the past, leading to an unprecedented demand for consumer data that presents new data privacy and surveillance risks.

Additionally, the report identified concerns related to bias and discrimination in the use of AI in financial services, including challenges with explainability. Explainability is the ability to understand a model’s output and decisions, or how the model establishes relationships based on the model input. Limited explainability makes it harder for AI developers and users to ensure compliance with fair lending requirements; models may perpetuate discrimination by using and learning from data that reflect and reinforce historical biases; and AI tools may expand firms’ capabilities to inappropriately target specific individuals or communities (e.g., low- to moderate-income communities, communities of color, women, and rural, tribal, or disadvantaged communities).

The report concluded that new entrant non-bank firms and the AI innovations they were utilizing may be able to help improve financial services, but that further steps should be considered to monitor and address risks to consumers, foster market integrity, and help ensure the safety and soundness of the financial system.

The following year, in December 2023, the US Treasury Department issued an RFI that sought input to inform its development of a national financial inclusion strategy; that RFI included questions related to the use of technologies such as AI in the provision of consumer financial services.

In March 2024, in response to requirements from Executive Order 14110, the 2023 executive order on AI, the Department of the Treasury’s Office of Cybersecurity and Critical Infrastructure Protection issued a report entitled “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.” The report identified opportunities and challenges that AI presents to the security and resiliency of the financial services sector and outlined a series of next steps to address AI-related operational risk, cybersecurity, and fraud challenges. Perficient’s Financial Services Risk and Regulatory Center of Excellence consultants noted while reading that report that the “Next Steps: Challenges & Opportunities” chapter contains a small section observing that “Regulation of AI in Financial Services Remains an Open Question.”

Two months later, in May 2024, the US Treasury Department issued its 2024 National Strategy for Combatting Terrorist and Other Illicit Financing (National Illicit Finance Strategy), noting that innovations in AI, including machine learning and large language models such as generative AI, have significant potential to strengthen anti-money laundering/countering the financing of terrorism (AML/CFT) compliance by helping financial institutions analyze large amounts of data and more effectively identify illicit finance patterns, risks, trends, and typologies. One of the objectives identified in the National Illicit Finance Strategy is industry outreach to improve Treasury’s understanding of how financial institutions are using AI to comply with applicable AML/CFT requirements.

In June 2024, the US Treasury issued a Request for Information (RFI) on the uses, opportunities, and risks presented by developments and applications of artificial intelligence within the financial sector. The Treasury noted a particular desire to gather information from a broad set of stakeholders in the financial services ecosystem, including those providing, facilitating, and receiving financial products and services, as well as consumer and small business advocates, academics, nonprofits, and others.

The Treasury Department noted that AI provides opportunities for financial institutions to improve efficiency, reduce costs, strengthen risk controls, and expand impacted entities’ access to financial products and services. At the same time, the use of AI in financial services can pose a variety of risks for impacted entities, depending on its application. Treasury was interested in perspectives on the actual and potential benefits and opportunities of AI for financial institutions and impacted entities, as well as views on the optimal methods to mitigate risks. In particular, the Treasury Department expressed interest in perspectives on bias and potential discrimination, on privacy risks, and on the extent to which impacted entities are protected from and informed about the potential harms arising from financial institutions’ use of AI in financial services.

Written comments and information were requested on or before August 12, 2024, but the results had not been published as of this writing.

AI Regulations for Financial Services: FinCEN https://blogs.perficient.com/2024/10/30/ai-regulations-for-financial-services-fincen/ https://blogs.perficient.com/2024/10/30/ai-regulations-for-financial-services-fincen/#respond Wed, 30 Oct 2024 14:05:03 +0000 https://blogs.perficient.com/?p=370909

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the associated risk and regulatory trends, not only in the US but around the world where their firms are likely to have investment and trading operations.

FinCEN

In 2018, Treasury’s Financial Crimes Enforcement Network (FinCEN) and the federal banking agencies (FDIC, Federal Reserve, OCC, and NCUA) issued a Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing, which encouraged banks to use existing tools or adopt new technologies, including AI, to identify and report money laundering, terrorist financing, and other illicit financial activity.

Pursuant to requirements and authorities outlined in the Anti-Money Laundering Act of 2020 (the AML Act), FinCEN is also taking several steps to create the necessary regulatory and examination environment to support AML/CFT-related innovation that can enhance the effectiveness and efficiency of the Bank Secrecy Act (BSA). In particular, Section 6209 of the AML Act requires the Secretary of the Treasury to issue a rule specifying standards for testing technology and related technology internal processes designed to facilitate effective compliance with the BSA by financial institutions, and these standards may include an emphasis on innovative approaches to compliance, such as the use of machine learning.

In April 2021, FinCEN and the FDIC, Federal Reserve, NCUA, and OCC issued a Statement and a separate Request for Information on Model Risk Management. As part of the regulatory process, FinCEN may consider how financial institutions are currently using innovative approaches to compliance, such as machine learning and AI, and the potential benefits and risks of specifying standards for those technologies.

In February 2023, FinCEN hosted a FinCEN Exchange that brought together law enforcement, financial institutions, and other private sector and government entities to discuss how AI is used for monitoring and detecting illicit financial activity. FinCEN also regularly engages financial institutions on the topic through the Bank Secrecy Act Advisory Group (BSAAG) Subcommittee on Innovation and Technology and the BSAAG Subcommittee on Information Security and Confidentiality.

3 Key Insurance Takeaways From InsureTech Connect 2024 https://blogs.perficient.com/2024/10/29/3-key-insurance-takeaways-from-insuretech-connect-2024/ https://blogs.perficient.com/2024/10/29/3-key-insurance-takeaways-from-insuretech-connect-2024/#respond Tue, 29 Oct 2024 16:49:00 +0000 https://blogs.perficient.com/?p=371156

The 2024 InsureTech Connect (ITC) conference was truly exhilarating, with key takeaways impacting the insurance industry. Each year, it continues to improve, offering more relevant content, valuable industry connections, and opportunities to delve into emerging technologies.

This year’s event was no exception, showcasing the importance of personalization for the customer, tech-driven relationship management, and AI-driven underwriting processes. The industry is constantly evolving, and ITC demonstrated how everyone within the insurance industry is aligning around the same purpose.

The Road Ahead: Transformative Trends

As I reflect on ITC and my experience, it is evident the progression of the industry is remarkable. Here are a few key takeaways from my perspective that will shape our industry roadmap:

1. Personalization at Scale

We’ve spoken for many years about the need to drive greater personalization across interactions in our industry. We know that customers engage with companies that demonstrate authentic knowledge of their relationship. This year, we saw great examples of how companies are treating personalization not as an incremental initiative but embedding it at key moments in the insurance experience, particularly underwriting and claims.

For example, New York Life highlighted how personalization is driving generational loyalty. We’ve been working with industry-leading insurers to help drive personalization across the distribution network, from carriers to agents to the final policyholder.

Success In Action: Our client wanted to integrate better contact center technology to improve internal processes and allow for personalized, proactive messaging to clients. We implemented Twilio Flex and leveraged its outbound notification capabilities to support customized messaging while also integrating their cloud-based outbound dialer and workforce management suite. The insurer now has optimized agent productivity and agent-customer communication, as well as newfound access to real-time application data across the entire contact center.
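The kind of personalized, proactive outbound messaging described above can be sketched against Twilio’s REST Messages API. This is a minimal illustration under stated assumptions, not the client’s implementation: the policyholder fields, message template, and phone numbers are invented, and a real Flex deployment layers in workforce management, dialer integration, and consent handling.

```python
# Hypothetical sketch: a personalized outbound policy notification.
# The Twilio call mirrors the real REST API (twilio.rest.Client), but the
# policyholder fields and message template are illustrative assumptions.

def build_renewal_notice(policyholder: dict) -> str:
    """Render a personalized renewal reminder from CRM fields."""
    return (
        f"Hi {policyholder['first_name']}, your {policyholder['policy_type']} "
        f"policy renews on {policyholder['renewal_date']}. "
        "Reply HELP to reach your agent."
    )

def send_notice(client, from_number: str, policyholder: dict):
    """Send the notice through Twilio's Messages API (requires credentials)."""
    return client.messages.create(
        body=build_renewal_notice(policyholder),
        from_=from_number,
        to=policyholder["phone"],
    )

if __name__ == "__main__":
    demo = {"first_name": "Ana", "policy_type": "auto",
            "renewal_date": "2025-01-15", "phone": "+15555550100"}
    print(build_renewal_notice(demo))
    # In production: from twilio.rest import Client
    # send_notice(Client(ACCOUNT_SID, AUTH_TOKEN), "+15555550123", demo)
```

Separating message construction from delivery keeps the personalization logic testable without live credentials.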

2. Holistic, Well-Connected Distribution Network

Insurance has always had a complex distribution network across platforms, partnerships, carriers, agents, producers, and more. Leveraging technology to manage these relationships opens opportunities to gain real-time insights and implement effective strategies, fostering holistic solutions and moving away from point solutions. Managing this complexity and maximizing the value of this network requires a good business and digital transformation strategy.

Our proprietary Envision process has been leading the way to help carriers navigate this complex system with proprietary strategy tools, historical industry data, and best practices.

3. Artificial Intelligence (AI) for Process Automation

Not surprisingly, AI permeated many of the presentations and demos across the sessions. AI offers insurers unique decisioning throughout the value chain to create differentiation. It was evident that while we often talk about AI as an overarching technology, the use cases were more point solutions across the insurance value chain. Moreover, AI is not here to replace the human, but rather to assist the human. By automating mundane process activities, mindshare and human capital can be invested in more value-added activity and the critical problems that improve customer experience. Because these point solutions are spread across many disparate groups, organizational mandates must demand the safe and ethical use of AI models.

Our PACE framework provides a holistic approach to responsibly operationalize AI across an organization. It empowers organizations to unlock the benefits of AI while proactively addressing risks.

Our industry continues to evolve in delivering its noble purpose: to protect individuals’ and businesses’ property, liability, and financial obligations. Technology is certainly an enabler of this purpose, but transformation must be managed to be effective.

Perficient Is Driving Success and Innovation in Insurance

Want to know the now, new, and next of digital transformation in insurance? Contact us and let us help you meet the challenges of today and seize the opportunities of tomorrow in the insurance industry.

AI Regulations for Financial Services: CFPB https://blogs.perficient.com/2024/10/28/ai-regulations-for-financial-services-cfpb/ https://blogs.perficient.com/2024/10/28/ai-regulations-for-financial-services-cfpb/#respond Mon, 28 Oct 2024 14:30:12 +0000 https://blogs.perficient.com/?p=370894

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the associated risk and regulatory trends, not only in the US but around the world where their firms are likely to have investment and trading operations.

CFPB

On June 24, 2024, the Consumer Financial Protection Bureau (CFPB) approved a new rule to address the current and future applications of complex algorithms and artificial intelligence used to estimate the value of a home.

As noted by the CFPB, when buying or selling a home, an accurate home valuation is critical. Mortgage lenders use this collateral valuation to determine how much they will lend on a property. On popular real estate websites, many people even track their own home’s value generated from these AI-driven appraisal tools.

The CFPB rule requires companies that use these algorithmic appraisal tools to:

  1. put safeguards into place to ensure a high level of confidence in the home value estimates;
  2. protect against the manipulation of data;
  3. avoid conflicts of interest; and
  4. comply with applicable nondiscrimination laws.
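As a loose illustration of the first two requirements, one possible confidence safeguard is to flag automated valuation model (AVM) estimates that stray too far from recent comparable sales. This is a sketch under stated assumptions: the 20% threshold and the field names are invented for illustration and are not part of the CFPB rule text.

```python
# Illustrative sketch only: flag an AVM home-value estimate for manual
# review when it deviates from the median of comparable sales by more
# than an assumed threshold. Not a regulatory standard.
from statistics import median

def review_avm_estimate(estimate: float, comparable_sales: list[float],
                        max_deviation: float = 0.20) -> dict:
    """Compare the estimate to the median comparable sale and flag
    deviations larger than max_deviation (default 20%)."""
    benchmark = median(comparable_sales)
    deviation = abs(estimate - benchmark) / benchmark
    return {
        "benchmark": benchmark,
        "deviation": round(deviation, 3),
        "needs_review": deviation > max_deviation,
    }

print(review_avm_estimate(450_000, [400_000, 410_000, 395_000]))
```

A check like this does not by itself satisfy the rule, but it shows where a human-review gate could sit in an AVM pipeline.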

In addition to its own rule, the CFPB highlighted to the OCC, in the latter’s 2024 Request for Information discussed below, a number of CFPB publications and guidance documents regarding consumer protection issues that may be implicated by the use of AI, including:

  • Chatbots. Chatbots and other automated customer service technologies built on large language models may:
    • provide inaccurate information and increase the risk of unfair, deceptive, and abusive acts and practices in violation of the Consumer Financial Protection Act (CFPA);
    • fail to recognize when consumers invoke statutory rights under Regulation E and Regulation Z; and
    • raise privacy and security risks, resulting in increased compliance risk for institutions.
  • Fair lending. Lenders are prohibited from discriminating and must provide consumers with information regarding adverse action taken against them, as required pursuant to the Equal Credit Opportunity Act (ECOA). The CFPB noted that courts have already held that an institution’s decision to use AI as an automated decision-making tool can itself be a policy that produces bias under the disparate impact theory of liability.
  • Fraud screening. The CFPB stresses that fraud screening tools, such as those offered by third-party vendors that generate fraud risk scores, must be offered in compliance with ECOA and the CFPA. In addition, because such screening is often used to assess creditworthiness by determining who gets offered or approved for a financial product or at a special rate, institutions that compile and provide such information are likely subject to the requirements of the Fair Credit Reporting Act.
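To make the adverse action requirement concrete, here is a hedged sketch of how a lender might derive the “principal reasons” for a denial from a simple linear scoring model. The features, weights, and cutoff are invented for illustration; a production model would need a validated reason-code methodology and fair lending review.

```python
# Illustrative only: rank the features that pulled a credit score down
# the most, as candidate "principal reasons" for an ECOA adverse action
# notice. Weights, features, and cutoff are invented, not a real model.

WEIGHTS = {"payment_history": 0.40, "utilization": -0.35,
           "credit_age_years": 0.15, "recent_inquiries": -0.10}
CUTOFF = 0.50

def score(applicant: dict) -> float:
    """Linear score: weighted sum of the applicant's feature values."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def principal_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Features with the most negative contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions, key=contributions.get)[:top_n]

applicant = {"payment_history": 0.6, "utilization": 0.9,
             "credit_age_years": 0.2, "recent_inquiries": 3.0}
if score(applicant) < CUTOFF:
    print("Adverse action reasons:", principal_reasons(applicant))
```

Because the model is linear, per-feature contributions decompose exactly; more complex models require attribution techniques, which is where the explainability challenges discussed above arise.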
Perficient Named in Forrester’s App Modernization and Multicloud Managed Services Landscape, Q4 2024 https://blogs.perficient.com/2024/10/25/perficient-in-forresters-app-modernization-and-multicloud-managed-services-landscape-q4-2024/ https://blogs.perficient.com/2024/10/25/perficient-in-forresters-app-modernization-and-multicloud-managed-services-landscape-q4-2024/#respond Fri, 25 Oct 2024 12:21:43 +0000 https://blogs.perficient.com/?p=371037

As new technologies become available within the digital space, businesses must adapt quickly by modernizing their legacy systems and harnessing the power of the cloud to stay competitive. Forrester’s 2024 report recognizes 42 notable providers, and we’re proud to announce that Perficient is among them.

We believe our inclusion in Forrester’s Application Modernization and Multicloud Managed Services Landscape, Q4 2024 reflects our commitment to evolving enterprise applications and managing multicloud environments to enhance customer experiences and drive growth in a complex digital world.

With the demand for digital transformation growing rapidly, this landscape provides valuable insights into what businesses can expect from service providers, how different companies compare, and the options available based on provider size and market focus.

Application Modernization and Multicloud Managed Services

Forrester defines application modernization and multicloud managed services as:

“Services that offer technical and professional support to perform application and system assessments, ongoing application multicloud management, application modernization, development services for application replacements, and application retirement.”

According to the report,

“Cloud leaders and sourcing professionals implement application modernization and multicloud managed services to:

  • Deliver superior customer experiences.
  • Gain access to technical and transformational skills and capabilities.
  • Reduce costs associated with legacy technologies and systems.”

By focusing on application modernization and multicloud management, Perficient empowers businesses to deliver superior customer experiences through agile technologies that boost user satisfaction. We provide clients with access to cutting-edge technical and transformational skills, allowing them to stay ahead of industry trends. Our solutions are uniquely tailored to reduce costs associated with maintaining legacy systems, helping businesses optimize their IT budgets while focusing on growth.

Focus Areas for Modernization and Multicloud Management

Perficient has honed its expertise in several key areas that are critical for organizations looking to modernize their applications and manage multicloud environments effectively. As part of the report, Forrester asked each provider included in the Landscape to select the top business scenarios for which clients choose them, and from there determined the extended business scenarios that highlight differentiation among the providers. Of those extended application modernization and multicloud services business scenarios, Perficient self-reported three key scenarios for which clients work with us:

  • Infrastructure Modernization: We help clients transform their IT infrastructure to be more flexible, scalable, and efficient, supporting the rapid demands of modern applications.
  • Cloud-Native Development Execution: Our cloud-native approach enables new applications to leverage cloud environments, maximizing performance and agility.
  • Cloud Infrastructure “Run”: We provide ongoing support for cloud infrastructure, keeping applications and systems optimized, secure, and scalable.

Delivering Value Through Innovation

Perficient is listed among large consultancies with an industry focus in financial services, healthcare, and the manufacturing/production of consumer products. Additionally, our geographic presence in North America, Latin America, and the Asia-Pacific region was noted.

We believe that Perficient’s inclusion in Forrester’s report serves as another milestone in our mission to drive digital innovation for our clients across industries. We are proud to be recognized among notable providers and look forward to continuing to empower our clients to transform their digital landscapes with confidence. For more information on how Perficient can help your business with application modernization and multicloud managed services, contact us today.

Download the Forrester report, The Application Modernization And Multicloud Managed Services Landscape, Q4 2024, to learn more (link to report available to Forrester subscribers and for purchase).

AI Regulations for Financial Services: OCC https://blogs.perficient.com/2024/10/24/ai-regulations-for-financial-services-occ/ https://blogs.perficient.com/2024/10/24/ai-regulations-for-financial-services-occ/#respond Thu, 24 Oct 2024 20:56:51 +0000 https://blogs.perficient.com/?p=370915

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the associated risk and regulatory trends, not only in the US but around the world where their firms are likely to have investment and trading operations.

OCC

In December 2023, the Office of the Comptroller of the Currency (OCC) classified AI as an emerging risk to the banking industry in an industry report. As the OCC noted at the time, advances in computing capacity, increased data availability, and improvements in analytical techniques have significantly expanded opportunities for banks to leverage AI for risk management and operational purposes.

The utilization of AI has seen tremendous growth over the last few years, including:

  • Customer chatbots
    • Customer chatbots serve to streamline operations by reducing the need for extensive phone center staffing. This mitigates the risk of customer service representatives providing incorrect information and ensures compliance with regulatory disclosures, ultimately enhancing the overall customer experience while reducing costs.
  • Fraud detection
    • AI-driven fraud detection proves instrumental in curtailing the time required for addressing stolen debit and credit cards, thereby minimizing losses resulting from identity theft.
  • Credit scoring
    • Credit scoring AI enhances credit accessibility for deserving customers who might otherwise be overlooked by traditional credit algorithms. By continuously improving and adapting over time, AI-driven credit scoring ensures a fairer assessment and broader availability of credit.

However, offsetting the positive aspects, the OCC cautioned that risks can still arise, such as:

  • Lack of explainability
  • Reliance on large volumes of data
  • Potential bias
  • Privacy concerns
  • Third-party risk
  • Cybersecurity risks
  • Consumer protection concerns

The OCC report emphasized the importance of banks identifying, measuring, monitoring, and controlling these risks associated with AI, applying the same standards as using any other technology.

While existing guidance may not explicitly address AI, the OCC maintains that safety and soundness standards and compliance requirements remain applicable. The risk management principles outlined in OCC issuances provide a solid framework for banks implementing AI to operate safely, soundly, and fairly.

On June 6, 2024, Acting Comptroller of the Currency Michael J. Hsu addressed the 2024 Conference on Artificial Intelligence (AI) and Financial Stability, providing critical insight into the OCC’s thinking on AI. Hsu discussed the systemic risk implications of AI in banking and finance using a “tool or weapon” approach.

In his speech, Hsu emphasized that the rapid adoption of technology during periods of change, without corresponding adjustment in controls, allows risks to grow undetected until they culminate in financial crises. Learning from history, he referenced the lack of regulatory controls in derivatives and financial engineering before the 2008 financial crisis, and more recently, the unregulated growth of cryptocurrencies leading to the “Crypto Winter” of 2022.

To avoid repeating that rather dire history, Hsu advocated for regulators and the industry to proactively identify points where growth and development should pause to ensure responsible innovation and build trust. He argued that well-designed checkpoints could help balance the need for innovation with the safeguards necessary to prevent runaway growth.

Risk Management Control Gate Graphic

The evolution of electronic trading provides a valuable case study to consider. Traditionally, trading was manual. Market making eventually transitioned to phone-based systems, with computers providing real-time information, valuations and forecasts for traders. In time, computers took on a more active role, not only providing information but also assisting and guiding traders’ actions, supporting faster execution and more complex strategies. Eventually, algorithms took over entirely, automatically buying and selling securities according to pre-determined instructions without the need for human intervention.

Using the evolution of electronic trading as a reference, Hsu outlined three phases in its history:

  1. Inputs: Computers provided information for human traders to consider.
  2. Co-pilots: Software supported and enabled traders to operate more efficiently and swiftly.
  3. Agents: Computers executed trades autonomously based on algorithms programmed by software developers.

Hsu highlighted that each phase requires different risk management strategies and controls. For example, mitigating the risk of flash crashes, which are exacerbated by algorithmic trading, demands more sophisticated controls than those needed when traders simply receive information on a computer screen and execute trades manually.
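A minimal sketch of the kind of pre-trade control the more automated phases demand is a price collar that blocks any algorithmic order priced too far from the last trade. The 5% band and the order fields here are illustrative assumptions, not a regulatory standard.

```python
# Illustrative pre-trade control: reject algorithmic orders priced
# outside a collar around the last trade price. The band width is an
# assumption for illustration only.

def check_price_collar(order_price: float, last_trade_price: float,
                       band: float = 0.05) -> bool:
    """Return True if the order price lies within +/- band of the last trade."""
    lower = last_trade_price * (1 - band)
    upper = last_trade_price * (1 + band)
    return lower <= order_price <= upper

# An order 10% below the last trade is blocked before reaching the market;
# an order 2% below passes the gate.
print(check_price_collar(90.0, 100.0))
print(check_price_collar(98.0, 100.0))
```

Controls of this shape sit between the algorithm and the market, which is exactly the "gate" placement Hsu describes: the check exists regardless of how the order was generated.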

Artificial Intelligence (AI) is following a similar evolutionary path: initially producing inputs for human decision-making, then acting as a co-pilot to enhance human actions, and finally becoming an agent that makes decisions independently on behalf of humans. As AI progresses from an input provider to a co-pilot and ultimately to an autonomous agent, the risks and potential negative consequences of weak controls increase significantly.

For banks interested in adopting AI, establishing clear and effective gates between each phase can help ensure that innovations are beneficial rather than harmful. Before advancing to the next phase of development, banks should ensure that proper controls are in place and accountability is clearly established for the new phase being entered.

Since Chairman Hsu’s remarks, in early October 2024, the OCC began a solicitation of academic research papers on the use of artificial intelligence in banking and finance for submission by December 15, 2024.
