Our risk and regulatory compliance experts, Carl Aridas and Chandni Patel, have just returned from XLoD 2024 in New York. The event brought together the world’s top financial institutions and regulators to discuss the future of non-financial risk and control. With over 500 industry professionals in attendance, it showcased the unwavering commitment to practical innovation within the field.
Strategic Utilization of Intelligent Automation and GenAI
Upsides of Generative Artificial Intelligence
The executives that Chandni and Carl spoke with identified two main benefits of AI.
First, it can automate controls and detect fraud patterns earlier and more proactively, strengthening the control environment without significant additional headcount. Even if not fully effective at the outset, generative AI software will gradually reduce false positives, improving the value of risk findings over time.
Second, AI can automate many of the mundane, low-value tasks performed by risk staff, freeing them to focus on higher-value work. Many risk practitioners at the conference noted this potential to enhance efficiency and productivity.
Downsides of AI
One memorable demonstration featured a video, created in roughly 30 minutes for only $60, showing an executive fluently speaking Italian in his own voice. After the 45-second clip ran, the risk executive explained that he does not speak Italian and that the voice was merely reciting an Italian weather forecast. The demonstration raised concerns about the potential misuse of such technology, highlighting the need for robust defenses against fraud such as unauthorized wire transfers.
Additionally, a simulated boardroom debate pointed out that, just as humans have biases, AI programs can also produce biased results. Given new regulation such as the updated Community Reinvestment Act (CRA) rules, which require results-based testing of lending for discrimination, risk managers must remain ever-vigilant and have the testing methodologies in place to ensure that automation programs do not result in automated discrimination.
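As a rough illustration of what a results-based test might look like in practice (a generic sketch, not a methodology discussed at XLoD; the field names, sample data, and 80% threshold are illustrative assumptions), the snippet below computes per-group approval rates from model decisions and flags any group falling below the common “four-fifths” benchmark:

```python
from collections import defaultdict

def adverse_impact_ratio(decisions, group_key="group", approved_key="approved"):
    """Compare each group's approval rate to the highest-approval group
    (the classic "four-fifths rule" heuristic for disparate impact)."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for record in decisions:
        totals[record[group_key]] += 1
        approvals[record[group_key]] += 1 if record[approved_key] else 0

    rates = {g: approvals[g] / totals[g] for g in totals}
    benchmark = max(rates.values())
    # Flag any group whose approval rate falls below 80% of the benchmark rate.
    return {g: {"approval_rate": round(r, 3),
                "ratio_to_benchmark": round(r / benchmark, 3),
                "flagged": r / benchmark < 0.80}
            for g, r in rates.items()}

# Hypothetical model outputs, not real lending data.
sample = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
print(adverse_impact_ratio(sample))
```

A test along these lines focuses on outcomes rather than model internals, which is why it can be run against any automated decisioning process regardless of how the underlying AI works.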
The Importance of Data Quality
While it was generally agreed that artificial intelligence, particularly generative AI, is the future of risk controls, several experienced industry executives noted that the value of AI is wholly dependent on the quality of the data provided to the generative AI system conducting the analysis.
Much as programmers in the 1980s learned the saying “garbage in, garbage out,” a system that lacks quality data points to draw a full picture of risk will be unable to identify risks at either the transaction level or the broader product and systemic level.
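To make the point concrete, here is a minimal, hypothetical sketch of the kind of upstream data-quality gate that keeps “garbage” out of a risk analysis; the field names and rules are illustrative assumptions, not any bank’s actual schema:

```python
REQUIRED_FIELDS = {"transaction_id", "amount", "currency", "counterparty", "booking_date"}

def data_quality_issues(record):
    """Return a list of data-quality issues for a single transaction record.
    Field names and rules are illustrative only."""
    issues = []
    # Treat None and empty strings as missing values.
    missing = REQUIRED_FIELDS - {k for k, v in record.items() if v not in (None, "")}
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount <= 0:
        issues.append("non-positive amount")
    return issues

records = [
    {"transaction_id": "T1", "amount": 250.0, "currency": "USD",
     "counterparty": "ACME", "booking_date": "2024-06-01"},
    {"transaction_id": "T2", "amount": -10.0, "currency": "",
     "counterparty": None, "booking_date": "2024-06-01"},
]
for r in records:
    print(r["transaction_id"], data_quality_issues(r) or "clean")
```

Records that fail checks like these would be remediated before they ever reach the AI model, so that downstream risk findings reflect the business rather than the gaps in its data.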
Frequency of Controls Testing
In one roundtable discussion entitled “How to create great RCSA controls”, the group leader asked how many of the 200+ risk professionals present considered RCSA (risk and control self-assessment) testing an annual “check the box” exercise. Many hands rose immediately.
During the discussion, several risk executives explained that their banks used to set RCSA goals annually, an exercise that had become routine. They have since shifted to testing controls quarterly, providing near real-time status reports to executives, and reserving deeper dives into individual controls for an annual review. This more frequent approach has added significant value to their RCSA programs, transforming them from check-the-box exercises into meaningful practice.
Conquer Compliance
The insights that Carl and Chandni gathered at XLoD highlight the ongoing evolution within the industry. While intelligent automation and generative AI offer significant opportunities for enhancing efficiency and fraud detection, these technologies still pose challenges like potential biases and data quality issues that must continue to be managed carefully.
As the industry progresses, balancing innovation with regulatory compliance and sound risk management will be crucial.
Contact us today to discuss your specific risk and regulatory challenges.
Learn More: Strategies + Solutions to Ensure Regulatory and Compliance Excellence
This blog was co-authored by: Carl Aridas and Chandni Patel