Data & Intelligence Articles / Blogs / Perficient

Perficient Achieves AWS Healthcare Services Competency, Strengthening Our Commitment to Healthcare (November 29, 2024)

At Perficient, we’re proud to announce that we have achieved the AWS Healthcare Services Competency! This recognition highlights our ability to deliver transformative cloud solutions tailored to the unique challenges and opportunities in the healthcare industry.

Healthcare organizations are under increasing pressure to innovate while maintaining compliance, ensuring security, and improving patient outcomes. Achieving the AWS Healthcare Services Competency validates our expertise in helping providers, payers, and life sciences organizations navigate these complexities and thrive in a digital-first world.

A Proven Partner in Healthcare Transformation

Our team of AWS-certified experts has extensive experience working with leading healthcare organizations to modernize systems, accelerate innovation, and deliver measurable outcomes. By aligning with AWS’s best practices and leveraging the full suite of AWS services, we’re helping our clients build a foundation for long-term success.

The Future of Healthcare Starts Here

This milestone is a reflection of our ongoing commitment to innovation and excellence. As we continue to expand our collaboration with AWS, we’re excited to partner with healthcare organizations to create solutions that enhance lives, empower providers, and redefine what’s possible.

Ready to Transform?

Learn more about how Perficient’s AWS expertise can drive your healthcare organization’s success.

Transforming Knowledge Work and Product Development with AI Agents (November 25, 2024)

Now more than ever, we’re witnessing a significant shift from simple AI capabilities to action-driven AI Agents that promise to revolutionize how we approach knowledge work, product development, and business processes. Drawing insights from Perficient’s industry experts, we’re constantly exploring Generative AI, the emerging world of agentic frameworks, and their potential to reshape organizational capabilities. 

 

Beyond Chatbots: The Evolution of AI Agents 

For the past few years, many organizations have deployed AI via generative AI chatbots: tools that take prompts, access a knowledge base, and generate responses. While these were groundbreaking tools for improving business functions, they are essentially one-dimensional: they can provide information but cannot take meaningful action. 

While AI chatbots can respond to user input, AI Agents can take action and perform tasks within defined parameters. 

AI Agents are rapidly expanding across multiple domains including virtual assistance, complex task management, social media content, product development, and more.  

But what makes an AI Agent truly revolutionary? It’s about creating a more nuanced, human-like intelligence. An AI Agent is characterized by: 

  1. Knowledge Base: Similar to chatbots, but augmented with information that supports outputs, standards, or historical content.
  2. Role Definition: A clear, contextual understanding of its purpose and role, often within a team.
  3. Skills and Cooperation: The ability to make decisions and take action (within defined parameters) while providing and taking feedback within a team of other Agents and humans.

 

Navigating the AI Agent Implementation Journey 

Imagine transforming your organization’s potential, not through a massive overhaul, but through iterative, strategic steps. Successful AI Agent implementation is less about a revolutionary leap and more about a thoughtful, incremental progression. 

Like most transformations, planning where to provide value is critical. It’s important to identify pain points that can be delegated to an Agent, rather than distracting your most talented people away from valuable work.  

Perficient’s AI Accelerated Modeling Process (AMP) can help you implement Agentic AI quickly and responsibly. AI AMP is a short, focused four- to six-week initiative with the goal of developing an interactive model that demonstrates how your organization can leverage machine learning, natural language processing, and cognitive computing to jump-start AI adoption. 

Organizations are discovering AI Agents aren’t just theoretical – they’re practical problem-solvers across multiple domains: 

  • An Agentic team that can reverse engineer legacy software, documenting business requirements and how the software currently works. 
  • Supplementing your Product Owners by checking the quality of backlog artifacts across multiple teams, providing feedback and enriching those requirements. 
  • Creating synthetic data that can provide greater test coverage at scale. 
  • A social media team with a writer, reviewer, and editor that have knowledge of the brand and previous posts and can critique the writing based on the research. 
  • An Agentic CX team that can merge research, gather initial insights and draft presentations so the human team can focus on the deep insights and recommendations. 
  • Automatic routing of emails based on the content and context of customer service requests. 

 

AI Agents Aren’t Just About Capability; They’re About Responsibility 

Security isn’t an afterthought; it’s the foundation. Perficient’s PACE Framework is a holistic approach to designing tailored operational AI programs that empower business and technical stakeholders to innovate with confidence while mitigating risks and upholding ethical standards. 

Our comprehensive engagement model evaluates your organization against the PACE framework, tailoring programs and processes to effectively and responsibly integrate AI capabilities across your organization. 

 

The Future of Work 

The transformative potential of AI agents extends far beyond traditional chatbots, representing a strategic pathway for organizations to augment human capabilities intelligently and responsibly.  

To explore how your enterprise can benefit from Agentic AI, reach out to Perficient’s team of experts today. The next wave of AI is about creating intelligent, collaborative agentic systems that augment and transform our capabilities, one specialized Agent at a time. 

AI Regulations for Financial Services: Hong Kong (November 21, 2024)

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the risk and regulatory trends of AI regulation, not only in the US but around the world, wherever their firms are likely to have investment and trading operations.

In the summer of 2024, the Hong Kong Monetary Authority (“HKMA”) issued multiple guidance documents to financial services firms covering their use of artificial intelligence in both customer-facing applications as well as anti-money laundering and detecting and countering terrorist financing (“AML/CTF”). Specifically, the HKMA issued:

  1. Guiding principles issued by the HKMA on August 19, 2024, on the use of generative artificial intelligence (“GenAI”) in customer-facing applications (the “GenAI Guidelines”). The GenAI Guidelines build on a previous HKMA circular, “Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions,” dated November 5, 2019 (the “2019 BDAI Guiding Principles”), and provide specific guidance to financial services firms on the use of GenAI; and
  2. An AML/CTF circular issued by the HKMA on September 9, 2024, that requires financial services firms with operations in Hong Kong to:
    1. undertake a study to consider the feasibility of using artificial intelligence in tackling AML/CTF, and
    2. submit the feasibility study and an implementation plan to the HKMA by the end of March 2025.

Leveraging the 2019 BDAI Guiding Principles as a foundation, the GenAI Guidelines adopt the same core principles of governance and accountability, fairness, transparency and disclosure, and data privacy and protection, but introduce additional requirements to address the specific challenges presented by GenAI.

Core Principles and Requirements under the GenAI Guidelines

Governance and Accountability – The board and senior management of financial services firms should remain accountable for all GenAI-driven decisions and processes and should thoroughly consider the potential impact of GenAI applications on customers through an appropriate committee that sits within the firm’s governance framework. The board and senior management should ensure the following:

  • A clearly defined scope of customer-facing GenAI applications, to avoid GenAI usage in unintended areas;
  • Proper policies, procedures, and related control measures for responsible GenAI use in customer-facing applications; and
  • Proper validation of GenAI models, including a “human-in-the-loop” approach in early stages (i.e., having a human retain control in the decision-making process) to ensure that model-generated outputs are accurate and not misleading.

Fairness – Financial services firms are responsible for ensuring that GenAI models produce objective, consistent, ethical, and fair outcomes for customers. This includes:

  • Ensuring that model-generated outputs do not lead to unfair outcomes for customers. As part of this, firms are expected to consider different approaches that may be deployed in GenAI models, such as:
    1. anonymizing certain data categories;
    2. using comprehensive and fair datasets; and
    3. making adjustments to remove bias during validation and review; and
  • During the early deployment stage, providing customers with an option to opt out of GenAI use and to request human intervention on GenAI-generated decisions as far as practicable. If an opt-out option is unavailable, firms should provide channels for customers to request review of GenAI-generated decisions.

Transparency and Disclosure – Financial services firms should:

  • Provide appropriate transparency to customers regarding GenAI applications;
  • Disclose the use of GenAI to customers; and
  • Communicate the use, purpose, and limitations of GenAI models to enhance customer understanding.

Data Privacy and Protection – Financial services firms should:

  • Implement effective protection measures for customer data; and
  • Where personal data are collected and processed by GenAI applications, comply with the Personal Data (Privacy) Ordinance, including the relevant recommendations and good practices issued by the Office of the Privacy Commissioner for Personal Data, such as the:
    1. “Guidance on the Ethical Development and Use of Artificial Intelligence” issued on August 18, 2021, and
    2. “Artificial Intelligence: Model Personal Data Protection Framework” issued on June 11, 2024.

Consistent with the HKMA’s recognition of the potential use of GenAI in consumer protection in the GenAI Guidelines, the AML/CTF circular (the “HKMA Circular”) also indicates that the HKMA recognizes the considerable benefits that may come from the deployment of AI in improving AML/CTF. In particular, the HKMA Circular notes that AI-powered systems “take into account a broad range of contextual information focusing not only on individual transactions, but also the active risk profile and past transaction patterns of customers…These systems have proved to be more effective and efficient than conventional rules-based transaction monitoring systems commonly used by covered firms.”

Given this, the HKMA has indicated that financial services firms with operations in Hong Kong should:

  • give due consideration to adopting AI in their AML/CTF monitoring systems to enable them to stay effective and efficient; and
  • undertake a feasibility study in relation to the adoption of AI in their AML/CTF monitoring systems and based on the outcome of that review, should formulate an implementation plan.

The feasibility study and implementation plan should be signed off at the board level and submitted to the HKMA by March 31, 2025.

Adaptive by Design: The Promise of Generative Interfaces (November 20, 2024)

Imagine a world where digital interfaces anticipate your needs, understand your preferences, and adapt in real-time to enhance your experience. This is not a futuristic daydream, but the promise of generative interfaces. 

Generative interfaces represent a new paradigm in user experience design, moving beyond static layouts to create highly personalized and adaptive interactions. These interfaces are powered by generative AI technologies that respond to each user’s unique needs, behaviors, and context. The result is a fluid, intuitive experience—a digital environment that transforms, adapts, and grows with its users. 

 

The Evolution of User Interaction 

Traditional digital interfaces have long relied on predefined structures and user journeys. While these methods have served us well, they fall short of delivering truly personalized experiences. 

Generative interfaces, on the other hand, redefine personalization and interactivity at the level of individual interactions. They have the capability to bring data and components directly to users from multiple systems, seamlessly integrating them into a cohesive user experience.  

Users can perform tasks without switching applications as generative systems dynamically render necessary components within the interface, such as images, interactive components, and data visualizations. 

This adaptability means that generative interfaces continually evolve based on users’ inputs, preferences, and behaviors, creating a more connected and fluid experience. Instead of users adapting to software, the software adapts to them, enhancing productivity, reducing friction, and making digital interactions feel natural. 

 

Adaptive Design Principles 

At the heart of generative interfaces lies the principle of adaptability. This adaptability is more than just personalization—it’s about creating an interface that is in constant dialogue with its user. Unlike conventional systems that rely on rules and configurations set during development, generative interfaces leverage machine learning and user data to generate real-time responses. This not only makes the experience dynamic but also inherently human-centered. 

For instance, a digital assistant that supports a knowledge worker doesn’t just answer questions—it understands the context of the work, anticipates upcoming needs, and interacts in a way that aligns with the user’s goals. Generative interfaces are proactive and responsive, driven by the understanding that user needs can change from moment to moment. 

 

Envisioning the Future 

Generative interfaces hold the promise of reshaping not just individual applications, but entire categories of digital interaction—from productivity tools to entertainment platforms. Imagine entertainment systems that automatically adjust content suggestions based on your mood, or collaboration platforms that adapt their layouts and tools depending on whether you are brainstorming or executing a task. 

Realizing this vision responsibly requires that data privacy and security considerations be built into every aspect of the system, from data collection and storage to processing and output generation. Without control of the experience, you risk low-quality outputs that do more harm than good. 

As organizations deploy generative interfaces, robust governance frameworks become essential for managing risks and ensuring responsible AI use. 

 

Embracing Generative Interfaces

The shift towards generative interfaces is a step towards making technology more human-centric. As we embrace these adaptive designs, we create an opportunity to redefine our digital experiences, making them more intuitive, enjoyable, and impactful. At Perficient, we are pushing the boundaries of how technology can adapt to users rather than forcing users to adapt to technology. 

The impact of these interfaces goes beyond just convenience; they are capable of crafting meaningful digital experiences that feel personal and fulfilling. As generative AI continues to advance, I envision a future where technology fades into the background, seamlessly blending into our lives and intuitively enhancing everything from work to leisure. 

Multiclass Text Classification Using LLM (MTC-LLM): A Comprehensive Guide (November 20, 2024)

by Luis Pacheco and Uday Yallapragada

Introduction to Multiclass Text Classification with LLMs

Multiclass text classification (MTC) is a natural language processing (NLP) task where text is categorized into multiple predefined categories or classes. Traditional approaches rely on training machine learning models, requiring labeled data and iterative fine-tuning. However, with the advent of large language models (LLMs), this task can now be approached differently. Instead of building and training a custom model, we can utilize pre-trained LLMs to classify text using carefully designed prompts, allowing rapid deployment with minimal data requirements and enabling flexibility to adjust classes without retraining. 

Approaches for MTC-LLM 

In MTC-LLM, we generally have two main approaches for utilizing LLMs to achieve classification. 

Single Classifier with a Multi-Class Prompt 

Using a single LLM prompt for multi-class text classification involves providing a single, comprehensive prompt that instructs the model on all possible classes, expecting it to classify the text into one of these categories. This approach is simple and straightforward, as it requires only one prompt, making implementation fast and computationally efficient. It also reduces costs, as each classification requires just one LLM call, saving on both usage costs and processing time. 

However, this approach has notable limitations. When classes are similar, the model may struggle to make precise distinctions, reducing accuracy in nuanced tasks. Additionally, handling all categories within a single prompt can lead to lengthy and complex instructions, which may introduce ambiguity and diminish the model’s reliability. Another critical drawback is the approach’s inability to detect hierarchical relationships within a taxonomy; without recognizing these layers, the model may miss important contextual distinctions between classes that depend on hierarchical categorization. 
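To make the contrast with the hierarchical approach concrete, here is a minimal sketch of the single-prompt pattern. It assumes the same Amazon Bedrock runtime client and Anthropic message format used later in this guide; the intent labels and prompt wording are illustrative, not a production design.

```python
import json
import boto3

# Illustrative labels only; a fuller taxonomy is introduced later in this post.
LABELS = ["GENERAL_QUERY", "BOOKING_ISSUE", "CUSTOMER_COMPLAINT", "REFUND_REQUEST"]

SINGLE_PROMPT = (
    "Classify the customer message into exactly one of these intents: "
    + ", ".join(LABELS)
    + '. Respond only with JSON in the form {"intent": "<label>"}.\n\nMessage:\n'
)

def classify_single_prompt(message: str) -> str:
    """Single LLM call that chooses among all classes at once."""
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 100,
        "temperature": 0.0,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": SINGLE_PROMPT + message}]}
        ],
    })
    response = client.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0", body=body
    )
    answer = json.loads(response["body"].read())["content"][0]["text"]
    return json.loads(answer).get("intent", "UNKNOWN")
```

A single call keeps cost and latency low, but, as noted above, all of the nuance across every class has to fit into that one prompt. 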

Hierarchical Sequence of Binary Classifiers 

The hierarchical sequence of binary classifiers approach structures classification as a decision tree, where each node represents a binary decision point. Starting from the top node, the model proceeds through a series of binary classifications, with each LLM call determining whether the text belongs to a specific class. This process continues down the hierarchy until a final classification is achieved. 

This method provides high accuracy since each binary decision allows the model to make precise, focused choices, which is particularly valuable for distinguishing among nuanced classes. It is also highly adaptable to complex hierarchies, accommodating cases where broad classes may require further subclass distinctions for an accurate classification. 

However, this approach comes with increased costs and latency, as multiple LLM calls are needed to reach a final classification, making it more expensive and time-consuming. Additionally, managing this approach requires structuring and maintaining numerous prompts and class definitions, adding to its complexity. For use cases where accuracy is prioritized over cost—such as in high-stakes applications like customer service—this hierarchical method is generally the recommended approach. 

Example Use Case: Intent Detection for Airline Customer Service 

Let’s consider an airline company using an automated system to respond to customer emails. The goal is to detect the intent behind each email accurately, enabling the system to route the message to the appropriate department or generate a relevant response. This system leverages a hierarchical sequence of binary classifiers, providing a structured approach to intent detection. At each level of the hierarchy, binary classifiers assess whether a specific intent is present, progressively narrowing down the scope of inquiry to arrive at a precise classification. 

 High-Level Intent Classification 

At the first stage of the hierarchy, the system categorizes emails into high-level intents to streamline processing and ensure accurate responses. These high-level intents include: 

General Queries – This intent captures broad, information-seeking emails unrelated to specific complaints or actions. These emails are generally routed to informational workflows or knowledge bases, allowing for automated responses with the required details. 

Booking Issues – Emails under this intent relate to the booking process or flight details. These emails are generally routed to booking support workflows, where sub-classification helps further refine the action required, such as new bookings, modifications, or cancellations. 

Customer Complaints – This category identifies emails expressing dissatisfaction or grievances. These emails are prioritized for customer service escalation, ensuring timely resolution and acknowledgment. 

Refund Requests – This category is specific to emails where customers request refunds for canceled flights, overcharges, or other issues. These emails are routed to the refund processing team, where workflows validate the claim and initiate the refund process. 

Special Assistance Requests – Emails in this category pertain to special accommodations or requests from passengers. These are routed to workflows that handle special services and ensure the requests are appropriately addressed. 

Lost and Found Inquiries – This intent captures emails related to lost items or baggage issues. These emails are routed to the airline’s lost and found or baggage resolution teams. 

Hierarchical Sub-Classification 

Once the high-level intent is identified, a second layer of binary classifiers operates within each category to refine the classification further. For example: 

Booking Issues Sub-Classifiers 

  •    New Bookings 
  •   Modifications to Existing Bookings   
  •    Cancellations   

Customer Complaints Sub-Classifiers  

  •    Flight Delays   
  •    Billing Issues   
  •    Service Quality   

Refund Requests Sub-Classifiers 

  •    Flight Cancellations   
  •    Baggage Fees   
  •    Duplicate Charges   

Special Assistance Requests Sub-Classifiers 

  •    Mobility Assistance   
  •    Dietary Preferences   
  •    Family Travel Needs   

Lost and Found Sub-Classifiers  

  •    Lost Items in Cabin   
  •    Missing Baggage   
  •    Items Lost at the Airport   

Benefits of this Approach 

 Scalability – The hierarchical design enables seamless addition of new intents or sub-intents as customer needs evolve, without disrupting the existing classification framework. 

Efficiency – By filtering out irrelevant categories at each stage, the system minimizes computational overhead and ensures that only relevant workflows are triggered for each email. 

Improved Accuracy – Binary classification simplifies the decision-making process, leading to higher precision and recall compared to a flat multiclass classifier. 

Enhanced Customer Experience – Automated responses tailored to specific intents ensure quicker resolutions and more accurate handling of customer inquiries, enhancing overall satisfaction. 

Cost-Effectiveness – Automating intent detection reduces reliance on human intervention for routine tasks, freeing up resources for more complex customer service needs. 

By categorizing emails into high-level intents like general queries, booking issues, complaints, refunds, special assistance requests, and lost and found inquiries, this automated system ensures efficient routing and resolution. Hierarchical sub-classification adds an extra layer of precision, enabling the airline to deliver fast, accurate, and customer-centric responses while optimizing operational efficiency. 

The table below represents the complete taxonomy of the intent detection system, organized into primary and secondary intents. This taxonomy enables the system to understand and respond more accurately to customer intents, from broad categories down to specific, actionable concerns. Each level helps direct the inquiry to the appropriate team or resource for faster, more effective resolution. (A code representation of the same taxonomy follows the table.) 

 

| Level | Category | Sub-Category |
|---|---|---|
| High-Level Intent | General Queries | |
| Sub-Intent | General Queries | Baggage Policy |
| Sub-Intent | General Queries | Frequent Flyer Program |
| Sub-Intent | General Queries | Travel with Pets |
| High-Level Intent | Booking Issues | |
| Sub-Intent | Booking Issues | New Bookings |
| Sub-Intent | Booking Issues | Modifications to Existing Bookings |
| Sub-Intent | Booking Issues | Cancellations |
| High-Level Intent | Customer Complaints | |
| Sub-Intent | Customer Complaints | Flight Delays |
| Sub-Intent | Customer Complaints | Billing Issues |
| Sub-Intent | Customer Complaints | Service Quality |
| High-Level Intent | Refund Requests | |
| Sub-Intent | Refund Requests | Flight Cancellations |
| Sub-Intent | Refund Requests | Baggage Fees |
| Sub-Intent | Refund Requests | Duplicate Charges |
| High-Level Intent | Special Assistance Requests | |
| Sub-Intent | Special Assistance Requests | Mobility Assistance |
| Sub-Intent | Special Assistance Requests | Dietary Preferences |
| Sub-Intent | Special Assistance Requests | Family Travel Needs |
| High-Level Intent | Lost and Found Inquiries | |
| Sub-Intent | Lost and Found Inquiries | Lost Items in Cabin |
| Sub-Intent | Lost and Found Inquiries | Missing Baggage |
| Sub-Intent | Lost and Found Inquiries | Items Lost at the Airport |
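One straightforward way to carry this taxonomy into code is as a nested mapping that the hierarchical classifiers can walk. The sketch below mirrors the table; the identifiers are illustrative (the sample script later in this post uses its own, slightly different hierarchy).

```python
# Hierarchical intent taxonomy mirroring the table above.
# Keys are high-level intents; values are their sub-intents.
INTENT_TAXONOMY = {
    "GENERAL_QUERIES": ["BAGGAGE_POLICY", "FREQUENT_FLYER_PROGRAM", "TRAVEL_WITH_PETS"],
    "BOOKING_ISSUES": ["NEW_BOOKINGS", "MODIFICATIONS_TO_EXISTING_BOOKINGS", "CANCELLATIONS"],
    "CUSTOMER_COMPLAINTS": ["FLIGHT_DELAYS", "BILLING_ISSUES", "SERVICE_QUALITY"],
    "REFUND_REQUESTS": ["FLIGHT_CANCELLATIONS", "BAGGAGE_FEES", "DUPLICATE_CHARGES"],
    "SPECIAL_ASSISTANCE_REQUESTS": ["MOBILITY_ASSISTANCE", "DIETARY_PREFERENCES", "FAMILY_TRAVEL_NEEDS"],
    "LOST_AND_FOUND_INQUIRIES": ["LOST_ITEMS_IN_CABIN", "MISSING_BAGGAGE", "ITEMS_LOST_AT_THE_AIRPORT"],
}

# The top-level decision list is simply the keys; once a high-level intent
# matches, the message is refined against that intent's sub-intent list.
ROOT_INTENTS = list(INTENT_TAXONOMY.keys())
```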

 

The diagram below depicts this architecture. 

[Figure: MTC-LLM hierarchical intent-detection architecture]

Prompt Structure for a Binary Classifier 

Here’s a sample structure for a binary classifier prompt, where the LLM determines if a customer message is related to a Booking Inquiry. 

You are an AI language model tasked with classifying whether a customer's message to the Acme airline company is a "BOOKING INQUIRY."  

Definition: 

A "BOOKING INQUIRY" is a message that directly involves: 

Booking a flight: Questions or assistance requests about reserving a new flight. 
Modifying a reservation: Any request to change an existing booking, such as altering dates, times, destinations, or passenger details. 
Managing a reservation: Tasks like seat selection, cancellations, refunds, or upgrading class, which are tied to the customer's reservation. 
Resolving issues related to booking: Problems like errors in the booking process, confirmation issues, or requests for help with travel-related arrangements. 

Messages must demonstrate a clear and specific relationship to these areas to qualify as "BOOKING INQUIRY." General questions about unrelated travel aspects (e.g., baggage fees, flight status, or policies) are classified as "NOT A BOOKING INQUIRY." 

Instructions (Chain-of-Thought Process): 

For each customer message, follow this reasoning process: 

Step 1: Understand the Context - Read the message carefully. If the message is in a language other than English, translate it to English first for proper analysis. 
Step 2: Identify Booking-Related Keywords or Phrases - Look for keywords or phrases related to booking (e.g., "book a flight," "cancel reservation," "change my seat"). Determine if the message is directly addressing the reservation process or related issues. 
Step 3: Match to Definition - Compare the content of the message to the definition of "BOOKING INQUIRY." Determine if it fits one of the following categories: 
Booking a flight 
Modifying an existing reservation 
Managing or resolving booking-related issues 
Step 4: Evaluate Confidence Level - Decide if the message aligns strongly with the definition and the criteria for "BOOKING INQUIRY." If there is ambiguity or insufficient information classify it as "NOT A BOOKING INQUIRY." 
Step 5: Provide a Clear Explanation - Based on your analysis, explain your decision in step-by-step reasoning, ensuring the classification is well-justified. 

Examples: 

Positive Examples: 

Input Message - "I’d like to change my seat for my flight next week." 
Decision: true 
Reasoning: The message explicitly mentions "change my seat," which is directly related to modifying a reservation. It aligns with the definition of "BOOKING INQUIRY" as it involves managing a booking. 

Input Message - "Can I cancel my reservation and get a refund?" 
Decision: true 
Reasoning: The message includes "cancel my reservation" and "get a refund," which are part of managing an existing booking. This request is a clear match with the definition of "BOOKING INQUIRY." 

Negative Examples: 

Input Message: "How much does it cost to add extra baggage?" 
Decision: false 
Reasoning: The message asks about baggage costs, which relates to general travel policies rather than reservations or bookings. There is no indication of booking, modifying, or managing a reservation. 

Input Message: "What’s the delay on flight AA123?" 
Decision: false 
Reasoning: The message focuses on the status of a flight, not the reservation or booking process. It does not meet the definition of "BOOKING INQUIRY." 

Customer Message: {input_text}

Output: Provide your classification output in the following JSON format:
{
  "decision": true/false,
  "reasoning": "Step-by-step reasoning for the decision."
}

 

 

Example Code for Binary Classifier Using boto3 and Bedrock 

In this section, we are providing a Python script that implements hierarchical intent detection on user messages by interfacing with a language model (LLM) via AWS Bedrock runtime. The script is designed for flexibility and can be customized to work with other LLM frameworks.

This module is part of an automated email processing system designed to analyze customer messages, detect their intent, and generate structured responses based on the analysis. The system employs a large language model API to perform Natural Language Processing (NLP), classifying emails into primary intents such as “General Queries,” “Booking Issues,” or “Customer Complaints.”

```python 

import json 
import boto3 
from pathlib import Path 
from typing import List 

def get_prompt(intent: str) -> str: 

    """ 
    Retrieve the prompt template for a given intent from the 'prompts' directory. 
    Assumes that prompt files are stored in a './prompts/' directory relative to this file, 
    and that the filenames are in the format '{INTENT}-prompt.txt', e.g., 'GENERAL_QUERIES-prompt.txt'. 

    Parameters: 
        intent (str): The intent for which to retrieve the prompt template. 
 
    Returns: 
        str: The content of the prompt template file corresponding to the specified intent. 
    """ 

    # Determine the path to the 'prompts' directory relative to this file. 
    project_root = Path(__file__).parent 
    full_path = project_root / "prompts" 

 
    # Open and read the prompt file for the specified intent. 
    with open(full_path / f"{intent}-prompt.txt") as file: 
        prompt = file.read() 

    return prompt 

 

def intent_detection(message: str, decision_list: List[str]) -> str: 

    """ 
    Recursively detects the intent of a message by querying an LLM. 
    This function iterates over a list of intents, formats a prompt for each, 
    and queries the LLM to determine if the message matches the intent. 
    If a match is found, it may recursively check for more specific sub-intents.  

    Parameters: 
        message (str): The user's message for which to detect the intent. 
        decision_list (List[str]): A list of intent names to evaluate. 

    Returns: 
        str: The detected intent name, or 'UNKNOWN' if no intent is matched. 
    """ 

    # Create a client for AWS Bedrock runtime to interact with the LLM. 
    client = boto3.client("bedrock-runtime", region_name="us-east-1") 

    for intent in decision_list: 

        # Retrieve the prompt template and insert the user's message.
        # The template is expected to contain an "{input_text}" placeholder;
        # str.replace() is used instead of str.format() so that the literal
        # braces in the template's JSON output example are left untouched.
        prompt_template = get_prompt(intent)
        prompt = prompt_template.replace("{input_text}", message)


        # Construct the request body for the LLM API call. 
        body = json.dumps( 
            { 
                "anthropic_version": "bedrock-2023-05-31", 
                "max_tokens": 4096, 
                "temperature": 0.0, 
                "messages": [ 
                    { 
                        "role": "user", 
                        "content": [ 
                            {"type": "text", "text": prompt} 
                        ] 
                    } 
                ] 
            } 
        ) 

        # Invoke the LLM model with the constructed body. 
        raw_response = client.invoke_model( 
            modelId="anthropic.claude-3-5-sonet-20240620-v1:0", 
            body=body 
        ) 

        # Read and parse the response from the LLM. 
        response = raw_response.get("body").read() 
        response_body = json.loads(response) 
        llm_text_response = response_body.get("content")[0].get("text") 

        # Parse the LLM's text response to JSON. 
        llm_response_json = json.loads(llm_text_response) 

        # Check if the LLM decided that the message matches the current intent. 
        if llm_response_json.get("decision", False): 
            transitional_intent = intent 
            break  # Exit the loop as we've found a matching intent. 
        else: 
            # If not matched, set the transitional intent to 'UNKNOWN'. 
            transitional_intent = "UNKNOWN" 

 
    # Define the root intents that may have more specific sub-intents. 
    root_intents = ["GENERAL_QUERIES", "BOOKING_ISSUES", "CUSTOMER_COMPLAINTS"] 

    # If a matching root intent is found, recursively check for more specific intents. 
    if transitional_intent in root_intents: 

        # Mapping of root intents to their related sub-intents. 
        intent_definition = { 
            "GENERAL_QUERIES_related_intents": [ 
                "DESTINATION_INFORMATION", 
                "LOYALTY_PROGRAM_DETAILS", 
                "FLIGHT_SCHEDULES", 
                "AIRLINE_POLICIES", 
                "CHECK_IN_PROCEDURES", 
                "IN_FLIGHT_SERVICES", 
                "CANCELLATION_POLICY" 
            ], 

            "BOOKING_ISSUES_related_intents": [ 
                "FLIGHT_CHANGE", 
                "SEAT_SELECTION", 
                "BAGGAGE" 
            ], 

            "CUSTOMER_COMPLAINTS_related_intents": [ 
                "DELAY", 
                "SERVICE_DISSATISFACTION", 
                "SAFETY_CONCERNS" 
            ] 
        } 

        # Recursively call intent_detection with the related sub-intents. 
        return intent_detection( 
            message, 
            intent_definition.get(f"{transitional_intent}_related_intents") 
        ) 

    else: 
        # Return the detected intent or 'UNKNOWN' if none matched. 
        return transitional_intent 
 

def main(message: str) -> str: 

    """ 
    Main function to initiate intent detection on a user's message. 
    Parameters: 
        message (str): The user's message for which to detect the intent.  
    Returns: 
        str: The detected intent name, or 'UNKNOWN' if no intent is matched. 
    """ 

    # Start intent detection with the root intents. 

    return intent_detection( 
        message=message, 
        decision_list=[ 
            "GENERAL_QUERIES", 
            "BOOKING_ISSUES", 
            "CUSTOMER_COMPLAINTS" 
        ] 
    ) 

if __name__ == "__main__": 
    message = """\ 
Hello, 
I'm planning to travel next month and wanted to ask about your airline's policies. Could you please provide information on: 
Your refund and cancellation policies. 
Rules regarding carrying liquids or other restricted items. 
Any COVID-19 safety measures still in place. 
Looking forward to your response. 
    """ 
    print(main(message=message))
```

 

Evaluation Guidelines 

To comprehensively evaluate the performance of a hierarchical sequence of binary classifiers for multiclass text classification using LLMs, a well-constructed ground truth dataset is critical. This dataset should be meticulously designed to serve multiple purposes, ensuring both the overall system and individual classifiers are assessed accurately. 

Dataset Design Considerations 

  • Balanced Dataset for Overall Evaluation: The ground truth dataset must encompass a balanced representation of all intent categories to evaluate the system holistically. This enables the calculation of critical overall metrics such as accuracy, macro-precision, macro-recall, and micro-precision. A balanced dataset ensures that no specific category disproportionately influences these metrics, providing a fair measure of the system’s performance across all intents.
  • Per-Classifier Evaluation: Each binary classifier in the hierarchy should also be evaluated individually. To achieve this, the dataset must contain balanced positive and negative samples for each classifier. This balance is essential to calculate metrics such as accuracy, precision, recall, and F1-score for each individual classifier, enabling targeted performance analysis and iterative improvements at every level of the hierarchy.
  • Negative Sample Creation: Designing negative samples is a critical aspect of the dataset preparation process. Negative samples should be created using common sense principles to reflect real-world scenarios accurately: 
    • Diversity: Negative samples should be diverse to simulate various input conditions, preventing classifiers from overfitting to narrow definitions of “positive” and “negative” examples. 
    • Relevance for Lower-Level Classifiers: For classifiers deeper in the hierarchy, negative samples need not include examples from unrelated categories. For instance, in a “Flight Change” classifier, negative samples can exclude intents related to “Safety Concerns” or “In-Flight Entertainment.” This specificity helps avoid unnecessary complexity and confusion, focusing the classifier on its immediate decision boundary. 

Metrics for Evaluation 

  • Overall System Metrics: 
    • Accuracy: The ratio of correctly classified samples to total samples, indicating the system’s general performance. 
    • Macro and Micro Precision & Recall: Macro metrics weigh each class equally, providing insights into system performance for underrepresented categories. Micro metrics, on the other hand, weigh classes proportionally to their sample sizes, offering a perspective on system performance for frequently occurring categories. 
  • Classifier-Level Metrics: 
    • Each binary classifier must be evaluated independently using accuracy, precision, recall, and F1-score. These metrics help pinpoint weaknesses in individual classifiers, which can then be addressed through retraining, hyperparameter tuning, or data augmentation. 
  • Cost per Classification: 
    • Tracking the computational or financial cost per classification is vital, especially in scenarios where resource efficiency is a priority. This metric helps balance the trade-off between model performance and operational budget constraints. 

Additional Considerations 

  • Dataset Size:  The dataset should be large enough to capture variations in intent expressions while ensuring each classifier receives sufficient positive and negative samples for robust training and evaluation. 
  • Data Augmentation: Techniques such as paraphrasing, synonym replacement, or noise injection can be employed to expand the dataset and improve classifier generalization. 
  • Cross-Validation:  Employing techniques like k-fold cross-validation can ensure that the evaluation metrics are not biased by a specific train-test split, providing a more reliable assessment of the system’s performance. 
  • Real-World Testing:  In addition to ground truth datasets, testing the system on real-world, unstructured data can reveal gaps in performance and help fine-tune classifiers to handle practical scenarios effectively. 

By adhering to these principles, the evaluation process will yield a thorough understanding of both the end-to-end system’s performance and the individual strengths and weaknesses of each classifier, guiding data-driven refinements and ensuring robust, scalable deployment. 
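As an illustration of the metrics described above, the sketch below uses scikit-learn to compute the overall scores and the per-classifier scores from a labeled evaluation run. The data format (parallel lists of true and predicted labels) is an assumption made for the example, not a requirement of the approach.

```python
from sklearn.metrics import (
    accuracy_score,
    classification_report,
    precision_recall_fscore_support,
)

# Assumed format: parallel lists of ground-truth and predicted intent labels
# produced by running the hierarchical classifier over the evaluation set.
y_true = ["BOOKING_ISSUES", "REFUND_REQUESTS", "GENERAL_QUERIES", "BOOKING_ISSUES"]
y_pred = ["BOOKING_ISSUES", "GENERAL_QUERIES", "GENERAL_QUERIES", "BOOKING_ISSUES"]

# Overall system metrics: accuracy plus macro- and micro-averaged precision/recall.
print("accuracy:", accuracy_score(y_true, y_pred))
for avg in ("macro", "micro"):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=avg, zero_division=0
    )
    print(f"{avg}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")

# Per-classifier metrics: evaluate one binary node in isolation (here, the
# BOOKING_ISSUES classifier) by collapsing labels to a yes/no decision.
binary_true = [label == "BOOKING_ISSUES" for label in y_true]
binary_pred = [label == "BOOKING_ISSUES" for label in y_pred]
print(classification_report(binary_true, binary_pred, zero_division=0))
```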

Additional Best Practices for Multiclass Text Classification Using LLMs 

Prompt Caching 

Prompt caching is a powerful technique for improving efficiency and reducing latency in applications with repeated queries or predictable user interactions. By caching prompts and their corresponding LLM-generated outputs, systems can avoid redundant API calls, thereby improving response times and lowering operational costs. 

Implementation Across Popular LLM Suites 
  • Anthropic: Anthropic’s models support prompt caching, which you enable by marking specific parts of your prompt (such as tool definitions, system instructions, or lengthy context) with the cache_control parameter in your API requests. For example, you might include the entire text of a book in your prompt and cache it, allowing you to ask multiple questions about the text without reprocessing it each time. To enable this feature, include the header anthropic-beta: prompt-caching-2024-07-31 in your API calls, as prompt caching is currently in beta. By structuring your prompts with static content at the beginning and dynamic, user-specific content at the end, and by strategically marking cacheable sections, you can optimize performance, reduce latency, and lower operational costs when working with Anthropic’s language models. (A code sketch follows this list.) 
  • ChatGPT (OpenAI): To implement OpenAI’s Prompt Caching and optimize your application’s performance, structure your prompts so that static or repetitive content—like system prompts and common instructions—is placed at the beginning, while dynamic, user-specific information is appended at the end. This setup leverages exact prefix matching, increasing the likelihood of cache hits for prompts longer than 1,024 tokens. When the prefix of a prompt matches a cached entry, the system reuses the cached processing results, reducing latency by up to 80% and cutting costs by 50% for lengthy prompts. The caching mechanism operates automatically, requiring no additional code changes, and is specific to your organization to maintain data privacy. Cached prompts remain active for 5 to 10 minutes of inactivity and can persist up to an hour during off-peak periods. By following these implementation strategies, you can enhance API efficiency and reduce operational costs when interacting with OpenAI’s language models. 
  • Gemini (Google): Context caching in the Gemini API enables you to reduce processing time and costs by caching large input tokens that are reused across multiple requests. To implement this, you first upload your content (such as large documents or files) using the Files API. Then, you create a cache with a specified Time to Live (TTL) using the CachedContent.create() method, which stores the tokenized content for a duration you choose. When generating responses, you construct a GenerativeModel that references this cached content, allowing the model to access the cached tokens without reprocessing them. This is particularly effective for applications like chatbots with extensive system instructions or repetitive analysis tasks, as it minimizes redundant token processing and optimizes overall performance. 
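As a concrete illustration of the Anthropic approach above, here is a minimal sketch assuming the Anthropic Python SDK and the beta header current at the time of writing; parameter names, header values, and minimum cacheable sizes may change as the feature matures, so treat this as a shape rather than a drop-in implementation. The file name is hypothetical.

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# Hypothetical long, static reference document reused across many requests.
long_reference_text = open("policy_manual.txt").read()

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    system=[
        # Static instructions first, dynamic content later (exact prefix matching).
        {"type": "text", "text": "You answer questions about the attached policy manual."},
        {
            "type": "text",
            "text": long_reference_text,
            "cache_control": {"type": "ephemeral"},  # mark this block as cacheable
        },
    ],
    messages=[{"role": "user", "content": "What is the refund window for cancelled flights?"}],
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},  # beta opt-in header
)
print(response.content[0].text)
```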
Best Practices for Implementing Caching with Large Language Models (LLMs):
  • Structure Prompts Effectively 
    • Static Content First: Place static or repetitive content—such as system prompts, instructions, context, or examples—at the beginning of your prompt. 
    • Dynamic Content Last: Append variable or user-specific information at the end. This increases the likelihood of cache hits due to exact prefix matching. 
  • Leverage Exact Prefix Matching 
    • Ensure that the cached sections of your prompts are identical across requests. Even minor differences can prevent cache hits. 
    • Use consistent formatting, wording, and structure for the static parts of your prompts. 
  • Utilize Caching for Long Prompts
    • Caching is most beneficial for prompts that exceed certain token thresholds (e.g., 1,024 tokens). 
    • For lengthy prompts with repetitive elements, caching can significantly reduce latency and cost. 
  • Mark Cacheable Sections Appropriately 
    • Use available API features (such as cache_control parameters or specific headers) to designate cacheable sections in your prompts. 
    • Clearly define cache boundaries to optimize caching efficiency. 
  • Set Appropriate Time to Live (TTL)
    • Adjust the TTL based on how frequently the cached content is accessed. 
    • Longer TTLs are advantageous for content that is reused often, while shorter TTLs prevent stale data in dynamic environments. 
  • Be Mindful of Model and API Constraints
    • Ensure that you’re using models that support caching features. 
    • Be aware of minimum token counts and other limitations specific to the LLM you’re using. 
  • Understand Pricing and Cost Implications: 
    • Familiarize yourself with the pricing model for caching, including any costs for cache writes, reads, and storage duration. 
    • Balance the cost of caching against the benefits of reduced processing time and lower per-request costs. 
  • Handle Cache Invalidation and Updates: 
    • Implement mechanisms to update or invalidate caches when the underlying content changes. 
    • Be prepared to handle cache misses gracefully by processing the full prompt when necessary. 

Temperature Settings

The temperature parameter is critical in controlling the randomness and creativity of an LLM’s output. 

Low Temperature (e.g., 0.2) 

A low temperature setting makes the model’s outputs more deterministic by prioritizing higher-probability tokens. This is ideal for: 

  • Classification-oriented tasks requiring consistent responses. 
  • Scenarios where factual accuracy is critical. 
  • Narrow decision boundaries, such as binary classifiers in the hierarchy. 
High Temperature (e.g., 0.8–1.0) 

Higher temperature settings introduce more randomness, making the model explore diverse possibilities. This is useful for: 

  • Generating creative text, brainstorming ideas, or handling ambiguous inputs. 
  • Scenarios where the intent is not well-defined and may benefit from exploratory responses. 
Best Practices for Multiclass Hierarchies 
  • Use low temperatures for top-level binary classifiers where intent boundaries are clear. 
  • Experiment with slightly higher temperatures for ambiguous or nuanced intent categories to capture edge cases during the evaluation phase.
Adding Reasoning to the Prompt 

Encouraging LLMs to reason step-by-step improves their ability to handle ambiguous or complex cases. This can be achieved by explicitly prompting the model to break down the classification process. For instance: 

  • Use phrases like “First, analyze the input for relevant keywords. Then, decide the most appropriate intent based on the following rules.” 
  • This approach helps mitigate errors in cases where multiple intents may appear similar by providing a logical framework for decision-making. 

Prompt Optimization with Meta-Prompting 

Meta-prompts are prompts about prompts. They guide the LLM to follow specific rules or adhere to structured formats for better interpretability and accuracy. Examples include: 

  • Defining constraints, such as “Respond only with ‘Yes’ or ‘No.'” 
  • Setting explicit rules, such as “If the input mentions scheduling changes, classify as ‘Flight Change.'” 
  • Clarifying ambiguous instructions, such as “If unsure, classify as ‘Miscellaneous’ and provide an explanation.” 

Fine-Tuning Other Key LLM Parameters 

  • Max Tokens – Control the length of the output to avoid excessive verbosity or truncation. For classification tasks, limit the tokens to the minimal response necessary (e.g., “Yes,” “No,” or a concise class label). 
  • Top-p Sampling (Nucleus Sampling) – Instead of selecting tokens based on temperature alone, top-p sampling chooses from a subset of tokens whose cumulative probability adds up to a specified threshold. For deterministic tasks, set top-p close to 0.9 to balance precision and diversity. 
  •    Stop Sequences – Define stop sequences to terminate outputs gracefully, ensuring outputs do not contain unnecessary or irrelevant continuations. (A combined sketch of these parameters follows below.) 
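Tying these settings together, the sketch below shows where max tokens, temperature, top-p, and stop sequences sit in the Anthropic-on-Bedrock request body used elsewhere in this guide; the specific values are illustrative starting points rather than tuned recommendations.

```python
import json

# Request body for a deterministic, terse binary-classification call.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 200,           # enough for a short JSON verdict, no extra verbosity
    "temperature": 0.0,          # deterministic output for clear-cut classifiers
    "top_p": 0.9,                # nucleus sampling cap; has little effect at temperature 0
    "stop_sequences": ["###"],   # illustrative marker: stop if the model emits it
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "<classification prompt goes here>"}]}
    ],
})
```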

Iterative Prompt Refinement 

Iterative prompt refinement is a crucial process for continuously improving the performance of LLMs in hierarchical multiclass classification tasks. By systematically analyzing errors, refining prompts, and validating changes, you can ensure the system evolves to handle complex and ambiguous scenarios more effectively. A structured “prompt refinement pipeline” can greatly enhance this process by combining meta-prompts and ground truth datasets for evaluation. 

The Prompt Refinement Pipeline 

A prompt refinement pipeline is an automated or semi-automated framework that systematically refines, tests, and evaluates prompts. It consists of the following components: 

Meta-Prompt for Refinement 

Use an LLM itself to refine existing prompts by generating more concise, effective, or logically robust alternatives. A meta-prompt asks the model to analyze and improve a given prompt. For example: 

  • Input Meta-Prompt: 
    • “The following prompt is used for a binary classifier in a hierarchical text classification task. Suggest improvements to make it more specific, avoid ambiguity, and handle edge cases better. Also, propose an explanation for why your suggestions improve the prompt. Current prompt: [insert prompt].” 
  • Output: The model may suggest rewording, adding explicit constraints, or including step-by-step reasoning logic. These suggestions can then be iteratively tested; a minimal sketch of this refinement step follows below. 
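The sketch below assumes an `invoke_llm` callable that wraps the Bedrock call pattern from the earlier script; the meta-prompt text follows the example above and is illustrative.

```python
# Meta-prompt that asks the model to critique and rewrite an existing classifier prompt.
META_PROMPT = """\
The following prompt is used for a binary classifier in a hierarchical text
classification task. Suggest improvements to make it more specific, avoid
ambiguity, and handle edge cases better, and explain why each change helps.

Current prompt:
{current_prompt}
"""

def refine_prompt(current_prompt: str, invoke_llm) -> str:
    """Return a candidate refinement of `current_prompt` produced by the LLM.

    `invoke_llm` is assumed to take a prompt string and return the model's text
    response (for example, a thin wrapper around the Bedrock call shown earlier).
    """
    return invoke_llm(META_PROMPT.format(current_prompt=current_prompt))
```

Each candidate refinement is then scored against the ground truth dataset (next subsection) before it replaces the baseline prompt.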
Ground Truth Dataset for Evaluation 

Use a ground truth dataset to validate refined prompts against pre-labeled examples. This ensures that improvements suggested by the meta-prompt are objectively tested. Key steps include: 

  • Evaluate the refined prompt on classification accuracy, precision, recall, and F1-score using the ground truth dataset. 
  • Compare these metrics against the original prompt to ensure genuine improvement. 
  • Use misclassified examples to further identify weaknesses and refine prompts iteratively. 
Automated Testing and Feedback Loop 

Implement an automated system to: 

  • Test the refined prompt on a validation set. 
  • Log performance metrics, including correct classifications, errors, and cases where ambiguity persists. 
  • Highlight specific prompts or input types that consistently underperform for further manual refinement. 
Version Control and Experimentation 

Maintain a version-controlled repository for prompts. Track: 

  • Changes made during each refinement cycle. 
  • Associated performance metrics. 
  • Rationale behind prompt modifications. This documentation provides a knowledge base for future refinements and prevents regressions. 
Benefits of a Prompt Refinement Pipeline 
  • Systematic Improvement – A structured approach ensures refinements are not ad hoc but are guided by data-driven insights and measurable results. 
  • Scalability – By automating key aspects of the refinement process, the pipeline scales effectively with larger datasets and more complex classification hierarchies. 
  • Model-Agnostic – The pipeline can be used with various LLMs, such as Anthropic’s models, OpenAI’s ChatGPT, or Google Gemini. This flexibility enables organizations to adopt or switch LLM providers without losing the benefits of the refinement process. 
  • Increased Robustness – Leveraging ground truth datasets ensures that prompts are evaluated on real-world examples, helping the model handle diverse and ambiguous scenarios with greater reliability. 
  • Meta-Prompt Benefits – Meta-prompts provide an efficient mechanism to leverage LLM capabilities for self-improvement. By incorporating LLM-generated suggestions, the system continuously evolves in response to new challenges or requirements. 
  • Error Analysis – The feedback loop enables a focused analysis of misclassifications, guiding the creation of targeted prompts that address specific failure cases or edge conditions. 
Iterative Workflow for Prompt Refinement Pipeline 
  • Baseline Testing – Start with an initial prompt and evaluate it on the ground truth dataset. Log performance metrics. 
  • Meta-Prompt Refinement – Use a meta-prompt to generate improved versions of the initial prompt. Select the most promising refinement. 
  • Validation and Comparison – Test the refined prompt on the dataset, comparing results to the baseline. Identify improvements and areas where performance remains suboptimal. 
  • Targeted Refinements – For consistently misclassified samples, manually analyze and refine the prompt further. Re-evaluate until significant performance gains are achieved. 
  • Deployment and Monitoring – Deploy the improved prompt into production and monitor real-world performance. Incorporate newly encountered edge cases into subsequent iterations of the refinement pipeline. 

A prompt refinement pipeline provides a robust framework for systematically improving the performance of LLMs in hierarchical multiclass classification tasks. By combining meta-prompts, ground truth datasets, and automated evaluation, this approach ensures continuous improvement, scalability, and adaptability to new challenges, resulting in a more reliable and efficient classification system. 

]]>
https://blogs.perficient.com/2024/11/20/multiclass-text-classification-using-llm-mtc-llm-a-comprehensive-guide/feed/ 0 372343
AI Regulations for Financial Services: Japan https://blogs.perficient.com/2024/11/19/ai-regulations-for-financial-services-japan/ https://blogs.perficient.com/2024/11/19/ai-regulations-for-financial-services-japan/#respond Tue, 19 Nov 2024 15:13:32 +0000 https://blogs.perficient.com/?p=370870

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the risk and regulatory trends of AI regulation, not only in the US but around the world, wherever their firms are likely to have investment and trading operations.

Japan has yet to pass a law or regulation specifically directed at the use of AI at financial services firms. For now, the Japanese government and regulators are taking an indirect approach, supporting a policy goal of prioritizing innovation while minimizing foreseeable harms.

On April 19, 2024, the Japanese government published new “AI Guidelines for Business Version 1.0” (the “Guidelines”). While not legally binding, the Guidelines are expected to support and induce voluntary efforts by developers, providers, and business users of AI systems through compliance with generally recognized AI principles. They are similar to the EU regulations discussed previously in that they propose a risk-based approach.

As noted on page 26 of the English version of the Guidelines, the Guidelines promote “agile governance” where “multiple stakeholders continuously and rapidly run a cycle consisting of environment and risk analysis, goal setting, system design, operation and then evaluation in various governance systems in companies, regulations, infrastructure, markets, social codes and the like”.

In addition to the Guidelines, an AI Strategy Council, a government advisory body, was established to consider approaches for maximizing the potential of AI while minimizing the potential risks to the financial system. On May 22, 2024, the Council submitted draft discussion points concerning the advisability and potential scope of any future regulation.

Finally, a working group in the Japanese Parliament has proposed the first specific Japanese regulation of AI, “the Basic Act on the Advancement of Responsible AI,” which proposes a hard law approach to regulate certain generative AI foundation models. If passed as-is, the Japanese government would designate the AI systems and developers that are subject to regulation; impose obligations on them with respect to the vetting, operation, and output of the systems; and require periodic reports concerning AI systems.

The proposed obligations would provide a general framework, while industry groups for financial services firms would work with the Japanese Financial Services Agency (“JFSA”) to establish the specific standards by which firms would comply. It is further thought that the government would have the authority to monitor AI developers and impose fines and penalties for violations of the reporting obligations and/or compliance with the substance of the law.

]]>
https://blogs.perficient.com/2024/11/19/ai-regulations-for-financial-services-japan/feed/ 0 370870
A Comprehensive Guide to IDMC Metadata Extraction in Table Format https://blogs.perficient.com/2024/11/16/a-comprehensive-guide-to-idmc-metadata-extraction-in-table-format/ https://blogs.perficient.com/2024/11/16/a-comprehensive-guide-to-idmc-metadata-extraction-in-table-format/#respond Sun, 17 Nov 2024 00:00:27 +0000 https://blogs.perficient.com/?p=372086

Metadata Extraction: IDMC vs. PowerCenter

When we talk about metadata extraction, IDMC (Intelligent Data Management Cloud) can be trickier than PowerCenter. Let’s see why.
In PowerCenter, all metadata is stored in a local database. This setup lets us use SQL queries to get data quickly and easily. It’s simple and efficient.
In contrast, IDMC relies on the IICS Cloud Repository for metadata storage. This means we have to use APIs to get the data we need. While this method works well, it can be more complicated. The data comes back in JSON format. JSON is flexible, but it can be hard to read at first glance.
To make it easier to understand, we convert the JSON data into a table format. We use a tool called jq to help with this. jq allows us to change JSON data into CSV or table formats. This makes the data clearer and easier to analyze.

In this section, we will explore jq. jq is a command-line tool that helps you work with JSON data easily. It lets you parse, filter, and change JSON in a simple and clear way. With jq, you can quickly access specific parts of a JSON file, making it easier to work with large datasets. This tool is particularly useful for developers and data analysts who need to process JSON data from APIs or other sources, as it simplifies complex data structures into manageable formats.

For instance, if the requirement is to gather Succeeded Taskflow details, this involves two main processes. First, you’ll run the IICS APIs to gather the necessary data. Once you have that data, the next step is to execute a jq query to pull out the specific results. Let’s explore two methods in detail.

Extracting Metadata via Postman and jq:-

Step 1:
To begin, utilize the IICS APIs to extract the necessary data from the cloud repository. After successfully retrieving the data, ensure that you save the file in JSON format, which is ideal for structured data representation.
(Screenshots: the Postman output for the API call, and saving the response as a JSON file)

Step 2:
Construct a jq query to extract the specific details from the JSON file. This will allow you to filter and manipulate the data effectively.

Windows:-
(echo Taskflow_Name,Start_Time,End_Time & jq -r ".[] | [.assetName, .startTime, .endTime] | @csv" C:\Users\christon.rameshjason\Documents\Reference_Documents\POC.json) > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:-
jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' /opt/informatica/test/POC.json > /opt/informatica/test/Final_results.csv

Step 3:
To proceed, run the jq query in the Command Prompt or Terminal. Upon successful execution, the results will be saved in CSV file format, providing a structured way to analyze the data.

(Screenshots: executing the jq query in Command Prompt, and the resulting CSV file)

Extracting Metadata via Command Prompt and jq:-

Step 1:
Formulate a cURL command that utilizes IICS APIs to access metadata from the IICS Cloud repository. This command will allow you to access essential information stored in the cloud.

Windows and Linux:-
curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json"

Step 2:
Develop a jq query that runs alongside the cURL command to extract the required details directly from the JSON response. This query will help you isolate the specific data points necessary for your project.

Windows:
(curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json") | (echo Taskflow_Name,Start_Time,End_Time & jq -r ".[] | [.assetName, .startTime, .endTime] | @csv") > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:
curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json" | jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' > /opt/informatica/test/Final_results.csv

Step 3:
Launch the Command Prompt and run the cURL command that includes the jq query. Upon running the query, the results will be saved in CSV format, which is widely used for data handling and can be easily imported into various applications for analysis.

(Screenshot: running the cURL command with the jq query in Command Prompt)

Conclusion
To wrap up, the methods outlined for extracting workflow metadata from IDMC are designed to streamline your workflow, minimizing manual tasks and maximizing productivity. By automating these processes, you can dedicate more energy to strategic analysis rather than tedious data collection. If you need further details about IDMC APIs or jq queries, feel free to drop a comment below!

Reference Links:-

IICS Data Integration REST API – Monitoring taskflow status with the status resource API

jq Download Link – Jq_Download

]]>
https://blogs.perficient.com/2024/11/16/a-comprehensive-guide-to-idmc-metadata-extraction-in-table-format/feed/ 0 372086
8 Digital Healthcare Trends For 2025 https://blogs.perficient.com/2024/11/15/digital-healthcare-trends/ https://blogs.perficient.com/2024/11/15/digital-healthcare-trends/#respond Fri, 15 Nov 2024 15:35:42 +0000 https://blogs.perficient.com/?p=359138

Our experts are closely monitoring eight healthcare trends that are shaping industry leaders’ strategies in 2025.

And this year is especially interesting, as 2024’s U.S. election results could significantly shift healthcare policy and impact healthcare access, affordability, regulation, and innovation.

As such, forward-looking healthcare organizations (HCOs) that are on-track to differentiate their brand in the modern marketplace demonstrate some key characteristics:

  • Pragmatically progressive strategies
  • Strong partnerships to see those strategies through

Let’s dive into the eight healthcare trends and pragmatic innovations that our experts are helping HCOs navigate in 2025.

Healthcare Trend #1: AI Disruption and Enablement

Healthcare has seen a surge of interest in AI, with the market set to soar to $187.95 billion by 2030. But the industry faces unique challenges that other sectors don’t encounter. Strict regulations around HIPAA, PHI, and PII create significant barriers, making it difficult to adopt off-the-shelf AI solutions from fields like commerce or digital experience. These regulations demand that healthcare AI be specifically tailored to ensure data privacy, security, and compliance, limiting the utility of plug-and-play approaches seen in other industries.

Recommended Approach: AI should not be viewed as a standalone strategy but rather as a powerful enabler of broader business objectives. A well-formed strategy aligns key business priorities with organizational capabilities – people, technology, and processes – to create a cohesive framework. AI’s transformative potential can then be harnessed to address high-impact use cases for HCOs that are defined by clear KPIs and measurable outcomes. However, this potential can only be fully realized if AI is implemented with careful consideration of ethical, security and privacy, and oversight issues. This approach ensures that AI drives tangible value, tailored to the unique needs and strengths of the organization.

Success In Action: Accelerating CSR Support of Benefits Questions Using GenAI

Healthcare Trend #2: Cost Management Without Sacrificing Agility

HCOs continue to face substantial challenges in maintaining margins. While there are many macro and operational factors at play, cost management will play a key part in C-suite planning for the foreseeable future. Against this background, leaders are still under intense competitive pressure to improve many aspects of the digital experience. This tension is driving renewed interest in automation, including AI, and an emphasis on MVP+ and Agile delivery of everything from data modernization to websites and search.

Recommended Approach: Strategic cohesion is vital to ensure initiatives are supported by extremely clear goals and KPIs, and ultimately deliver business value and better health outcomes. A rigorous yet practical business transformation mindset has therefore never been more important. Leaders must prioritize technology investments that balance shorter-term wins and longer-range viability. Cost containment will require compromises. Thus, organizational alignment and change management become even more vital as teams competing for technology development dollars evolve their focus from departmental goals to enterprise sustainability.

You May Enjoy: 10 Ways Agile Supports Product-Driven Healthcare

Healthcare Trend #3: Clinician Burnout and Patient Impacts

Approximately 63% of physicians report burnout at least once a week. Clinician burnout not only exacerbates staffing challenges and jeopardizes the health and well-being of frontline healthcare workers, it also poses critical risks to patient safety, care quality, and the long-term sustainability of HCOs. Burnout can lead to increased medical errors, compromised decision-making, and diminished patient-provider relationships, directly impacting the experience and outcomes for patients and potentially increasing insurance premiums and out-of-pocket expenses for members. Every departing nurse or physician deepens the cycle, as budget freezes and shortages in the workforce make it difficult, if not impossible, to replace these key personnel. As the pressure mounts, remaining staff and healthcare consumers all suffer – with longer wait times, reduced continuity of care, and overall diminished access to services, threatening the very stability of healthcare delivery systems.

Recommended Approach: Ease the burden on clinicians by first understanding teams’ day-to-day friction points. Engaging directly with end-users ensures their voices inform your modernization efforts, fostering a culture of collaboration that can drive meaningful change. This open dialogue cultivates powerful change advocates who will champion the adoption of digital investments, such as smart automation, trusted data, advanced analytics, and integrated consumer experiences. Furthermore, organizations must strategically engage and resonate with providers who are contemplating career transitions, ensuring that their needs and aspirations are addressed. These efforts not only contribute to your HCO’s bottom line but also enhance the overall experience for everyone—providers, patients, and caregivers alike. In both the short- and long-term, these initiatives will build trust within your consumer base, positioning your organization as a desirable destination for care and ultimately fostering a healthier, more engaged community.

See Also: Perficient Mentioned in Two Forrester Reports on Tech-Enabled Clinician Experiences

Healthcare Trend #4: Experiences That Build Trust

Research from Gallup showed that in 2023, consumers had some of the lowest levels of trust in the healthcare industry ever recorded. Although we are seeing levels of trust in HCOs begin to improve, they still have a long way to go. 2025 will see the continued push to meet healthcare consumers’ demand for convenience and personalized digital experiences.

Recommended Approach: From everyday commerce to the 2024 presidential election, we continue to see one clear fact: It’s imperative to know your audience. There is no “typical” healthcare consumer, and if you don’t treat people as individuals with unique, personal needs, you risk losing them to another HCO that does. Your organization must incorporate comprehensive healthcare personas and journeys to fully understand the people you serve, how they want you to communicate with them, and how they access your care or services — or risk losing them. Consider potential areas of mistrust for your organization and address them now to build consumers’ confidence. Key areas where we often help HCOs do just that are through digital front door strategies, implementation of intelligent search, and reimagining information architecture (IA).

You May Enjoy: 5 Takeaways: Enhancing Trust in Healthcare [Webinar]

Healthcare Trend #5: Competition From Disruptive Healthcare Models

We’ve seen upheaval in the realm of healthcare disruptors — as Walmart has pulled out and Walgreens has pulled back, Best Buy has jumped in. Healthcare disruptors are finding out something traditional healthcare organizations (HCOs) have known for some time: Success in the healthcare industry is complicated. But we are seeing disruptors to the traditional healthcare model find that success. Companies like Hims, Hers, and Henry Meds combine the best of empathetic, consumer-friendly language with convenient, powerful commerce experiences designed to help users wayfind and convert quickly.

Recommended Approach: Traditional HCOs that want to compete against successful disruptors require thoughtful, thorough business transformation. Take stock of your organization’s KPIs and how you are measuring success. Are you driving toward growth? If so, is it the right kind of growth to stand out? Next, determine whether you’re meeting the evolving expectations of today’s healthcare consumers. Be mindful of considerations around health equity and social determinants of health (SDOH) and align your strategies to match. Ultimately, we’re seeing powerful outcomes from organizations that shift from a project-focused model to a product-driven approach. Product-driven healthcare enables greater agility to respond to market shifts and fluctuations, as well as industry trends, the uncertainty of changes in healthcare regulation, and the demands of today’s consumers.

Read More: Is Your Healthcare Organization Really Product-Driven?

Healthcare Trend #6: Better Health Outcomes Through Shared Health Data

Efforts to reduce costs and improve health outcomes are driving collaboration among HCOs as health plans and integrated systems aim to more-holistically support consumer health, ease the care journey, and reduce the cost of care. Clinical data spanning an individual’s various provider relationships is crucial for a comprehensive patient view. Meanwhile, leaders continue to explore ways AI and automation can illuminate a 360-degree consumer view to power personalization, boost retention, and increase business resilience. These discussions are forcing focus toward data quality, consistency, governance, and bias.

Recommended Approach: Cloud services’ importance has surged to meet the growing need for real-time, accessible data. We recommend that HCOs continue building a scalable foundation to connect and integrate consumer data across health systems, providers, and insurers. This requires focus in several key areas, including data integration, data management, and data consistency and quality. Only then can data be richly woven into a reliable 360-degree view of the consumer that spans and supports better care management, marketing engagement, and support services. To optimize costs, we anticipate increasing adoption of data virtualization (a.k.a., Data as a Service, or DaaS). This unified data access layer approach bypasses the need to replicate data across various patient and member data management systems (e.g., data warehouses, MarTech, contact center, etc.), and offers a single view of enriched and transformed data from multiple data sources.

Explore More: Data-Driven Companies Move Faster and Smarter

Healthcare Trend #7: Care For the Aging and Underserved

An aging consumer base and a growing emphasis on health equity are reshaping patient engagement and business models for HCOs. According to the National Institute on Aging, approximately 85% of older adults have at least one chronic health condition, and 60% have at least two chronic conditions. In response, health insurers intensified focus on Medicare Advantage and Medicaid managed care plans to effectively serve a more diverse and underserved member population. Concurrently, providers are expanding into digitally connected services, such as telemedicine, remote patient monitoring, and personalized care plans, enabling patients to manage their health in more convenient and accessible settings. These shifts not only enhance patient experience and satisfaction but also foster a more inclusive healthcare system that addresses the unique needs of various demographic groups.

Recommended Approach: Deeply understand your patients’ and members’ journeys so you can deliver differentiated digital experiences in an increasingly crowded marketplace. Improve brand affinity with intuitive, personalized, accessible care moments that build trust (and bolster Star ratings). Intelligently automate systems and processes to optimize costs and build margin that can buffer potential shifts in reimbursement models. The integration of Social Determinants of Health (SDOH) data adds value by addressing factors like transportation, housing, and food security that impact health outcomes. Through a surround-care approach, powered with important health insights and intuitive tools, HCOs can strengthen community and individual health. This comprehensive strategy enhances engagement and trust while promoting better health outcomes and equity across diverse populations.

You May Also Appreciate: Diversity, Equity and Inclusion in Healthcare

Healthcare Trend #8: Mandate-Driven Transformation

Regulatory mandates continue to drive significant investment and effort from all HCOs. Leaders strive to meet evolving requirements in CMS interoperability and prior authorization, price transparency, TEFCA, and others. Meanwhile, HHS insight on PHI and legal cases muddy the waters of HIPAA. In general, the effort to understand expectations, implement new functionality, and abide by existing mandates continues to increase. These mandates may seem simple at first, but they have significant implications as insurers work to incorporate patient data using standards common to the provider world. HCOs cannot simply repurpose hastily constructed solutions from earlier mandates as a foundation for future compliance. Upcoming mandates are meant to build upon those that came before. Without a scalable approach and a thoughtful architecture, HCOs will find themselves with an ever-increasing debt burden.

Recommended Approach: We encourage leaders to identify mandates’ silver lining opportunities. After all, to remain competitive and compliant, HCOs must innovate in ways that add business value, meet consumers’ evolving expectations, build trust, and deliver equitable care and services. Achieving transformative outcomes and health experiences requires a digital strategy that not only satisfies mandates but also aligns the enterprise around a shared vision and actionable KPIs, ultimately keeping patients, members, and care teams at the heart of progress.

Therefore, we recommend that HCOs approach mandates as a set of iterations, using a strategy-first approach that holistically considers the broader mandate and regulatory landscape. Keep a pulse on what other healthcare organizations – especially new market entrants and disruptors – are doing. Adapt digital best practices from outside of the healthcare industry. And deeply understand the nuance of interoperability standards, patient data modeling, API gateways, and SMART on FHIR applications.

The most successful organizations will build a proper foundation that scales and supports successive mandates. Composable architecture offers a powerful, flexible approach that balances “best in breed,” fit-for-purpose solutions while bypassing unneeded, costly features or services. Tactically, organizations can accelerate value, privacy, and data quality with secure, compliant, and modern technology platforms and data architectures. It’s also vital to build trust in data and with consumers, paving the way for ubiquitous, fact-based decision making that supports health and enables relationships across the care continuum.

You May Enjoy: Empowering Healthcare Consumers and Their Care Ecosystems With Interoperable Data

Expert Digital Healthcare Consulting Services: Imagine, Create, Engineer, Run

In this next decade, advances in digital health, growing consumerism, and mounting financial constraints will propel how HCOs shape experiences and deliver equitable, high-quality, cost-effective care.

Perficient combines strategy, industry best practices, and technology expertise to deliver award-winning results for leading health plans and providers:

  • Business Transformation: Activate strategy for transformative outcomes and health experiences.
  • Modernization: Maximize technology to drive health innovation, efficiency, and interoperability.
  • Data Analytics: Power enterprise agility and accelerate healthcare insights.
  • Consumer Experience: Connect, ease, and elevate impactful health journeys.

We are trusted by leading technology partners and mentioned by analysts, and Modern Healthcare consistently ranks us as one of the largest healthcare consulting firms.

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to learn more.

]]>
https://blogs.perficient.com/2024/11/15/digital-healthcare-trends/feed/ 0 359138
AI Regulations for Financial Services: South Korea and the UK https://blogs.perficient.com/2024/11/14/ai-regulations-for-financial-services-south-korea-and-the-uk/ https://blogs.perficient.com/2024/11/14/ai-regulations-for-financial-services-south-korea-and-the-uk/#respond Thu, 14 Nov 2024 15:27:01 +0000 https://blogs.perficient.com/?p=370878

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the risk and regulatory trends of AI regulation, not only in the US but around the world, wherever their firms are likely to have investment and trading operations.

South Korea

In South Korea, efforts to enact an AI framework act have been under way since 2020. Nine different legislative proposals have been introduced, but none has been passed into law. While the Personal Information Protection Act (PIPA) includes provisions related to AI, such as the exercise of data subjects’ rights concerning automated decision-making, comprehensive legislation has yet to be enacted.

United Kingdom

Bank executives who have their London trading desks over in Canary Wharf must remember Brexit and that Great Britain is not in the EU. The Financial Conduct Authority (FCA) regulates artificial intelligence (AI) in the UK by focusing on identifying and mitigating risks, rather than prohibiting specific technologies. The FCA and FSA base their approaches to AI regulation on five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Similar to Japan and South Korea, the UK has yet to pass a law or regulation specifically directed at the use of AI at financial services firms.

]]>
https://blogs.perficient.com/2024/11/14/ai-regulations-for-financial-services-south-korea-and-the-uk/feed/ 0 370878
Intelligently Automating Prior Authorization to Build Consumer Trust in Healthcare https://blogs.perficient.com/2024/11/12/intelligently-automating-prior-authorization-to-build-consumer-trust-in-healthcare/ https://blogs.perficient.com/2024/11/12/intelligently-automating-prior-authorization-to-build-consumer-trust-in-healthcare/#respond Tue, 12 Nov 2024 22:36:17 +0000 https://blogs.perficient.com/?p=371945

Healthcare leaders are engaging us in a variety of discussions to explore intelligent automation’s role for complex business challenges, ranging from efforts to enhance consumer trust and use artificial intelligence (AI) in effective ways, to navigating change that comes with prior authorization mandates. This series shares key insights coming from those discussions.  

As the saying goes, diamonds are made under pressure, and the most impactful opportunities are often those that challenge leaders the most. 

Prior Authorization, In a Nutshell

The CMS Prior Authorization mandate, which goes into effect on January 1, 2026, aims to reduce guesswork for healthcare consumers and the administrative burden on care teams, and to improve patient/member care by streamlining processes and enhancing the exchange of health information. 

Enabling prior authorization through API development is a good start; however, APIs are not a comprehensive solution. Rather, the introduction of multiple third-party APIs creates new processes and steps, often prompting manual follow-ups to track and connect data gathered from multiple sources. In addition, these new data points require new data models and methods to handle patient data.  

To address these inherent challenges, healthcare leaders are prioritizing investments in interoperability and automation technologies. 

Intelligent Automation Supports Prior Authorization and Business Efficiencies

True trust-enhancing transparency can be unlocked through intelligent automation. This is especially true as low-code, more-approachable AI, machine learning (ML) and Generative AI (GenAI) capabilities enter the mix. 

Intelligent automation connects digital process automation (DPA), robotic process automation (RPA), and artificial intelligence (AI) to deliver efficient and intelligent processes and align all aspects of your organization with the vision of constant process improvement, technological integration, and increasing consumer value. 

Although DPA, RPA and AI don’t make final decisions, they can streamline and leverage information, so the right decision gets made. Health insurers are always seeking access to actionable information about their members while adhering to data privacy laws and regulations. 

Getting to that actionable data requires multiple considerations: 

  • Using best practices to assemble and curate the right data fields for any given use case 
  • A continuous process of identifying and resolving issues in core systems 
  • Appropriate environments in which to store data to maintain its integrity, security, and accessibility 

Only then can you effectively enable specific sub-functions (i.e., functions that ingest the data and then act or recommend actions) to happen accurately and on time. 

Streamline and Optimize Prior Auth Processes

Every step in the prior authorization process has potential for improvement using intelligent automation. It can support, enhance, and accelerate each step based on rules engines, event logs, decision rules, and simple automations of high-volume processes. 

These intelligent tools streamline information sharing between payers and providers, reducing the need for repeated exchanges and guesswork, enhancing clinical review, and ensuring timely, accurate decisions. 

Intelligent automation rapidly optimizes the prior authorization workflows that occur at the edge of what can conveniently and cost-effectively be managed through APIs. AI and machine learning (ML) can assist required communications, reporting, and decision flows in many ways, including: 

  • Orchestration: Automate the coordination of tasks and data flow between disparate systems and stakeholders. 
  • Monitoring: Continuously track the status of prior authorization requests and flag any issues or delays. 
  • Standardization: Ensure consistent repeatable workflows and processes across all systems to facilitate smoother information exchange. 
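
As a purely illustrative example of the monitoring piece, a scheduled job can poll pending prior authorization requests and flag any that are approaching their decision deadline. In the Python sketch below, fetch_pending_requests and notify_care_team are hypothetical stand-ins for your own system integrations, and the turnaround thresholds are placeholders to be aligned with your actual requirements.

from datetime import datetime, timedelta, timezone

# Placeholder turnaround windows; align these with your own commitments and applicable rules.
EXPEDITED_WINDOW = timedelta(hours=72)
STANDARD_WINDOW = timedelta(days=7)

def flag_overdue_requests(fetch_pending_requests, notify_care_team):
    """Poll pending prior authorization requests and flag any nearing their deadline."""
    now = datetime.now(timezone.utc)
    flagged = []
    # fetch_pending_requests() is assumed to return dicts with an id, a timezone-aware
    # submitted_at datetime, and an expedited flag.
    for request in fetch_pending_requests():
        window = EXPEDITED_WINDOW if request.get("expedited") else STANDARD_WINDOW
        deadline = request["submitted_at"] + window
        if now >= deadline - timedelta(hours=12):  # warn when within 12 hours of the deadline
            flagged.append(request["id"])
            notify_care_team(request_id=request["id"], due_by=deadline.isoformat())
    return flagged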

YOU MAY ALSO ENJOY: Evolving Healthcare: Generative AI Strategy for Payers and Providers 

Best Practices to Transform Prior Authorization Experiences

Intelligent automation enhances and overlays existing systems, helping to accelerate the prior authorization process with greater efficiency and generating insights into any recurring root causes in process breakdowns. 

As you’re approaching your prior authorization initiatives, we recommend the following transformation best practices: 

Transformation Tip #1: Cross-Functional Feedback

Maintaining cross-functional feedback is essential to identify and address pain points effectively. Automation allows healthcare providers to quickly identify and communicate common pain points, such as inaccurate or incomplete record keeping, helping them avoid common pitfalls in the prior authorization process. 

Transformation Tip #2: Measurement and Tracking

Automated processes provide valuable insights for contracting, reporting requirements, and more. Measuring and tracking these processes greatly improves efficiency, effectiveness, and the consumer experience. This information can be used to improve upstream messaging to patients and members about prior authorizations.  

The overlay of technology not only increases operational efficiencies, but it also provides valuable insights that can be used to improve communication and support for consumers. 

Empowering Solutions for Healthcare

We partner with healthcare leaders to optimize prior authorization experiences and drive transparent, consistent engagement with consumers.  

Interested in learning more? In a recent webinar, our experts explored how better prior authorization experiences could enhance consumer trust in healthcare. 

Discover why we’ve been trusted by the 10 largest healthcare systems and 10 largest health insurers and are consistently recognized by Modern Healthcare as a leading healthcare consulting firm. Contact us today to explore how we can help you forge better experiences and improve outcomes.

]]>
https://blogs.perficient.com/2024/11/12/intelligently-automating-prior-authorization-to-build-consumer-trust-in-healthcare/feed/ 0 371945
AI Regulations for Financial Services: European Union https://blogs.perficient.com/2024/11/12/ai-regulations-for-financial-services-european-union/ https://blogs.perficient.com/2024/11/12/ai-regulations-for-financial-services-european-union/#respond Tue, 12 Nov 2024 15:19:00 +0000 https://blogs.perficient.com/?p=370843

Artificial intelligence (AI) is poised to affect every aspect of the world economy and play a significant role in the global financial system, leading financial regulators around the world to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the financial systems include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators.

It is the goal of Perficient’s Financial Services consultants to give financial services executives, whether they lead banks, bank branches, bank holding companies, broker-dealers, financial advisors, insurance companies, or investment management firms, the knowledge they need to understand the status of AI regulation and the risk and regulatory trends of AI regulation, not only in the US but around the world, wherever their firms are likely to have investment and trading operations.

EU Regulations

European Union lawmakers signed the Artificial Intelligence (“AI”) Act in June 2024. The AI act, the first binding worldwide horizontal regulation on AI, sets a common framework for the use and supply of AI systems by financial institutions in the European Union.

The new act offers a classification for AI systems with different requirements and obligations tailored to a ‘risk-based approach’. In our opinion, the proposed risk-based system will be very familiar to bankers who remember the original rollout and asset-based classification system required by regulators in the original BASEL risk-based capital requirements of the early 1990s. Some AI systems presenting ‘unacceptable’ risks are outright prohibited regardless of controls. A wide range of ‘high-risk’ AI systems that can have a detrimental impact on people’s health, safety or on their fundamental rights are permitted, but subject to a set of requirements and obligations to gain access to the EU market. AI systems posing limited risks because of their lack of transparency will be subject to information and transparency requirements, while AI systems presenting what are classified “minimal risks” are not subjected to further obligations.

The regulation also lays down specific rules for General Purpose AI (GPAI) models and imposes more stringent requirements on GPAI models with “high-impact capabilities” that could pose a systemic risk and have a significant impact on the EU marketplace. The AI Act was published in the EU’s Official Journal on July 12, 2024, and entered into force on August 1, 2024.

(Figure: the EU AI Act’s risk-based approach to classifying AI systems)

The EU AI act adopts a risk-based approach and classifies AI systems into several risk categories, with different degrees of regulation applying.

Prohibited AI practices

The final text prohibits a wider range of AI practices than originally proposed by the Commission because of their harmful impact:

  • AI systems using subliminal or manipulative or deceptive techniques to distort people’s or a group of people’s behavior and impair informed decision making, leading to significant harm;
  • AI systems exploiting vulnerabilities due to age, disability, or social or economic situations, causing significant harm;
  • Biometric categorization systems inferring race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation;
  • AI systems evaluating or classifying individuals or groups based on social behavior or personal characteristics, leading to detrimental or disproportionate treatment in unrelated contexts or unjustified or disproportionate to their behavior;
  • AI systems assessing the risk of individuals committing criminal offences based solely on profiling or personality traits;
  • AI systems creating or expanding facial recognition databases through untargeted scraping from the Internet or CCTV footage; and
  • AI systems inferring emotions in workplaces or educational institutions.

High-risk AI systems

The AI act identifies a number of use cases in which AI systems are to be considered high-risk because they can potentially create an adverse impact on people’s health, safety or their fundamental rights.

  • The risk classification is based on the intended purpose of the AI system. The function performed by the AI system and the specific purpose and modalities for which the system is used are key to determining if an AI system is high-risk or not. High-risk AI systems can be safety components of products covered by sectoral EU law (e.g. medical devices) or AI systems that, as a matter of principle, are classified as high risk when they are used in specific areas listed in an annex to the regulation. The Commission is tasked with maintaining an EU database for the high-risk AI systems listed in this annex.
  • A new test has been enshrined at the Parliament’s request (‘filter provision’), according to which AI systems will not be considered high risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. However, an AI system will always be considered high risk if the AI system performs profiling of natural persons.
  • Providers of such high-risk AI systems will have to run a conformity assessment procedure before their products can be sold and used in the EU. They will need to comply with a range of requirements including testing, data training and cybersecurity and, in some cases, will have to conduct a fundamental rights impact assessment to ensure their systems comply with EU law. The conformity assessment should be conducted either based on internal control (self-assessment) or with the involvement of a notified body (e.g. biometrics). Compliance with European harmonized standards to be developed will grant high-risk AI systems providers a presumption of conformity. After such AI systems are placed in the market, providers must implement post-market monitoring and take corrective actions if necessary.

Transparency risk

Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception, irrespective of whether they qualify as high-risk AI systems or not. Such systems are subject to information and transparency requirements. Users must be made aware that they interact with chatbots. Deployers of AI systems that generate or manipulate image, audio or video content (i.e. deep fakes), must disclose that the content has been artificially generated or manipulated except in very limited cases (e.g. when it is used to prevent criminal offences). Providers of AI systems that generate large quantities of synthetic content must implement sufficiently reliable, interoperable, effective and robust techniques and methods (such as watermarks) to enable marking and detection that the output has been generated or manipulated by an AI system and not a human. Employers who deploy AI systems in the workplace must inform the workers and their representatives.

Minimal risks

Systems presenting minimal risk for people (e.g. spam filters) are not subject to further obligations beyond currently applicable legislation (e.g. GDPR).

General-purpose AI (GPAI)

The regulation provides specific rules for general purpose AI models and for general-purpose AI models that pose systemic risks.

GPAI system transparency requirements

All GPAI models will have to draw up and maintain up-to-date technical documentation and make information and documentation available to downstream providers of AI systems. All providers of GPAI models have to implement a policy to respect EU copyright law, including through state-of-the-art technologies (e.g. watermarking), and to respect the lawful text- and data-mining exceptions envisaged under the Copyright Directive. In addition, GPAIs must draw up and make publicly available a sufficiently detailed summary of the content used in training the GPAI models according to a template provided by the AI Office. Financial Institutions headquartered outside the EU will have to appoint a representative in the EU. However, AI models made accessible under a free and open-source licence will be exempt from some of the obligations (i.e., disclosure of technical documentation) given they have, in principle, positive effects on research, innovation and competition.

Systemic-risk GPAI obligations

GPAI models with ‘high-impact capabilities’ could pose a systemic risk and have a significant impact due to their reach and their actual or reasonably foreseeable negative effects (on public health, safety, public security, fundamental rights, or the society as a whole). GPAI providers must therefore notify the European Commission if their model is trained using a cumulative amount of computing power exceeding 10^25 FLOPs (i.e. floating-point operations). When this threshold is met, the presumption will be that the model is a GPAI model posing systemic risks. In addition to the requirements on transparency and copyright protection falling on all GPAI models, providers of systemic-risk GPAI models are required to constantly assess and mitigate the risks they pose and to ensure cybersecurity protection. That requires keeping track of, documenting, and reporting to regulators serious incidents and implementing corrective measures.

Codes of practice and presumption of conformity

GPAI model providers will be able to rely on codes of practice to demonstrate compliance with the obligations set under the act. By means of implementing acts, the Commission may decide to approve a code of practice and give it a general validity within the EU, or alternatively, provide common rules for implementing the relevant obligations. Compliance with a European standard grants GPAI providers the presumption of conformity. Providers of GPAI models with systemic risks who do not adhere to an approved code of practice will be required to demonstrate adequate alternative means of compliance.

]]>
https://blogs.perficient.com/2024/11/12/ai-regulations-for-financial-services-european-union/feed/ 0 370843
Understanding Cybercrime-as-a-Service: A Growing Threat in the Digital World https://blogs.perficient.com/2024/11/12/understanding-cybercrime-as-a-service-a-growing-threat-in-the-digital-world/ https://blogs.perficient.com/2024/11/12/understanding-cybercrime-as-a-service-a-growing-threat-in-the-digital-world/#comments Tue, 12 Nov 2024 10:53:02 +0000 https://blogs.perficient.com/?p=371918

With just a cryptocurrency wallet, cybercriminals can now execute complex cyberattacks without advanced technical knowledge or sophisticated software. This alarming trend is a byproduct of the growing popularity of cloud computing and the “as-a-service” model, where services like infrastructure, recovery, and cybersecurity are now accessible on demand. Known as “cybercrime-as-a-service” (CaaS), this model has reshaped cyberattacks by lowering barriers to entry, turning the digital world into a profitable and accessible cybercrime ecosystem.

What is Cybercrime-as-a-Service?

Cybercrime-as-a-service refers to a business model where organized crime syndicates and threat actors offer specialized hacking capabilities for sale. These services are available through dark web marketplaces, exclusive forums, and even encrypted messaging apps like Telegram. Vendors provide cyberattack tools and expertise to customers, who pay in cryptocurrency to preserve anonymity, creating a secure transaction system and enabling even novice hackers to carry out sophisticated attacks. This ecosystem has contributed over $1.6 billion in annual revenue to the global cybercrime market.

Types of Cybercrime-as-a-Service

Cybercrime-as-a-service encompasses a variety of criminal offerings, each targeting specific objectives:

  1. Ransomware-as-a-Service (RaaS)
    RaaS is one of the most profitable CaaS segments, where attackers lease ransomware software to clients. The client executes an attack by encrypting data on target systems and demanding a ransom for decryption. Often, the “service provider” receives a percentage of the ransom, making this a lucrative model for cybercriminals.
  2. Phishing-as-a-Service
    Phishing-as-a-Service (PhaaS) platforms offer ready-made phishing kits, targeting email, social media, or other communication channels. These kits typically come with templates, scripts, and customization options, enabling even non-technical users to launch sophisticated phishing campaigns that trick victims into revealing sensitive information.
  3. DDoS-as-a-Service
    Distributed Denial of Service (DDoS)-as-a-Service allows individuals to hire attackers who overload a target’s network, effectively shutting down websites or services. This service is frequently used to harm businesses by disrupting their operations or to demand ransom payments.
  4. Exploit-as-a-Service
    In Exploit-as-a-Service, vendors provide exploits that target specific software vulnerabilities. These services are typically marketed to attackers who want to breach particular networks or gain unauthorized access to secure systems, often for data theft or further exploitation.

The availability of these services has transformed the underground market into a virtual “one-stop shop” for digital crime, where criminals can easily acquire all the necessary resources.

Role of the Dark Web in Cybercrime-as-a-Service

The Dark Web, a hidden layer of the internet, enables users to operate anonymously and has become a hub for illegal activity. Cybercriminals use the Dark Web to connect with vendors, buy or sell stolen credentials, and procure hacking tools or services. This anonymity adds to the security of transactions, creating a low-risk, high-reward marketplace for would-be attackers.

Defending Against Cybercrime-as-a-Service

Unlike specific cyberattacks, CaaS represents a business model, complicating efforts to counteract it. To defend against this growing threat, organizations must strengthen their cybersecurity defenses with proactive and continuous monitoring. While reactive tools, like traditional antivirus software, may catch known threats, modern cybersecurity demands adaptive solutions.

Many companies now offer cybersecurity as a service, including IBM, Palo Alto Networks, Cisco Secure, Fortinet, and Trellix. These providers combine cutting-edge technology with human expertise to detect, monitor, and respond to cyber threats. Leveraging machine learning, threat intelligence, and expert analysts, cybersecurity services are now more efficient at identifying and neutralizing potential attacks early—often before they can cause any significant damage.

Conclusion

Cybercrime-as-a-service represents a dark shift in how cyberattacks are conducted, making hacking tools and expertise widely available to criminals of all levels. This calls for a proactive defense, as businesses and individuals are increasingly at risk. With comprehensive cybersecurity as a service solutions, organizations can stay vigilant, constantly improving defenses to keep their systems secure in a changing digital environment. By staying one step ahead of cybercriminals, we can begin to mitigate the impacts of this growing cybercrime economy.

]]>
https://blogs.perficient.com/2024/11/12/understanding-cybercrime-as-a-service-a-growing-threat-in-the-digital-world/feed/ 4 371918