Responsible AI: Expanding Responsibility Beyond the Usual Suspects
Perficient Blogs, December 4, 2024
https://blogs.perficient.com/2024/12/04/responsible-ai-expanding-responsibility-beyond-the-usual-suspects/

In the world of AI, we often hear about “Responsible AI.” However, if you ask ten people what it actually means, you might get ten different answers. Most will focus on ethical standards: fairness, transparency, and social good. But is that the end of responsibility? Many of our AI solutions are built by enterprise organizations that aim to meet both ethical standards AND business objectives. To whom are we responsible, and what kind of responsibility do we really owe? Let’s dive into what “Responsible AI” could mean with a broader scope. 

Ethical Responsibility: The Foundation of Responsible AI 

Ethical responsibility is often our go-to definition for Responsible AI. We’re talking about fairness in algorithms, transparency in data use, and minimizing harm, especially in areas like bias and discrimination. It’s crucial and non-negotiable, but ethics alone don’t cover the full range of responsibilities we have as business and technology leaders. As powerful as ethical guidelines are, they only address one part of the responsibility puzzle. So, let’s step out of this comfort zone a bit to dive deeper. 

Operational Responsibility: Keeping an Eye on Costs 

At its core, AI is a resource-intensive technology. When we deploy AI, we’re not just pushing lines of code into the world; we’re managing data infrastructure, compute power, and – let’s face it – a budget that often feels like it’s getting away from us.  

This brings up a question we don’t always want to ask: is it responsible to use up cloud resources so that the AI can write a sonnet? 

Of course, some use cases justify high costs, but we need to weigh the value of specific applications. Responsible AI isn’t just about whether we can do something; it’s about whether we should, and whether it’s appropriate to pour resources into every whimsical or niche application. 

 Operational responsibility means asking tough questions about costs and sustainability—and, yes, learning to say “no” to AI haikus. 

Responsibility to Employees: Making AI Usable and Sustainable 

If we only think about responsibility in terms of what AI produces, we miss a huge part of the equation: the people behind it. Building Responsible AI isn’t just about protecting the end user; it’s about ensuring that the developers, data scientists, and support teams building AI systems have the tools and support they need.  

Imagine the mental gymnastics required for an employee navigating overly complex, high-stakes AI projects without proper support. Not fun. Frankly, it’s an environment where burnout, inefficiency, and mistakes become inevitable. Responsible AI also means being responsible to our employees by prioritizing usability, reducing friction, and creating workflows to make their jobs easier, not more complicated. Employees who are empowered to build reliable, ethical, and efficient AI solutions ultimately deliver better results.  

User Responsibility: Guardrails to Keep AI on Task 

Users love pushing AI to its limits—asking it quirky questions, testing its boundaries, and sometimes just letting it meander into irrelevant tangents. While AI should offer flexibility, there’s a balance to be struck. One of the responsibilities we carry is to guide users with tailored guardrails, ensuring the AI is not only useful but also used in productive, appropriate ways.  

That doesn’t mean policing users, but it does mean setting up intelligent limits to keep AI applications focused on their intended tasks. If the AI’s purpose is to help with research, maybe it doesn’t need to compose a 19th-century-style romance novel (as entertaining as that might be). Guardrails help direct users toward outcomes that are meaningful, keeping both the users and the AI on track. 

Balancing Responsibilities: A Holistic View of Responsible AI 

Responsible AI encompasses a variety of key areas, including ethics, operational efficiency, employee support, and user guidance. Each one adds an additional layer of responsibility, and while these layers can occasionally conflict, they’re all necessary to create AI that truly upholds ethical and practical standards. Taking a holistic approach requires us to evaluate trade-offs carefully. We may sometimes prioritize user needs over operational costs or support employees over certain ethics constraints, but ultimately, the goal is to balance these responsibilities thoughtfully. 

Expanding the scope of “Responsible AI” means going beyond traditional ethics. It’s about asking uncomfortable questions, like “Is this AI task worth the cloud bill?” and considering how we support the  people who are building and using AI. If we want AI to be truly beneficial, we need to be responsible not only to society at large but also to our internal teams and budgets. 

Our dedicated team of AI and digital transformation experts is committed to helping the largest organizations drive real business outcomes. For more information on how Perficient can implement your dream digital experiences, contact us to start your journey.

Transforming Knowledge Work and Product Development with AI Agents
Perficient Blogs, November 25, 2024
https://blogs.perficient.com/2024/11/25/transforming-knowledge-work-and-product-development-with-ai-agents/

Now more than ever, we’re witnessing a significant shift from simple AI capabilities to action-driven AI Agents that promise to revolutionize how we approach knowledge work, product development, and business processes. Drawing insights from Perficient’s industry experts, we’re constantly exploring Generative AI, the emerging world of agentic frameworks, and their potential to reshape organizational capabilities. 

 

Beyond Chatbots: The Evolution of AI Agents 

For the past few years, many organizations have been deploying AI via generative AI chatbots – tools that take prompts, access a knowledge base, and generate responses. While these were groundbreaking tools for improving business functions, they are essentially one-dimensional: they can provide information but cannot take meaningful action. 

While AI chatbots can respond to user input, AI Agents can take action and perform tasks within defined parameters. 

AI Agents are rapidly expanding across multiple domains including virtual assistance, complex task management, social media content, product development, and more.  

But what makes an AI Agent truly revolutionary? It’s about creating a more nuanced, human-like intelligence. An AI Agent is characterized by: 

  1. Knowledge Base: Similar to chatbots, but augmented with information that supports outputs, standards, or historical content.
  2. Role Definition: A clear, contextual understanding of its purpose and role often within a team.
  3. Skills and Cooperation: The ability to make decisions and take action (within defined parameters) while providing and taking feedback within a team of other Agents and humans.

 

Navigating the AI Agent Implementation Journey 

Imagine transforming your organization’s potential, not through a massive overhaul, but through iterative, strategic steps. Successful AI Agent implementation is less about a revolutionary leap and more about a thoughtful, incremental progression. 

Like most transformations, planning where to provide value is critical. It’s important to identify pain points that can be delegated to an Agent, rather than distracting your most talented people away from valuable work.  

Perficient’s AI Accelerated Modeling Process (AMP) can help you implement Agentic AI quickly and responsibly. AI AMP is a short, focused four- to six-week initiative with the goal of developing an interactive model that demonstrates how your organization can leverage machine learning, natural language processing, and cognitive computing to jump-start AI adoption. 

Organizations are discovering AI Agents aren’t just theoretical – they’re practical problem-solvers across multiple domains: 

  • An Agentic team that can reverse engineer legacy software, documenting business requirements and how the software currently works. 
  • Supplementing your Product Owners by checking the quality of backlog artifacts across multiple teams, providing feedback and enriching those requirements. 
  • Creating synthetic data that can provide greater test coverage at scale. 
  • A social media team with a writer, reviewer, and editor that have knowledge of the brand and previous posts and can critique the writing based on the research. 
  • An Agentic CX team that can merge research, gather initial insights and draft presentations so the human team can focus on the deep insights and recommendations. 
  • Automatic routing of emails based on the content and context of customer service requests. 

 

AI Agents Aren’t Just About Capability – They’re Also About Responsibility.  

Security isn’t an afterthought; it’s the foundation. Perficient’s PACE Framework is a holistic approach to designing tailored operational AI programs that empower business and technical stakeholders to innovate with confidence while mitigating risks and upholding ethical standards. 

Our comprehensive engagement model evaluates your organization against the PACE framework, tailoring programs and processes to effectively and responsibly integrate AI capabilities across your organization. 

 

The Future of Work 

The transformative potential of AI agents extends far beyond traditional chatbots, representing a strategic pathway for organizations to augment human capabilities intelligently and responsibly.  

To explore how your enterprise can benefit from Agentic AI, reach out to Perficient’s team of experts today. The next wave of AI is about creating intelligent, collaborative agentic systems that augment and transform our capabilities, one specialized Agent at a time. 

Adaptive by Design: The Promise of Generative Interfaces
Perficient Blogs, November 20, 2024
https://blogs.perficient.com/2024/11/20/adaptive-by-design-the-promise-of-generative-interfaces/

Imagine a world where digital interfaces anticipate your needs, understand your preferences, and adapt in real-time to enhance your experience. This is not a futuristic daydream, but the promise of generative interfaces. 

Generative interfaces represent a new paradigm in user experience design, moving beyond static layouts to create highly personalized and adaptive interactions. These interfaces are powered by generative AI technologies that respond to each user’s unique needs, behaviors, and context. The result is a fluid, intuitive experience—a digital environment that transforms, adapts, and grows with its users. 

 

The Evolution of User Interaction 

Traditional digital interfaces have long relied on predefined structures and user journeys. While these methods have served us well, they fall short of delivering truly personalized experiences. 

Generative interfaces, on the other hand, redefine personalization and interactivity at the level of individual interactions. They have the capability to bring data and components directly to users from multiple systems, seamlessly integrating them into a cohesive user experience.  

Users can perform tasks without switching applications as generative systems dynamically render necessary components within the interface, such as images, interactive components, and data visualizations. 

This adaptability means that generative interfaces continually evolve based on users’ inputs, preferences, and behaviors, creating a more connected and fluid experience. Instead of users adapting to software, the software adapts to them, enhancing productivity, reducing friction, and making digital interactions feel natural. 

 

Adaptive Design Principles 

At the heart of generative interfaces lies the principle of adaptability. This adaptability is more than just personalization—it’s about creating an interface that is in constant dialogue with its user. Unlike conventional systems that rely on rules and configurations set during development, generative interfaces leverage machine learning and user data to generate real-time responses. This not only makes the experience dynamic but also inherently human-centered. 

For instance, a digital assistant that supports a knowledge worker doesn’t just answer questions—it understands the context of the work, anticipates upcoming needs, and interacts in a way that aligns with the user’s goals. Generative interfaces are proactive and responsive, driven by the understanding that user needs can change from moment to moment. 

 

Envisioning the Future 

Generative interfaces hold the promise of reshaping not just individual applications, but entire categories of digital interaction—from productivity tools to entertainment platforms. Imagine entertainment systems that automatically adjust content suggestions based on your mood, or collaboration platforms that adapt their layouts and tools depending on whether you are brainstorming or executing a task. 

This level of adaptation depends on a continuous stream of user data, which is why data privacy and security considerations must be built into every aspect of the system, from data collection and storage to processing and output generation. Without control of the experience, you risk low-quality outputs that can do more harm than good.  

As organizations deploy generative interfaces, robust governance frameworks become essential for managing risks and ensuring responsible AI use. 

 

Embracing Generative Interfaces

The shift towards generative interfaces is a step towards making technology more human-centric. As we embrace these adaptive designs, we create an opportunity to redefine our digital experiences, making them more intuitive, enjoyable, and impactful. At Perficient, we are pushing the boundaries of how technology can adapt to users rather than forcing users to adapt to technology. 

The impact of these interfaces goes beyond just convenience; they are capable of crafting meaningful digital experiences that feel personal and fulfilling. As generative AI continues to advance, I envision a future where technology fades into the background, seamlessly blending into our lives and intuitively enhancing everything from work to leisure. 

Multiclass Text Classification Using LLM (MTC-LLM): A Comprehensive Guide
Perficient Blogs, November 20, 2024
https://blogs.perficient.com/2024/11/20/multiclass-text-classification-using-llm-mtc-llm-a-comprehensive-guide/

by Luis Pacheco and Uday Yallapragada

Introduction to Multiclass Text Classification with LLMs

Multiclass text classification (MTC) is a natural language processing (NLP) task in which each text is assigned to one of several predefined categories or classes. Traditional approaches rely on training machine learning models, requiring labeled data and iterative fine-tuning. However, with the advent of large language models (LLMs), this task can now be approached differently. Instead of building and training a custom model, we can utilize pre-trained LLMs to classify text using carefully designed prompts, allowing rapid deployment with minimal data requirements and enabling flexibility to adjust classes without retraining. 

Approaches for MTC-LLM 

In MTC-LLM, we generally have two main approaches for utilizing LLMs to achieve classification. 

Single Classifier with a Multi-Class Prompt 

Using a single LLM prompt for multi-class text classification involves providing a single, comprehensive prompt that instructs the model on all possible classes, expecting it to classify the text into one of these categories. This approach is simple and straightforward, as it requires only one prompt, making implementation fast and computationally efficient. It also reduces costs, as each classification requires just one LLM call, saving on both usage costs and processing time. 

However, this approach has notable limitations. When classes are similar, the model may struggle to make precise distinctions, reducing accuracy in nuanced tasks. Additionally, handling all categories within a single prompt can lead to lengthy and complex instructions, which may introduce ambiguity and diminish the model’s reliability. Another critical drawback is the approach’s inability to detect hierarchical relationships within a taxonomy; without recognizing these layers, the model may miss important contextual distinctions between classes that depend on hierarchical categorization. 
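To make the contrast with the hierarchical approach below concrete, here is a minimal sketch of what a single multi-class prompt could look like. It is illustrative only; the six labels are taken from the airline taxonomy described later in this post, and the exact wording is not a prescribed template.

```python
# Illustrative single-prompt, multi-class classifier (not a prescribed template).
# The six labels mirror the airline taxonomy described later in this post.
MULTI_CLASS_PROMPT = """\
You are an AI assistant that classifies customer emails for an airline.
Read the email below and assign exactly one of the following intents:
- GENERAL_QUERIES
- BOOKING_ISSUES
- CUSTOMER_COMPLAINTS
- REFUND_REQUESTS
- SPECIAL_ASSISTANCE_REQUESTS
- LOST_AND_FOUND_INQUIRIES

Respond only with JSON in the form {{"intent": "<LABEL>", "reasoning": "<one sentence>"}}.

Email:
{email_text}
"""

def build_multiclass_prompt(email_text: str) -> str:
    # The doubled braces above keep the JSON example literal when str.format is applied.
    return MULTI_CLASS_PROMPT.format(email_text=email_text)
```

Because every distinction is packed into one instruction, any ambiguity between labels has to be resolved in a single LLM call, which is exactly the limitation the hierarchical approach below addresses.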

Hierarchical Sequence of Binary Classifiers 

The hierarchical sequence of binary classifiers approach structures classification as a decision tree, where each node represents a binary decision point. Starting from the top node, the model proceeds through a series of binary classifications, with each LLM call determining whether the text belongs to a specific class. This process continues down the hierarchy until a final classification is achieved. 

This method provides high accuracy since each binary decision allows the model to make precise, focused choices, which is particularly valuable for distinguishing among nuanced classes. It is also highly adaptable to complex hierarchies, accommodating cases where broad classes may require further subclass distinctions for an accurate classification. 

However, this approach comes with increased costs and latency, as multiple LLM calls are needed to reach a final classification, making it more expensive and time-consuming. Additionally, managing this approach requires structuring and maintaining numerous prompts and class definitions, adding to its complexity. For use cases where accuracy is prioritized over cost—such as in high-stakes applications like customer service—this hierarchical method is generally the recommended approach. 

Example Use Case: Intent Detection for Airline Customer Service 

Let’s consider an airline company using an automated system to respond to customer emails. The goal is to detect the intent behind each email accurately, enabling the system to route the message to the appropriate department or generate a relevant response. This system leverages a hierarchical sequence of binary classifiers, providing a structured approach to intent detection. At each level of the hierarchy, binary classifiers assess whether a specific intent is present, progressively narrowing down the scope of inquiry to arrive at a precise classification. 

 High-Level Intent Classification 

At the first stage of the hierarchy, the system categorizes emails into high-level intents to streamline processing and ensure accurate responses. These high-level intents include: 

General Queries – This intent captures broad, information-seeking emails unrelated to specific complaints or actions. These emails are generally routed to informational workflows or knowledge bases, allowing for automated responses with the required details.  

Booking Issues – Emails under this intent are related to the booking process or flight details. These emails are generally routed to booking support workflows, where sub-classification helps further refine the action required, such as new bookings, modifications, or cancellations. 

Customer Complaints – This category identifies emails expressing dissatisfaction or grievances. These emails are prioritized for customer service escalation, ensuring timely resolution and acknowledgment. 

Refund Requests – This category is specific to emails where customers request refunds for canceled flights, overcharges, or other issues. These emails are routed to the refund processing team, where workflows validate the claim and initiate the refund process. 

Special Assistance Requests – Emails in this category pertain to special accommodations or requests from passengers. These are routed to workflows that handle special services and ensure the requests are appropriately addressed. 

Lost and Found Inquiries – This intent captures emails related to lost items or baggage issues. These emails are routed to the airline’s lost and found or baggage resolution teams. 

Hierarchical Sub-Classification 

Once the high-level intent is identified, a second layer of binary classifiers operates within each category to refine the classification further. For example: 

Booking Issues Sub-Classifiers 

  •    New Bookings 
  •   Modifications to Existing Bookings   
  •    Cancellations   

Customer Complaints Sub-Classifiers  

  •    Flight Delays   
  •    Billing Issues   
  •    Service Quality   

Refund Requests Sub-Classifiers 

  •    Flight Cancellations   
  •    Baggage Fees   
  •    Duplicate Charges   

Special Assistance Requests Sub-Classifiers 

  •    Mobility Assistance   
  •    Dietary Preferences   
  •    Family Travel Needs   

Lost and Found Sub-Classifiers  

  •    Lost Items in Cabin   
  •    Missing Baggage   
  •    Items Lost at the Airport   

Benefits of this Approach 

 Scalability – The hierarchical design enables seamless addition of new intents or sub-intents as customer needs evolve, without disrupting the existing classification framework. 

Efficiency – By filtering out irrelevant categories at each stage, the system minimizes computational overhead and ensures that only relevant workflows are triggered for each email. 

Improved Accuracy – Binary classification simplifies the decision-making process, leading to higher precision and recall compared to a flat multiclass classifier. 

Enhanced Customer Experience – Automated responses tailored to specific intents ensure quicker resolutions and more accurate handling of customer inquiries, enhancing overall satisfaction. 

Cost-Effectiveness – Automating intent detection reduces reliance on human intervention for routine tasks, freeing up resources for more complex customer service needs. 

By categorizing emails into high-level intents like general queries, booking issues, complaints, refunds, special assistance requests, and lost and found inquiries, this automated system ensures efficient routing and resolution. Hierarchical sub-classification adds an extra layer of precision, enabling the airline to deliver fast, accurate, and customer-centric responses while optimizing operational efficiency. 

The table below is a representation of the complete taxonomy of the intent detection system organized into primary and secondary intents. This taxonomy enables the chatbot to understand and respond more accurately to customer intents, from broad categories down to specific, actionable concerns. Each level helps direct the inquiry to the appropriate team or resource for faster, more effective resolution. 

 

Level  Category  Sub-Category 
High-Level Intent  General Queries    
Sub-Intent  General Queries  Baggage Policy 
Sub-Intent  General Queries  Frequent Flyer Program 
Sub-Intent  General Queries  Travel with Pets 
High-Level Intent  Booking Issues    
Sub-Intent  Booking Issues  New Bookings 
Sub-Intent  Booking Issues  Modifications to Existing Bookings 
Sub-Intent  Booking Issues  Cancellations 
High-Level Intent  Customer Complaints    
Sub-Intent  Customer Complaints  Flight Delays 
Sub-Intent  Customer Complaints  Billing Issues 
Sub-Intent  Customer Complaints  Service Quality 
High-Level Intent  Refund Requests    
Sub-Intent  Refund Requests  Flight Cancellations 
Sub-Intent  Refund Requests  Baggage Fees 
Sub-Intent  Refund Requests  Duplicate Charges 
High-Level Intent  Special Assistance Requests    
Sub-Intent  Special Assistance Requests  Mobility Assistance 
Sub-Intent  Special Assistance Requests  Dietary Preferences 
Sub-Intent  Special Assistance Requests  Family Travel Needs 
High-Level Intent  Lost and Found Inquiries    
Sub-Intent  Lost and Found Inquiries  Lost Items in Cabin 
Sub-Intent  Lost and Found Inquiries  Missing Baggage 
Sub-Intent  Lost and Found Inquiries  Items Lost at the Airport 

 

The diagram below provides a depiction of this architecture. 

 

 

[Diagram: MTC-LLM hierarchical intent detection architecture]

Prompt Structure for a Binary Classifier 

Here’s a sample structure for a binary classifier prompt, where the LLM determines if a customer message is related to a Booking Inquiry. 

You are an AI language model tasked with classifying whether a customer's message to the Acme airline company is a "BOOKING INQUIRY."  

Definition: 

A "BOOKING INQUIRY" is a message that directly involves: 

Booking a flight: Questions or assistance requests about reserving a new flight. 
Modifying a reservation: Any request to change an existing booking, such as altering dates, times, destinations, or passenger details. 
Managing a reservation: Tasks like seat selection, cancellations, refunds, or upgrading class, which are tied to the customer's reservation. 
Resolving issues related to booking: Problems like errors in the booking process, confirmation issues, or requests for help with travel-related arrangements. 

Messages must demonstrate a clear and specific relationship to these areas to qualify as "BOOKING INQUIRY." General questions about unrelated travel aspects (e.g., baggage fees, flight status, or policies) are classified as "NOT A BOOKING INQUIRY." 

Instructions (Chain-of-Thought Process): 

For each customer message, follow this reasoning process: 

Step 1: Understand the Context - Read the message carefully. If the message is in a language other than English, translate it to English first for proper analysis. 
Step 2: Identify Booking-Related Keywords or Phrases - Look for keywords or phrases related to booking (e.g., "book a flight," "cancel reservation," "change my seat"). Determine if the message is directly addressing the reservation process or related issues. 
Step 3: Match to Definition - Compare the content of the message to the definition of "BOOKING INQUIRY." Determine if it fits one of the following categories: 
Booking a flight 
Modifying an existing reservation 
Managing or resolving booking-related issues 
Step 4: Evaluate Confidence Level - Decide if the message aligns strongly with the definition and the criteria for "BOOKING INQUIRY." If there is ambiguity or insufficient information classify it as "NOT A BOOKING INQUIRY." 
Step 5: Provide a Clear Explanation - Based on your analysis, explain your decision in step-by-step reasoning, ensuring the classification is well-justified. 

Examples: 

Positive Examples: 

Input Message - "I’d like to change my seat for my flight next week." 
Decision: true 
Reasoning: The message explicitly mentions "change my seat," which is directly related to modifying a reservation. It aligns with the definition of "BOOKING INQUIRY" as it involves managing a booking. 

Input Message - "Can I cancel my reservation and get a refund?" 
Decision: true 
Reasoning: The message includes "cancel my reservation" and "get a refund," which are part of managing an existing booking. This request is a clear match with the definition of "BOOKING INQUIRY." 

Negative Examples: 

Input Message: "How much does it cost to add extra baggage?" 
Decision: false 
Reasoning: The message asks about baggage costs, which relates to general travel policies rather than reservations or bookings. There is no indication of booking, modifying, or managing a reservation. 

Input Message: "What’s the delay on flight AA123?" 
Decision: false 
Reasoning: The message focuses on the status of a flight, not the reservation or booking process. It does not meet the definition of "BOOKING INQUIRY." 

Customer message to classify: {input_text}

Output: Provide your classification output in the following JSON format:
{
  "decision": true/false,
  "reasoning": "Step-by-step reasoning for the decision."
}

 

 

Example Code for Binary Classifier Using boto3 and Bedrock 

In this section, we are providing a Python script that implements hierarchical intent detection on user messages by interfacing with a language model (LLM) via AWS Bedrock runtime. The script is designed for flexibility and can be customized to work with other LLM frameworks.

This module is part of an automated email processing system designed to analyze customer messages, detect their intent, and generate structured responses based on the analysis. The system employs a large language model API to perform Natural Language Processing (NLP), classifying emails into primary intents such as “General Queries,” “Booking Issues,” or “Customer Complaints.”

```python 

import json 
import boto3 
from pathlib import Path 
from typing import List 

def get_prompt(intent: str) -> str: 

    """ 
    Retrieve the prompt template for a given intent from the 'prompts' directory. 
    Assumes that prompt files are stored in a './prompts/' directory relative to this file, 
    and that the filenames are in the format '{INTENT}-prompt.txt', e.g., 'GENERAL_QUERIES-prompt.txt'. 

    Parameters: 
        intent (str): The intent for which to retrieve the prompt template. 
 
    Returns: 
        str: The content of the prompt template file corresponding to the specified intent. 
    """ 

    # Determine the path to the 'prompts' directory relative to this file. 
    project_root = Path(__file__).parent 
    full_path = project_root / "prompts" 

 
    # Open and read the prompt file for the specified intent. 
    with open(full_path / f"{intent}-prompt.txt") as file: 
        prompt = file.read() 

    return prompt 

 

def intent_detection(message: str, decision_list: List[str]) -> str: 

    """ 
    Recursively detects the intent of a message by querying an LLM. 
    This function iterates over a list of intents, formats a prompt for each, 
    and queries the LLM to determine if the message matches the intent. 
    If a match is found, it may recursively check for more specific sub-intents.  

    Parameters: 
        message (str): The user's message for which to detect the intent. 
        decision_list (List[str]): A list of intent names to evaluate. 

    Returns: 
        str: The detected intent name, or 'UNKNOWN' if no intent is matched. 
    """ 

    # Create a client for AWS Bedrock runtime to interact with the LLM. 
    client = boto3.client("bedrock-runtime", region_name="us-east-1") 

    for intent in decision_list: 

        # Retrieve the prompt template and substitute the user's message.
        # str.replace is used instead of str.format so that literal JSON braces
        # in the prompt template (e.g., the output-format example) need no escaping.
        prompt_template = get_prompt(intent) 
        prompt = prompt_template.replace("{input_text}", message) 


        # Construct the request body for the LLM API call. 
        body = json.dumps( 
            { 
                "anthropic_version": "bedrock-2023-05-31", 
                "max_tokens": 4096, 
                "temperature": 0.0, 
                "messages": [ 
                    { 
                        "role": "user", 
                        "content": [ 
                            {"type": "text", "text": prompt} 
                        ] 
                    } 
                ] 
            } 
        ) 

        # Invoke the LLM model with the constructed body. 
        raw_response = client.invoke_model( 
            modelId="anthropic.claude-3-5-sonet-20240620-v1:0", 
            body=body 
        ) 

        # Read and parse the response from the LLM. 
        response = raw_response.get("body").read() 
        response_body = json.loads(response) 
        llm_text_response = response_body.get("content")[0].get("text") 

        # Parse the LLM's text response to JSON. 
        llm_response_json = json.loads(llm_text_response) 

        # Check if the LLM decided that the message matches the current intent. 
        if llm_response_json.get("decision", False): 
            transitional_intent = intent 
            break  # Exit the loop as we've found a matching intent. 
        else: 
            # If not matched, set the transitional intent to 'UNKNOWN'. 
            transitional_intent = "UNKNOWN" 

 
    # Define the root intents that may have more specific sub-intents. 
    root_intents = ["GENERAL_QUERIES", "BOOKING_ISSUES", "CUSTOMER_COMPLAINTS"] 

    # If a matching root intent is found, recursively check for more specific intents. 
    if transitional_intent in root_intents: 

        # Mapping of root intents to their related sub-intents. 
        intent_definition = { 
            "GENERAL_QUERIES_related_intents": [ 
                "DESTINATION_INFORMATION", 
                "LOYALTY_PROGRAM_DETAILS", 
                "FLIGHT_SCHEDULES", 
                "AIRLINE_POLICIES", 
                "CHECK_IN_PROCEDURES", 
                "IN_FLIGHT_SERVICES", 
                "CANCELLATION_POLICY" 
            ], 

            "BOOKING_ISSUES_related_intents": [ 
                "FLIGHT_CHANGE", 
                "SEAT_SELECTION", 
                "BAGGAGE" 
            ], 

            "CUSTOMER_COMPLAINTS_related_intents": [ 
                "DELAY", 
                "SERVICE_DISSATISFACTION", 
                "SAFETY_CONCERNS" 
            ] 
        } 

        # Recursively call intent_detection with the related sub-intents. 
        return intent_detection( 
            message, 
            intent_definition.get(f"{transitional_intent}_related_intents") 
        ) 

    else: 
        # Return the detected intent or 'UNKNOWN' if none matched. 
        return transitional_intent 
 

def main(message: str) -> str: 

    """ 
    Main function to initiate intent detection on a user's message. 
    Parameters: 
        message (str): The user's message for which to detect the intent.  
    Returns: 
        str: The detected intent name, or 'UNKNOWN' if no intent is matched. 
    """ 

    # Start intent detection with the root intents. 

    return intent_detection( 
        message=message, 
        decision_list=[ 
            "GENERAL_QUERIES", 
            "BOOKING_ISSUES", 
            "CUSTOMER_COMPLAINTS" 
        ] 
    ) 

if __name__ == "__main__": 
    message = """\ 
Hello, 
I'm planning to travel next month and wanted to ask about your airline's policies. Could you please provide information on: 
Your refund and cancellation policies. 
Rules regarding carrying liquids or other restricted items. 
Any COVID-19 safety measures still in place. 
Looking forward to your response. 
    """ 
    print(main(message=message))
```

 

Evaluation Guidelines 

To comprehensively evaluate the performance of a hierarchical sequence of binary classifiers for multiclass text classification using LLMs, a well-constructed ground truth dataset is critical. This dataset should be meticulously designed to serve multiple purposes, ensuring both the overall system and individual classifiers are assessed accurately. 

Dataset Design Considerations 

  • Balanced Dataset for Overall Evaluation: The ground truth dataset must encompass a balanced representation of all intent categories to evaluate the system holistically. This enables the calculation of critical overall metrics such as accuracy, macro-precision, macro-recall, and micro-precision. A balanced dataset ensures that no specific category disproportionately influences these metrics, providing a fair measure of the system’s performance across all intents.
  • Per-Classifier Evaluation: Each binary classifier in the hierarchy should also be evaluated individually. To achieve this, the dataset must contain balanced positive and negative samples for each classifier. This balance is essential to calculate metrics such as accuracy, precision, recall, and F1-score for each individual classifier, enabling targeted performance analysis and iterative improvements at every level of the hierarchy.
  • Negative Sample Creation: Designing negative samples is a critical aspect of the dataset preparation process. Negative samples should be created using common sense principles to reflect real-world scenarios accurately: 
    • Diversity: Negative samples should be diverse to simulate various input conditions, preventing classifiers from overfitting to narrow definitions of “positive” and “negative” examples. 
    • Relevance for Lower-Level Classifiers: For classifiers deeper in the hierarchy, negative samples need not include examples from unrelated categories. For instance, in a “Flight Change” classifier, negative samples can exclude intents related to “Safety Concerns” or “In-Flight Entertainment.” This specificity helps avoid unnecessary complexity and confusion, focusing the classifier on its immediate decision boundary. 

Metrics for Evaluation 

  • Overall System Metrics: 
    • Accuracy: The ratio of correctly classified samples to total samples, indicating the system’s general performance. 
    • Macro and Micro Precision & Recall: Macro metrics weigh each class equally, providing insights into system performance for underrepresented categories. Micro metrics, on the other hand, weigh classes proportionally to their sample sizes, offering a perspective on system performance for frequently occurring categories. A short sketch after this list shows how these metrics can be computed. 
  • Classifier-Level Metrics: 
    • Each binary classifier must be evaluated independently using accuracy, precision, recall, and F1-score. These metrics help pinpoint weaknesses in individual classifiers, which can then be addressed through retraining, hyperparameter tuning, or data augmentation. 
  • Cost per Classification: 
    • Tracking the computational or financial cost per classification is vital, especially in scenarios where resource efficiency is a priority. This metric helps balance the trade-off between model performance and operational budget constraints. 
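As an illustration of the overall-system metrics above, the sketch below uses scikit-learn to compute accuracy along with macro and micro precision and recall from lists of ground-truth and predicted intents. The labels and values are placeholders; any consistent label set produced by the hierarchy works the same way.

```python
# Hypothetical ground-truth and predicted intents for a small evaluation set.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = ["BOOKING_ISSUES", "GENERAL_QUERIES", "REFUND_REQUESTS", "CUSTOMER_COMPLAINTS"]
y_pred = ["BOOKING_ISSUES", "GENERAL_QUERIES", "CUSTOMER_COMPLAINTS", "CUSTOMER_COMPLAINTS"]

print("accuracy        :", accuracy_score(y_true, y_pred))
# Macro averages weigh every class equally, highlighting underrepresented intents.
print("macro precision :", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("macro recall    :", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("macro F1        :", f1_score(y_true, y_pred, average="macro", zero_division=0))
# Micro averages weigh classes by frequency, reflecting performance on common intents.
print("micro precision :", precision_score(y_true, y_pred, average="micro", zero_division=0))
print("micro recall    :", recall_score(y_true, y_pred, average="micro", zero_division=0))
```

Per-classifier metrics can be computed the same way by treating each binary decision's true/false labels as y_true and y_pred for that node of the hierarchy.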

Additional Considerations 

  • Dataset Size:  The dataset should be large enough to capture variations in intent expressions while ensuring each classifier receives sufficient positive and negative samples for robust training and evaluation. 
  • Data Augmentation: Techniques such as paraphrasing, synonym replacement, or noise injection can be employed to expand the dataset and improve classifier generalization. 
  • Cross-Validation:  Employing techniques like k-fold cross-validation can ensure that the evaluation metrics are not biased by a specific train-test split, providing a more reliable assessment of the system’s performance. 
  • Real-World Testing:  In addition to ground truth datasets, testing the system on real-world, unstructured data can reveal gaps in performance and help fine-tune classifiers to handle practical scenarios effectively. 

By adhering to these principles, the evaluation process will yield a thorough understanding of both the end-to-end system’s performance and the individual strengths and weaknesses of each classifier, guiding data-driven refinements and ensuring robust, scalable deployment. 

Additional Best Practices for Multiclass Text Classification Using LLMs 

Prompt Caching 

Prompt caching is a powerful technique for improving efficiency and reducing latency in applications with repeated queries or predictable user interactions. By caching prompts and their corresponding LLM-generated outputs, systems can avoid redundant API calls, thereby improving response times and lowering operational costs. 

Implementation Across Popular LLM Suites 
  • Anthropic: In Anthropic’s models, prompt caching is done by marking specific parts of your prompt—such as tool definitions, system instructions, or lengthy context—with the cache_control parameter in your API requests. For example, you might include the entire text of a book in your prompt and cache it, allowing you to ask multiple questions about the text without reprocessing it each time. To enable this feature, include the header anthropic-beta: prompt-caching-2024-07-31 in your API calls, as prompt caching is currently in beta. By structuring your prompts with static content at the beginning and dynamic, user-specific content at the end, and by strategically marking cacheable sections, you can optimize performance, reduce latency, and lower operational costs when working with Anthropic’s language models. A minimal sketch after this list illustrates the pattern. 
  • ChatGPT (OpenAI): To implement OpenAI’s Prompt Caching and optimize your application’s performance, structure your prompts so that static or repetitive content—like system prompts and common instructions—is placed at the beginning, while dynamic, user-specific information is appended at the end. This setup leverages exact prefix matching, increasing the likelihood of cache hits for prompts longer than 1,024 tokens. When the prefix of a prompt matches a cached entry, the system reuses the cached processing results, reducing latency by up to 80% and cutting costs by 50% for lengthy prompts. The caching mechanism operates automatically, requiring no additional code changes, and is specific to your organization to maintain data privacy. Cached prompts remain active for 5 to 10 minutes of inactivity and can persist up to an hour during off-peak periods. By following these implementation strategies, you can enhance API efficiency and reduce operational costs when interacting with OpenAI’s language models. 
  • Gemini (Google): Context caching in the Gemini API enables you to reduce processing time and costs by caching large input tokens that are reused across multiple requests. To implement this, you first upload your content (such as large documents or files) using the Files API. Then, you create a cache with a specified Time to Live (TTL) using the CachedContent.create() method, which stores the tokenized content for a duration you choose. When generating responses, you construct a GenerativeModel that references this cached content, allowing the model to access the cached tokens without reprocessing them. This is particularly effective for applications like chatbots with extensive system instructions or repetitive analysis tasks, as it minimizes redundant token processing and optimizes overall performance. 
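As a concrete illustration of the Anthropic flow described above, the sketch below marks a long, static system instruction as cacheable with cache_control and sends the beta header mentioned earlier. It assumes the official anthropic Python SDK, an API key in the environment, and a placeholder model name; adapt all three to your setup.

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

LONG_STATIC_INSTRUCTIONS = "..."  # e.g., the full classifier definition, rules, and examples

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # placeholder; use the model available to you
    max_tokens=256,
    # Static content goes first and is marked cacheable so repeated calls can reuse it.
    system=[
        {
            "type": "text",
            "text": LONG_STATIC_INSTRUCTIONS,
            "cache_control": {"type": "ephemeral"},
        }
    ],
    # Dynamic, user-specific content goes last and is not cached.
    messages=[{"role": "user", "content": "I'd like to change my seat for my flight next week."}],
    # Prompt caching was in beta at the time of writing, hence the beta header.
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)
print(response.content[0].text)
```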
Best Practices for Implementing Caching with Large Language Models (LLMs):
  • Structure Prompts Effectively 
    • Static Content First: Place static or repetitive content—such as system prompts, instructions, context, or examples—at the beginning of your prompt. 
    • Dynamic Content Last: Append variable or user-specific information at the end. This increases the likelihood of cache hits due to exact prefix matching. 
  • Leverage Exact Prefix Matching 
    • Ensure that the cached sections of your prompts are identical across requests. Even minor differences can prevent cache hits. 
    • Use consistent formatting, wording, and structure for the static parts of your prompts. 
  • Utilize Caching for Long Prompts
    • Caching is most beneficial for prompts that exceed certain token thresholds (e.g., 1,024 tokens). 
    • For lengthy prompts with repetitive elements, caching can significantly reduce latency and cost. 
  • Mark Cacheable Sections Appropriately 
    • Use available API features (such as cache_control parameters or specific headers) to designate cacheable sections in your prompts. 
    • Clearly define cache boundaries to optimize caching efficiency. 
  • Set Appropriate Time to Live (TTL)
    • Adjust the TTL based on how frequently the cached content is accessed. 
    • Longer TTLs are advantageous for content that is reused often, while shorter TTLs prevent stale data in dynamic environments. 
  • Be Mindful of Model and API Constraints
    • Ensure that you’re using models that support caching features. 
    • Be aware of minimum token counts and other limitations specific to the LLM you’re using. 
  • Understand Pricing and Cost Implications: 
    • Familiarize yourself with the pricing model for caching, including any costs for cache writes, reads, and storage duration. 
    • Balance the cost of caching against the benefits of reduced processing time and lower per-request costs. 
  • Handle Cache Invalidation and Updates: 
    • Implement mechanisms to update or invalidate caches when the underlying content changes. 
    • Be prepared to handle cache misses gracefully by processing the full prompt when necessary. 

Temperature Settings

The temperature parameter is critical in controlling the randomness and creativity of an LLM’s output. 

Low Temperature (e.g., 0.2) 

A low temperature setting makes the model’s outputs more deterministic by prioritizing higher-probability tokens. This is ideal for: 

  • Classification-oriented tasks requiring consistent responses. 
  • Scenarios where factual accuracy is critical. 
  • Narrow decision boundaries, such as binary classifiers in the hierarchy. 

High Temperature (e.g., 0.8–1.0) 

Higher temperature settings introduce more randomness, making the model explore diverse possibilities. This is useful for: 

  • Generating creative text, brainstorming ideas, or handling ambiguous inputs. 
  • Scenarios where the intent is not well-defined and may benefit from exploratory responses. 

Best Practices for Multiclass Hierarchies 
  • Use low temperatures for top-level binary classifiers where intent boundaries are clear. 
  • Experiment with slightly higher temperatures for ambiguous or nuanced intent categories to capture edge cases during the evaluation phase.

Adding Reasoning to the Prompt 

Encouraging LLMs to reason step-by-step improves their ability to handle ambiguous or complex cases. This can be achieved by explicitly prompting the model to break down the classification process. For instance: 

  • Use phrases like “First, analyze the input for relevant keywords. Then, decide the most appropriate intent based on the following rules.” 
  • This approach helps mitigate errors in cases where multiple intents may appear similar by providing a logical framework for decision-making. 

Prompt Optimization with Meta-Prompting 

Meta-prompts are prompts about prompts. They guide the LLM to follow specific rules or adhere to structured formats for better interpretability and accuracy. Examples include: 

  • Defining constraints, such as “Respond only with ‘Yes’ or ‘No.'” 
  • Setting explicit rules, such as “If the input mentions scheduling changes, classify as ‘Flight Change.'” 
  • Clarifying ambiguous instructions, such as “If unsure, classify as ‘Miscellaneous’ and provide an explanation.” 

Fine-Tuning Other Key LLM Parameters 

  • Max Tokens – Control the length of the output to avoid excessive verbosity or truncation. For classification tasks, limit the tokens to the minimal response necessary (e.g., “Yes,” “No,” or a concise class label). 
  • Top-p Sampling (Nucleus Sampling) – Instead of selecting tokens based on temperature alone, top-p sampling chooses from a subset of tokens whose cumulative probability adds up to a specified threshold. For deterministic tasks, set top-p close to 0.9 to balance precision and diversity. 
  • Stop Sequences – Define stop sequences to terminate outputs gracefully, ensuring outputs do not contain unnecessary or irrelevant continuations. The sketch below shows how these parameters appear together in a request. 
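To show how these parameters fit together, here is a minimal request body in the style of the Bedrock example earlier in this post, with max_tokens, temperature, top_p, and stop_sequences set for a terse classification response. The specific values are illustrative starting points, not recommendations.

```python
import json

# Illustrative Anthropic Messages API body for Bedrock, mirroring the earlier
# intent-detection example but with tighter output controls.
body = json.dumps(
    {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,                  # keep classification responses short
        "temperature": 0.0,                 # deterministic choices for clear intent boundaries
        "top_p": 0.9,                       # nucleus sampling threshold
        "stop_sequences": ["\n\nHuman:"],   # cut off any unnecessary continuation
        "messages": [
            {
                "role": "user",
                "content": [{"type": "text", "text": "<classification prompt goes here>"}],
            }
        ],
    }
)
# Pass `body` to client.invoke_model(...) exactly as in the earlier example.
```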

Iterative Prompt Refinement 

Iterative prompt refinement is a crucial process for continuously improving the performance of LLMs in hierarchical multiclass classification tasks. By systematically analyzing errors, refining prompts, and validating changes, you can ensure the system evolves to handle complex and ambiguous scenarios more effectively. A structured “prompt refinement pipeline” can greatly enhance this process by combining meta-prompts and ground truth datasets for evaluation. 

The Prompt Refinement Pipeline 

A prompt refinement pipeline is an automated or semi-automated framework that systematically refines, tests, and evaluates prompts. It consists of the following components: 

Meta-Prompt for Refinement 

Use an LLM itself to refine existing prompts by generating more concise, effective, or logically robust alternatives. A meta-prompt asks the model to analyze and improve a given prompt. For example: 

  • Input Meta-Prompt: 
    • “The following prompt is used for a binary classifier in a hierarchical text classification task. Suggest improvements to make it more specific, avoid ambiguity, and handle edge cases better. Also, propose an explanation for why your suggestions improve the prompt. Current prompt: [insert prompt].” 
  • Output: The model may suggest rewording, adding explicit constraints, or including step-by-step reasoning logic. These suggestions can then be iteratively tested. 

Ground Truth Dataset for Evaluation 

Use a ground truth dataset to validate refined prompts against pre-labeled examples. This ensures that improvements suggested by the meta-prompt are objectively tested. Key steps include: 

  • Evaluate the refined prompt on classification accuracy, precision, recall, and F1-score using the ground truth dataset. 
  • Compare these metrics against the original prompt to ensure genuine improvement. 
  • Use misclassified examples to further identify weaknesses and refine prompts iteratively. 

Automated Testing and Feedback Loop 

Implement an automated system to: 

  • Test the refined prompt on a validation set. 
  • Log performance metrics, including correct classifications, errors, and cases where ambiguity persists. 
  • Highlight specific prompts or input types that consistently underperform for further manual refinement. 

Version Control and Experimentation 

Maintain a version-controlled repository for prompts. Track: 

  • Changes made during each refinement cycle. 
  • Associated performance metrics. 
  • Rationale behind prompt modifications. This documentation provides a knowledge base for future refinements and prevents regressions. 

Benefits of a Prompt Refinement Pipeline 
  • Systematic Improvement  – A structured approach ensures refinements are not ad hoc but are guided by data-driven insights and measurable results. 
  • Scalability – By automating key aspects of the refinement process, the pipeline scales effectively with larger datasets and more complex classification hierarchies. 
  • Model-Agnostic – The pipeline can be used with various LLMs, such as Anthropic’s models, OpenAI’s ChatGPT, or Google Gemini. This flexibility enables organizations to adopt or switch LLM providers without losing the benefits of the refinement process. 
  • Increased Robustness – Leveraging ground truth datasets ensures that prompts are evaluated on real-world examples, helping the model handle diverse and ambiguous scenarios with greater reliability. 
  • Meta-Prompt Benefits – Meta-prompts provide an efficient mechanism to leverage LLM capabilities for self-improvement. By incorporating LLM-generated suggestions, the system continuously evolves in response to new challenges or requirements. 
  • Error Analysis – The feedback loop enables a focused analysis of misclassifications, guiding the creation of targeted prompts that address specific failure cases or edge conditions. 

Iterative Workflow for Prompt Refinement Pipeline 
  • Baseline Testing – Start with an initial prompt and evaluate it on the ground truth dataset. Log performance metrics. 
  • Meta-Prompt Refinement – Use a meta-prompt to generate improved versions of the initial prompt. Select the most promising refinement. 
  • Validation and Comparison – Test the refined prompt on the dataset, comparing results to the baseline. Identify improvements and areas where performance remains suboptimal. 
  • Targeted Refinements – For consistently misclassified samples, manually analyze and refine the prompt further. Re-evaluate until significant performance gains are achieved. 
  • Deployment and Monitoring – Deploy the improved prompt into production and monitor real-world performance. Incorporate newly encountered edge cases into subsequent iterations of the refinement pipeline. A minimal sketch of this loop follows the list. 
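The sketch below outlines one minimal version of this workflow. It relies on two hypothetical helpers that are not defined in this post: evaluate_prompt, which would run a classifier with the given prompt over a labeled dataset and return an accuracy score, and refine_with_meta_prompt, which would send the meta-prompt described above to an LLM and return a revised prompt. Treat it as a skeleton, not a production pipeline.

```python
from typing import Callable, List, Tuple

def refine_prompt_iteratively(
    initial_prompt: str,
    dataset: List[Tuple[str, bool]],  # (message, expected_decision) pairs
    evaluate_prompt: Callable[[str, List[Tuple[str, bool]]], float],  # hypothetical helper
    refine_with_meta_prompt: Callable[[str, float], str],             # hypothetical helper
    max_iterations: int = 5,
    min_improvement: float = 0.01,
) -> Tuple[str, float]:
    """Baseline -> refine -> validate loop; keep a refinement only if it beats the current best."""
    best_prompt = initial_prompt
    best_score = evaluate_prompt(best_prompt, dataset)                # 1. Baseline testing

    for _ in range(max_iterations):
        candidate = refine_with_meta_prompt(best_prompt, best_score)  # 2. Meta-prompt refinement
        score = evaluate_prompt(candidate, dataset)                   # 3. Validation and comparison
        if score >= best_score + min_improvement:
            best_prompt, best_score = candidate, score                # 4. Keep genuine improvements
        else:
            break  # no meaningful gain; hand off for manual, targeted refinement

    return best_prompt, best_score                                    # 5. Deploy and keep monitoring
```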

A prompt refinement pipeline provides a robust framework for systematically improving the performance of LLMs in hierarchical multiclass classification tasks. By combining meta-prompts, ground truth datasets, and automated evaluation, this approach ensures continuous improvement, scalability, and adaptability to new challenges, resulting in a more reliable and efficient classification system. 

Agentic AI: The New Frontier in GenAI
Perficient Blogs, September 27, 2024
https://blogs.perficient.com/2024/09/27/agentic-ai-the-new-frontier-in-genai/

In the rapidly evolving landscape of digital transformation, businesses are constantly seeking innovative ways to enhance their operations and gain a competitive edge. While Generative AI (GenAI) has been the hot topic since OpenAI introduced ChatGPT to the public in November 2022, a new evolution of the technology is emerging that promises to revolutionize how businesses operate: Agentic AI. 

What is Agentic AI? 

Agentic AI represents a fundamental shift in how we approach intelligence within digital systems.  

Unlike the first wave of Generative AI solutions that rely heavily on prompt engineering, agentic AI possesses the ability to make autonomous decisions based on predefined goals, adapting in real time to changing environments. This enables a deeper level of interaction, as agents are able to “think” through the steps of a task in a more structured, planned way. With access to web search, outputs are more researched and comprehensive, transforming both efficiency and innovation potential for businesses. 

Key characteristics of Agentic AI include: 

  •   Autonomy: Ability to perform tasks independently based on predefined goals or dynamically changing circumstances. 
  •  Adaptability: Learns from interactions, outcomes, and feedback to make better decisions in the future. 
  • Proactivity: Not only responds to commands but can anticipate needs, automate tasks, and solve problems proactively. 

As technology evolves at an unprecedented rate, agentic AI is positioned to become the next big thing in tech and business transformation, building upon the foundation laid by generative AI while enhancing automation, resource utilization, scalability, and specialization across various tasks. 

Leveraging Agentic Frameworks 

Central to this transformation is the concept of the Augmented Enterprise, which leverages advanced technologies to amplify human capabilities and business processes. Agentic Frameworks provide a structured approach to integrating autonomous systems and artificial intelligence (AI) into the enterprise. 

Agentic Frameworks refer to the strategic models and methodologies that enable organizations to deploy and manage autonomous agents—software entities that perform tasks on behalf of users or other systems. Use cases include code development, content creation, and more.  

Unlike traditional approaches that require explicit programming for each sequence of tasks, Agentic Frameworks provide the business integrations to the model and allow it to decide what system calls are appropriate to achieve the business goal.  

“The integration of agentic AI through well-designed frameworks marks a pivotal moment in business evolution. It’s not just about automating tasks; it’s about creating intelligent systems that can reason, learn, and adapt alongside human workers, driving innovation and efficiency to new heights.” – Robert Bagley, Director 

Governance and Ethical Considerations 

As we embrace the potential of agentic AI and our AI solutions begin acting on our behalf, developing robust AI strategy and governance frameworks becomes more essential. With the increasing complexity of regulatory environments, Agentic Frameworks must include mechanisms for auditability, compliance, and security, ensuring that the deployment of autonomous agents aligns with legal and ethical standards. 
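
One hypothetical illustration of such a mechanism, assuming Python-style tools like those in the earlier sketch, is an audit wrapper that records every tool invocation an agent makes so its actions can be reviewed for compliance after the fact. The logger configuration and field names below are assumptions for illustration, not a prescribed standard.

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("agent.audit")

def audited(tool_name: str):
    """Decorator that records every invocation of an agent tool for later review."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            audit_logger.info(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "tool": tool_name,
                "arguments": kwargs or list(args),
                # Truncate the result so bulk or sensitive data is not logged verbatim.
                "result_preview": str(result)[:200],
            }))
            return result
        return inner
    return wrap

@audited("create_ticket")
def create_ticket(summary: str) -> str:
    """Illustrative business-system tool; a real deployment would call an actual API."""
    return f"Ticket created: {summary}"

create_ticket(summary="Review Q3 vendor contracts")
```

In practice, an audit trail like this would feed whatever review, retention, and access controls the organization’s governance program requires, rather than a local log.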

“In the new agentic era, the scope of AI governance and building trust should expand from ethical compliance to include procedural compliance. As these systems become more autonomous, they must both operate within ethical boundaries and align with our organizational values. This is where thoughtful governance becomes a competitive advantage.” – Robert Bagley, Director 

To explore how your enterprise can benefit from Agentic Frameworks, implement appropriate governance programs, and become a truly Augmented Enterprise, reach out to Perficient’s team of experts today. Together, we can shape the future of your business in the age of agentic AI. 

 

Create Content Fragment Variations in AEM With GenAI https://blogs.perficient.com/2024/09/24/create-content-fragment-variations-in-aem-with-genai/ https://blogs.perficient.com/2024/09/24/create-content-fragment-variations-in-aem-with-genai/#respond Tue, 24 Sep 2024 10:50:17 +0000 https://blogs.perficient.com/?p=369459

Earlier this year, Adobe introduced new generative AI capabilities in Adobe Experience Manager (AEM). As a Platinum partner of Adobe, Perficient has early adopter features provisioned for our environments. One of the more exciting and relevant features is the ability to use GenAI to generate variations within Content Fragments in AEM.

In this blog, we’ll talk about a sample use-case scenario and the steps involved with this new feature, and show how it can empower marketers and content authors who spend a lot of time in AEM by making their lives easier. 

The Scenario

In this sample use case, we have contributors who write content for a site called WKND Adventures. We’d like to create contributor biographies to deliver a more engaging experience for the end user and increase the chance of content leading to a conversion, such as booking a vacation.  

How to Quickly Create a Content Fragment Variation 

1. Open a Content Fragment for Editing

After logging into the AEM as a Cloud Service authoring environment, head over to a Content Fragment and open it up for editing.

Note: If you don’t see the new editor, try selecting the “Try New Editor” button to bring up the latest interface.

AEM as a Cloud Service Content Fragment Editing

As you can see, we still have the standard editing features, such as associating images, making rich text edits, and publishing.

2. Generate Variations

Select the “Generate Variations” button on the top toolbar, and then a new window opens with the Generative Variations interface as seen in the image below.

AEM as a Cloud Service Generative Variations

What’s important to note here is that this interface is tied to the authoring environment, so any variations that are generated will be brought back into our Content Fragment editor. Although a new prompt can be created from scratch, we’ll start with the Cards option.

Note: More prompt templates will be added after the writing of this blog.

3. Prompt Templates

The Cards option is pre-filled with default helper text that provides guidance on a potential prompt and helps fine-tune what’s being generated. Explaining the intended user interaction in relevant, concise terms improves the generated results, and the generations can be enhanced further by providing an audience from Adobe Target or a CSV file. Specifying a tone of voice also helps shape the variations.

AEMaaCS Generative Variations Prompts

One of our favorite features is the ability to provide a URL for domain knowledge. In this case, we’re going to select a site from Rick Steves on winter escapes as seen in the image below.

AEMaaCS Generative Variations Prompt Url Domain Knowledge

After selecting the appropriate user interaction, tone of voice, temperature, and number of variations, we select the “Generate” button.

4. Choose a Content Fragment Variation

Once the variations are created, we can review the results and then choose one to bring back into our Content Fragment editor.

AEMaaCS Generative Variations Selection

After selecting a variation and giving it a name, we can then export that variation. This will create a new variation of that content fragment in AEM.

AEM as a Cloud Service New Content Fragment

Although this is a simple example, many other prompt templates can be used to generate variations in AEM, such as FAQs, headlines, hero banners, tiles, and more. Additional technical details can be found on Adobe’s GenAI page.
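
Once exported, a variation can also be consumed headlessly. The sketch below, in Python, assumes an AEM Headless GraphQL setup; the endpoint path, the hypothetical contributorByPath query field, the field names, and the variation name all depend on your content fragment models and environment configuration, so treat them as placeholders to adapt rather than a definitive integration.

```python
import requests

# Hypothetical author host; substitute your own AEM as a Cloud Service environment.
AEM_HOST = "https://author-p12345-e67890.adobeaemcloud.com"
# The global GraphQL endpoint path is an assumption and may differ per configuration.
ENDPOINT = f"{AEM_HOST}/content/_cq_graphql/global/endpoint.json"

# `contributorByPath`, the field names, and the "winter-escapes" variation are
# illustrative placeholders tied to a hypothetical contributor content fragment model.
QUERY = """
query GetContributorBio($path: String!) {
  contributorByPath(_path: $path, variation: "winter-escapes") {
    item {
      fullName
      biography { plaintext }
    }
  }
}
"""

def fetch_variation(path: str, token: str) -> dict:
    """Fetch a content fragment variation through AEM's GraphQL endpoint."""
    response = requests.post(
        ENDPOINT,
        json={"query": QUERY, "variables": {"path": path}},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]["contributorByPath"]["item"]

# Example call with placeholder values:
# bio = fetch_variation("/content/dam/wknd/en/contributors/jane-doe", token="<access token>")
```

Many AEM projects prefer persisted queries over posting raw GraphQL from clients; the same idea of requesting a named variation carries over to that approach.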

The Exciting Future of Content Creation

Having a direct integration to generate variations from the authoring environment will certainly speed up content creation and allow authors to create relevant and engaging content with the help of GenAI. We look forward to more features and improvements from Adobe in this space, and to helping customers adopt these technologies to create content effectively and safely and build engaging experiences.

My Experience at the Salesforce Nagpur Ohana Gathering https://blogs.perficient.com/2024/08/28/my-experience-at-the-salesforce-nagpur-ohana-gathering/ https://blogs.perficient.com/2024/08/28/my-experience-at-the-salesforce-nagpur-ohana-gathering/#respond Wed, 28 Aug 2024 06:34:26 +0000 https://blogs.perficient.com/?p=367734

Hello Trailblazers!

Last week, I had the privilege of attending the “Salesforce Nagpur Ohana Gathering,” an inspiring event that united passionate professionals, innovative developers, and curious learners all under one roof. The energy in the room was electrifying, with a shared enthusiasm for collaboration, growth, and the future of Salesforce. This Meetup was more than just a gathering—it was a melting pot of ideas, a place where knowledge was exchanged, and where the spirit of the Salesforce community truly shone.

In this blog post, I’ll be sharing my personal experiences, key takeaways, and insights into the exciting developments and inspiring moments that made this event unforgettable. So, stay tuned for all the details—you won’t want to miss it!

 

Salesforce Nagpur Ohana Gathering:

The “Salesforce Nagpur Ohana Gathering” was an impactful event organized by Central India’s Trailblazer Community Group in collaboration with the Salesforce Developer Group Nagpur.

The gathering featured three highly engaging and informative sessions, each providing valuable insights into various aspects of Salesforce. Attendees had the opportunity to dive deep into new developments, share knowledge, and connect with like-minded professionals.

Let’s take a brief look at each session and the key takeaways that made this event truly remarkable.

1. Gen AI in Salesforce and Salesforce AI Capabilities:

The opening session delved into the transformative potential of Generative AI (GenAI) in Salesforce. The speaker began by explaining that GenAI in Salesforce involves AI models capable of generating new content and insights from existing data. This includes automating tasks such as customer responses, personalized emails, and report generation, all aimed at improving efficiency and delivering personalized experiences.

The session also highlighted how GenAI leverages natural language processing (NLP) and machine learning to interpret data and produce tailored outputs. He demonstrated how businesses can harness these technologies to significantly boost productivity.

The speaker also touched on Salesforce’s AI capabilities and discussed the future of AI, emphasizing the growing role of AI/GenAI in today’s evolving macro-environment and how these advancements will shape the future of business operations.

Img1

Note: The Salesforce AI Specialist Certification has recently been launched. Be sure to get your exam guide from here to stay ahead of the curve in this AI-driven future.

Click here for the Study Material for the Salesforce AI Specialist Certification Exam.

 

2. MINT-1T LMM Concepts:

The second session focused on the groundbreaking MINT-1T LMM (Large Multilingual Model). The speaker introduced MINT-1T as a state-of-the-art language model specifically designed to process and generate text across a variety of languages. As a type of large language model, MINT-1T leverages deep learning architectures—such as transformer models—trained on vast amounts of multilingual text data to understand and generate human language with impressive accuracy.

The speaker provided a deep dive into MINT-1T’s innovative approaches, sharing best practices for implementing and utilizing this technology effectively. 

Below are a few screenshots showcasing some of these examples.

Img2

 

3. Einstein for Developers:

The final session of the event covered “Einstein for Developers,” showcasing how Salesforce empowers developers with AI-driven tools and services. The speaker explained that Einstein for Developers enables developers to build, integrate, and enhance their applications using intelligent, data-driven capabilities provided by Salesforce’s AI technology.

The session also featured a demonstration of “Einstein for Developers” in VS Code, which refers to the integration of Salesforce’s Einstein AI capabilities into Visual Studio Code (VS Code), the preferred code editor for Salesforce developers. This setup allows developers to build and deploy code more efficiently.

Img3

 

Note: For those interested in a deeper dive into using the Einstein for Developers extension in VS Code, this link provides a detailed guide on how to get started.

All three sessions were highly informative, providing a wealth of knowledge and practical insights for everyone in attendance.

 

Quiz Competition, Swag Distribution, and Group Photo:

After the insightful sessions, the event wrapped up with a fun and engaging quiz competition, where attendees tested their knowledge on the topics discussed throughout the day. The quiz was light-hearted yet educational, providing a fun way to reinforce the learning experience. The top winners were rewarded with beautiful swag, adding an extra layer of excitement to the event.

To wrap up the day on a high note, all the Trailblazers gathered together for a celebratory group photo, enthusiastically saying “CHEESE” as they captured the moment of camaraderie and shared success.

Here are some highlights from the event captured in a few photographs.


Conclusion:

Overall, all three sessions were incredibly insightful and engaging. The event provided valuable learning opportunities while fostering a spirit of collaboration and innovation among all attendees. Everyone walked away with new knowledge, ideas, and a renewed excitement for the future of AI in Salesforce.

Happy Reading!

 Life is full of challenges,

but your resilience will help you

overcome them and emerge stronger.

Salesforce Meetup Related Posts:

1. Empowering Connections: Insights from the Salesforce MuleSoft Community Meetup

2. My Experience at Central India Trailblazers Meetup

You Can Also Read :

1. Introduction to the Salesforce Queues – Part 1

2. Mastering Salesforce Queues: A Step-by-Step Guide – Part 2

3. How to Assign Records to Salesforce Queue: A Complete Guide

4. An Introduction to Salesforce CPQ

5. Revolutionizing Customer Engagement: The Salesforce Einstein Chatbot

 

FAQs:

1. What is Gen AI?

Ans: Generative AI is a type of artificial intelligence that is designed to create new, original content based on the data it has been trained on. Instead of just analyzing or classifying data, generative AI models can produce new data that mimics the patterns and structures of the input data. This can include generating text, images, music, videos, and even code.

 

2. What is Salesforce Einstein Prediction Builder?

Ans:  Einstein Prediction Builder enables developers (even with minimal coding knowledge) to create AI models that predict outcomes based on Salesforce data. For example, developers can set up predictions for lead conversion, customer churn, or sales forecasts without needing deep data science expertise.

 

3. What is Salesforce Einstein Bot?

Ans:  Einstein Bot is a powerful feature of Salesforce’s AI suite designed to enhance customer interactions through intelligent automation. It’s part of Salesforce’s suite of AI-driven tools and provides businesses with a way to improve customer service and engagement by leveraging conversational AI.

Click here to know more.
