Automation Articles / Blogs / Perficient (Expert Digital Insights)

Enhance Self-Service Experience with ServiceNow Virtual Agent
https://blogs.perficient.com/2024/10/03/enhance-self-service-experience-with-servicenow-virtual-agent/
Thu, 03 Oct 2024

In today’s world, automation and self-service are all around us. From self-order tablets at restaurants to self-checkout lanes at grocery stores and self-check-in kiosks at airports, the ability to complete tasks without requiring additional human assistance is incredibly valuable, saving both time and resources.

For organizations utilizing ServiceNow as their IT Service Management (ITSM) platform, the ServiceNow Virtual Agent offers a powerful solution to streamline support and enhance the self-service experience for users.

What is the ServiceNow Virtual Agent?

The ServiceNow Virtual Agent is an intelligent conversational chatbot that provides 24/7 automated support. It enables users to resolve common IT service issues, submit new IT incidents/requests, and find information stored in knowledge bases.

Users can quickly get resolutions without waiting for human assistance. By handling routine inquiries and tasks, the Virtual Agent can reduce the volume of calls and lessen the workload of Service Desk agents, allowing them to focus on more complex issues. In other words, the Virtual Agent can act as a tier 1 level support, deflecting mundane tasks from the Service Desk.

Key features and benefits

Out-of-the-box conversation topics

ServiceNow provides out-of-the-box conversation topics that can quickly be tailored to an organization’s existing processes, resulting in immediate business value, such as:

  • Open IT Ticket
  • Password Reset
  • Unlock Account
  • VPN Connectivity
  • Hardware Issues

Natural language understanding

The Virtual Agent comes with pre-built natural language understanding (NLU) models, allowing the Virtual Agent to understand what the user enters into the chat and map it to specific topics, for example:

  • “My account is locked”
  • “I cannot connect to the VPN”
  • “I need a new keyboard”

Custom NLU models can also be created and trained for terminology specific to the organization.
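Under the hood, an NLU model maps a free-text utterance to a topic. A deliberately simplified sketch of that idea, substituting keyword matching for a trained model (the keywords and topic names below are illustrative examples, not actual ServiceNow configuration), might look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TopicRouter {
    // Illustrative keyword-to-topic mapping; a real NLU model uses trained
    // intent classification rather than literal keyword matching.
    private static final Map<String, String> KEYWORD_TO_TOPIC = new LinkedHashMap<>();
    static {
        KEYWORD_TO_TOPIC.put("locked", "Unlock Account");
        KEYWORD_TO_TOPIC.put("password", "Password Reset");
        KEYWORD_TO_TOPIC.put("vpn", "VPN Connectivity");
        KEYWORD_TO_TOPIC.put("keyboard", "Hardware Issues");
    }

    public static String route(String utterance) {
        String lower = utterance.toLowerCase();
        for (Map.Entry<String, String> e : KEYWORD_TO_TOPIC.entrySet()) {
            if (lower.contains(e.getKey())) {
                return e.getValue();
            }
        }
        return "Open IT Ticket"; // fallback topic when no keyword matches
    }

    public static void main(String[] args) {
        System.out.println(route("My account is locked"));        // Unlock Account
        System.out.println(route("I cannot connect to the VPN")); // VPN Connectivity
    }
}
```

A trained NLU model generalizes far beyond literal keywords (handling phrasing like "I'm shut out of my login"), which is exactly why custom model training matters for organization-specific terminology.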

Topic recommendation analysis

The ServiceNow platform has machine learning capabilities that can analyze historical Incident data to identify frequent issues within the organization and recommend new topics for the Virtual Agent.
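The core idea can be sketched with a trivial frequency count over incident short descriptions. This stands in for the platform's actual machine learning models, and the sample data and threshold below are purely illustrative:

```java
import java.util.*;
import java.util.stream.*;

public class TopicSuggester {
    // Illustrative sketch: count recurring incident short descriptions and
    // surface any that exceed a threshold as candidate Virtual Agent topics.
    static List<String> suggestTopics(List<String> shortDescriptions, int minCount) {
        Map<String, Long> counts = shortDescriptions.stream()
                .map(String::toLowerCase)
                .collect(Collectors.groupingBy(s -> s, Collectors.counting()));
        return counts.entrySet().stream()
                .filter(e -> e.getValue() >= minCount)
                .sorted((a, b) -> Long.compare(b.getValue(), a.getValue()))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> incidents = Arrays.asList(
                "printer offline", "printer offline", "printer offline", "vpn drops");
        System.out.println(suggestTopics(incidents, 2)); // [printer offline]
    }
}
```

Real topic recommendation clusters semantically similar descriptions rather than requiring exact matches, but the output is the same kind of artifact: a ranked list of recurring issues worth automating.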

Multi-channel integration

The ServiceNow Virtual Agent can easily be integrated across multiple channels, including:

  • Employee Center/Service Portal
  • Intranet site
  • Microsoft Teams
  • Slack
  • ServiceNow Now Mobile

This ensures users can receive automated support easily, anywhere and anytime.

Transfer to live agent

In scenarios where users cannot resolve their issue with the Virtual Agent, a request can be made to reroute the chat to a live Service Desk agent. The agent can view the user’s chat logs with the Virtual Agent and provide further assistance.

Conversational Analytics

In addition to reporting capabilities available within the ServiceNow platform, the Virtual Agent comes with a built-in Conversational Analytics dashboard that provides insight on user interactions. This lets admins see data on how the Virtual Agent is performing, and allows them to optimize it further.

Example use cases

Below are two examples of how the ServiceNow Virtual Agent can provide users self-service options to resolve common issues, reducing the number of calls and repetitive tasks that the Service Desk receives.

Password reset

Without Virtual Agent: a user calls the Service Desk and talks with an agent because they require instructions on how to reset their password.

With the Virtual Agent: a user initiates a new chat, selects the Password Reset topic, and the Virtual Agent will guide them through the self-service password reset process.

Troubleshooting computer issues

Without Virtual Agent: a user calls the Service Desk and describes an issue they are experiencing on their computer. The Service Desk agent spends time trying to diagnose the issue and provide a solution.

With Virtual Agent: a user initiates a new chat and provides details of a computer issue. The Virtual Agent searches the knowledge base and suggests solutions.

Unlock Efficiency: How Salesforce CPQ’s Renewal and Amend Features Simplify Your Business
https://blogs.perficient.com/2024/10/01/unlock-efficiency-how-salesforce-cpqs-renewal-and-amend-features-simplify-your-business/
Tue, 01 Oct 2024

Imagine running a business where you offer subscription-based products. As your customer base grows, you begin to notice something slipping—renewal deadlines, contract complexities, and your sales team being bogged down with manual updates. Enter Salesforce CPQ (Configure, Price, Quote), a powerful tool designed to help businesses streamline the often-complex process of managing quotes, pricing, and contracts. But that’s not all—Salesforce CPQ’s renewal and amend functionalities are here to make your contract management process seamless and automatic.

Let’s dive into how CPQ works, how it simplifies renewals and amendments, and why it’s a game-changer for any business using subscription models.


What is Salesforce CPQ?

At its core, Salesforce CPQ helps businesses configure their products, set pricing, and generate quotes quickly and accurately. Whether your product comes in different sizes, packages, or configurations, CPQ automates the process of calculating pricing based on your business rules, ensuring everything stays consistent. It also handles complex contracts, helping your sales team focus on selling rather than getting lost in the weeds of paperwork.

Now, imagine adding automation to this process, especially when it comes to renewing contracts or amending existing ones. This is where CPQ truly shines, offering standard functionality that reduces the workload while improving accuracy and customer satisfaction.

The Challenge of Renewals

Picture this: It’s the start of the week, and your inbox is overflowing with reminders—expiring contracts, upcoming renewals, and customer requests for service changes. Each contract has unique pricing, terms, and configurations. Manually tracking them is time-consuming and prone to human error. Missing a renewal date could lead to a loss of revenue or, worse, a dissatisfied customer.

Managing renewals manually can be overwhelming. But with Salesforce CPQ’s renewal functionality, this process is automated. Contracts are renewed at the right time, with minimal intervention from your team. No more worrying about missed deadlines or scrambling to send out renewal quotes. The system handles it for you, transforming what was once a cumbersome task into a smooth, efficient process.

 

How Renewal Functionality Works

Let’s say you have a loyal customer, Sara, whose subscription is nearing its end. In the past, you might have had to manually track her contract, reconfigure the terms, and send her a quote. But now, thanks to Salesforce CPQ’s renewal feature, the system automatically generates a renewal quote in advance, accounting for any updated pricing or discounts.

Your sales team receives a notification and can review the quote before sending it out. Sara, impressed with the efficiency, signs off on the renewal without delay. The entire process is handled smoothly, saving your team hours of manual work and ensuring customer satisfaction. Renewals become a way to strengthen your customer relationships, all while keeping your operations running efficiently.
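The mechanics of this can be sketched outside Salesforce as a simple scheduled check. The class name, the 90-day lead time, and the 5% price uplift below are illustrative assumptions for the sketch, not Salesforce CPQ's actual API or defaults:

```java
import java.time.LocalDate;

public class RenewalCheck {
    // Illustrative assumptions: a 90-day renewal window and a 5% price uplift.
    static final int RENEWAL_WINDOW_DAYS = 90;
    static final double UPLIFT = 1.05;

    // Returns the renewal quote amount if the contract is inside the renewal
    // window, or -1 if no quote is due yet.
    static double renewalQuoteIfDue(LocalDate endDate, double currentPrice, LocalDate today) {
        if (!today.isBefore(endDate.minusDays(RENEWAL_WINDOW_DAYS))) {
            return Math.round(currentPrice * UPLIFT * 100) / 100.0;
        }
        return -1;
    }

    public static void main(String[] args) {
        LocalDate end = LocalDate.of(2025, 3, 31);
        // Inside the window: a quote is generated with the uplift applied.
        System.out.println(renewalQuoteIfDue(end, 1000.0, LocalDate.of(2025, 2, 1))); // 1050.0
        // Too early: nothing to do yet.
        System.out.println(renewalQuoteIfDue(end, 1000.0, LocalDate.of(2024, 6, 1))); // -1.0
    }
}
```

In CPQ itself this logic is configuration rather than code: the renewal quote is generated from the contract automatically, with pricing rules and discounts applied, and your team only reviews and sends it.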

Tackling Contract Amendments with Ease

But what happens when a customer wants to make changes mid-contract? Perhaps Sara reaches out midway through the year, wanting to upgrade her service package. In the past, you’d have to manually adjust the contract, update pricing, and notify the billing team. The whole process was time-consuming and left room for mistakes.

That’s where Salesforce CPQ’s amend functionality comes into play. Instead of starting from scratch, the system pulls up the existing contract, applies the requested changes, and automatically updates the quote. Whether Sara wants to add more users to her service or change the scope of her subscription, the amend functionality ensures everything is handled efficiently.

The amend feature also updates billing automatically, preventing errors that could arise from manual adjustments. Your team saves time, reduces the risk of miscommunication, and ensures that your customer is getting exactly what they need—without the hassle.

Automation Transforms Business Operations

Let’s face it—managing contracts manually is inefficient. Every contract expiration requires revisiting the original terms, configuring renewal details, and generating quotes. The more complex the contract, the higher the chances of errors. Handling amendments mid-term also introduces challenges, often leading to confusion or customer dissatisfaction.

But with Salesforce CPQ’s automated renewal and amend functionalities, the pressure is off. These features allow you to focus on what matters most: growing your business and building relationships with your customers. Automation increases accuracy, reduces manual effort, and ensures no details slip through the cracks.

Conclusion: A New Era of Contract Management

If your business is still managing renewals and amendments manually, now is the time to embrace the future with Salesforce CPQ. By automating these critical processes, you not only save time but also improve customer experience and protect your revenue streams.

Think about Sara—her smooth, seamless contract renewal and service upgrade are just one example of how CPQ’s renewal and amend features make a real difference. Your team can now focus on closing new deals, knowing that contract management is handled automatically.

Say goodbye to manual management and welcome the efficiency of Salesforce CPQ. It’s time to streamline your operations and let automation pave the way to a more successful, customer-focused future.

Powering the Future: Key Highlights from PPCC24 and What’s Next for Power Platform
https://blogs.perficient.com/2024/09/26/powering-the-future-key-highlights-from-ppcc24-and-whats-next-for-power-platform/
Thu, 26 Sep 2024

The energy was electric last week as thousands of attendees invaded MGM Grand along the Las Vegas Strip for the 3rd Annual Power Platform Community Conference (PPCC24).

From groundbreaking announcements to new features unveiled during keynotes from Microsoft’s Charles Lamanna, Corporate Vice President of Business Industry and Copilot, and Jeff Teper, President of Apps and Platforms, PPCC24 offered an electrifying three days of innovation and collaboration.

Lamanna kicked off day one with an eye-opening overview of Microsoft’s low-code superhero of today, Power Platform. With more than 48 million active users every month – surpassing the population of Spain – Power Platform has become the “one platform” for everyone, whether it’s for no code, low code or pro code. But what truly stole the show this year was Copilot – set to revolutionize how developers work, bringing automation dreams to life.

The future of low-code development is evolving, and at PPCC24, it was clear: Power Platform plus Copilot equals transformative potential for businesses across industries, signaling a new road ahead for citizen developers and Microsoft automation:


“Most people overestimate what they can do in one year and underestimate what they can do in ten years.”

Let’s dive into key announcements and takeaways from PPCC24:

The Rise of AI and Natural Language in Power Platform

AI is more deeply integrated into Power Platform than ever before, with a major emphasis on natural language capabilities and intelligent apps. Here are some of the top features unveiled during the conference:

  • Desktop Flows from Natural Language – Now in public preview, this feature enables users to generate desktop flows in Power Automate simply by using natural language. The barriers to automation just got lower for everyone, regardless of technical expertise.
  • Power Automate AI Recording for Desktop Flows – Also in public preview, this “show and tell” experience allows users to record desktop flows, making RPA workflows easier for users of all skill levels. The AI will interpret recordings to generate automated processes, speeding up adoption and productivity.
  • AI Agents for Copilot Studio – A game-changer for developers, AI agents will dynamically execute actions based on instructions and automatically handle workflow based on parameters. These agents can be trained and improved continuously, turning Copilot Studio into a true powerhouse for automation.

Coauthoring in Power Apps Now Generally Available

A highly anticipated feature from the Power Platform community, coauthoring in Power Apps ushers in the next level of developer collaboration. This functionality allows up to 10 developers to collaborate in real time, editing apps simultaneously and bringing a new level of teamwork to app development.

As Charles Lamanna put it, “We are now all coauthors of this vision.” The seamless collaboration made possible through coauthoring will undoubtedly push the boundaries of what’s possible for low-code development.


The Road Ahead is Copilot-First

A standout theme from the conference was a Copilot-first vision for the future of low-code development. With tools like Copilot Studio set to be upgraded with GPT-4, the next generation of low-code technologies will be supported by AI agents that assist with tasks like solution design, data modeling, development, and visual design.


Perficient a Standout in Power Platform’s Future

As a leading Microsoft Solutions Partner, ranked 12th for Microsoft Power Platform partners, Perficient is thrilled to be at the forefront of this Community. From hosting a successful happy hour at Chez Bippy’s the night before the conference, to engaging with attendees at our booth—where we proudly supported donations to St. Jude’s Children’s Hospital—we’re excited to continue building on PPCC24 momentum. Our focus on helping organizations harness the full power of the latest Power Platform features to innovate faster and more intelligently will continue to help us lead the way.

While PPCC24 offered new announcements and innovations, it is only the beginning. As an award-winning Microsoft Solutions Provider, we’re committed to building groundbreaking solutions and bringing the robust capabilities of Power Platform to organizations everywhere. Whether it’s through AI-driven automation, real-time app coauthoring, or our continued work with Copilot, we’re dedicated to empowering businesses to innovate at scale.

Read more about our Power Platform practice here and stay tuned for upcoming events, workshops, and other exciting Power Platform activities!

Maximize Your PPCC24 Experience with Perficient: Insights, Innovation, and Impact
https://blogs.perficient.com/2024/08/26/maximize-your-ppcc24-experience-with-perficient-insights-innovation-and-impact/
Mon, 26 Aug 2024

The Power Platform Community Conference 2024 in Las Vegas is fast approaching, and it’s shaping up to be one of the most impactful events of the year for anyone involved in digital transformation. Whether you’re a seasoned professional or just getting started with Microsoft’s Power Platform, this conference offers unparalleled opportunities to learn, connect, and grow. At Perficient, we’re excited to share our expertise, showcase our success stories, and connect with you to explore how we can help you maximize your Power Platform investment. Here’s everything you need to know to make the most of this conference, from what to expect to why you should engage with Perficient.

What is the Power Platform Community Conference?

The Power Platform Community Conference (PPCC) is the premier event for professionals who use or are interested in Microsoft’s Power Platform. This annual gathering brings together thousands of developers, business leaders, and technology enthusiasts from around the world to explore the latest trends, tools, and best practices in Power Platform. PPCC 2024 is set to showcase cutting-edge AI innovations, building on the success of previous years. It offers more than 150 sessions and keynotes, along with 20 hands-on workshops, and opportunities to connect with and gain insights from Microsoft thought leaders, product experts and developers, MVPs, and peers.

Key Takeaways from Last Year’s Conference

The 2nd annual Power Platform Community Conference in 2023 was a major success, highlighting the growing momentum behind low-code development. Some key takeaways include:

  • Low-Code Momentum: The 2023 conference underscored the rapid expansion of the low-code market, with Power Platform playing a central role in enabling organizations to innovate quickly and efficiently.
  • AI-Powered Solutions: There was a significant focus on integrating AI with Power Platform, particularly through tools like AI Builder and Power Automate. These advancements are helping organizations automate more complex tasks, driving efficiency, and reducing manual work.
  • Community and Collaboration: The strength of the Power Platform community was a key theme, with thousands of professionals collaborating to share insights, solutions, and best practices.

What’s New for the 2024 Conference?

The 2024 conference will build on these themes, with an even stronger focus on AI-driven innovation. Microsoft plans to unveil several new AI features designed to help users automate more complex tasks and gain deeper insights from their data. The conference will highlight how generative AI advancements can be integrated seamlessly with existing Power Platform solutions to enhance productivity and efficiency.

This year, you can expect:

  • Showcasing AI Innovations: New AI capabilities in Copilot Studio, Power Automate, Power BI, and AI Builder that simplify the implementation of intelligent automation and analytics solutions.
  • Hands-On Labs and Networking: Continued opportunities to engage directly with the technology through hands-on labs and to connect with other professionals and experts in the field.
  • Expert-Led Sessions: Sessions led by industry experts focused on how AI is transforming the approach to digital transformation.

For more details on what to expect from this year’s conference, check out Microsoft’s announcement here.

Getting Registered

To register for the Power Platform Community Conference, visit the official conference registration page. Full conference passes start at $1,849 and will be raised to $1,899 after August 27th. You can add on one, two, or three full-day workshops for additional costs.

Once registered, take some time to plan your conference experience by reviewing the agenda and identifying which sessions align with your current projects or areas of interest.

Why Perficient Leads in Power Platform Solutions

At Perficient, our passion for Power Platform stems from its transformative impact across various industries. We’ve developed a proven track record, backed by 30+ certified experts and over 50 successful enterprise projects, delivering tangible results for our clients. Whether it’s implementing a Center of Excellence (COE) for a global auto manufacturer or building an automation program for a healthcare provider, our diverse industry experience allows us to craft tailored solutions that address unique business challenges.

We understand that every organization is at a different stage of its Power Platform journey. Whether you’re just starting or looking to optimize, our solutions and workshops are designed to align with your organization’s maturity level, ensuring you maximize your Power Platform investment.

Why Talk to Us at PPCC24

  1. Custom Solutions for Unique Challenges: We tailor our Power Platform solutions to meet your specific business needs, from app development to automation and data analytics.
  2. Deep Industry Insights: Our extensive experience across industries equips us with the insights needed to leverage Power Platform for addressing sector-specific challenges.
  3. Commitment to Long-Term Success: Beyond implementation, we offer ongoing support, maintenance, and optimization to ensure your Power Platform environment continues to deliver value as your business grows.

By connecting with Perficient at PPCC24, you’re not just getting a solution; you’re gaining a partner committed to your success.

We’re looking forward to the Power Platform Community Conference and hope to see you there. Be sure to visit us at booth #134, where you can learn more about our success stories, discuss your specific challenges, and discover how Perficient can help you harness the full potential of Power Platform. Let’s work together to turn your vision into reality.

For more information about our Power Platform capabilities, visit Perficient’s Power Platform page.

Automated Resolution of IBM Sterling OMS Exceptions
https://blogs.perficient.com/2024/08/02/automated-resolution-of-ibm-sterling-oms-exceptions/
Fri, 02 Aug 2024

In IBM Sterling OMS, Exception Handling is the procedure for managing deviations from the normal order processing flow – including incorrect pricing, missing information, inventory issues, stock shortages, payment issues, or shipping errors – which require immediate attention to preserve service quality and operational continuity. Retail businesses manage order processing and exception handling through manual entries and semi-automated systems. These tasks are typically divided among customer service teams, logistics staff, and operations managers, who rely heavily on traditional tools like spreadsheets and email communications.

The Strategic Impact of Automation

Order exception handling procedures are crucial to maintaining competitive advantage and customer satisfaction, but the traditional approach carries a heavy workload. One report suggests that employees spend around 30% of their time managing email alone, much of it communications related to order and exception management. In addition to being time-consuming, these manual processes are prone to errors that can affect your bottom line and customer satisfaction. With rising consumer expectations for quick service and flawless execution, automating these processes has become a strategic priority. Automation can transform every aspect of exception handling by improving efficiency and precision.

IBM OMS provides a reprocessing flag that makes an exception re-processable, but there is no out-of-the-box automation process.

Automatic exception handling can be done in various ways in OMS including the following.

  1. Writing a utility: We can write a utility that queries all the alerts and exceptions and knows the possible solutions for each exception. For example, an order-creation failure caused by a cache issue under multi-threading can be fixed by a simple reprocess, so we specify that error code in the utility for reprocessing. The utility calls the OMS REST API to retrieve each exception and its details, identifies the solution, and then either reprocesses the message as-is or modifies the input XML and reprocesses it. Sometimes the input XML must be modified to fix the issue before reprocessing.
  • Pros: This is the better option for automatic exception resolution in a SaaS environment, where querying the database directly is not allowed. Changes to the utility also do not require a build.
  • Cons: A separate environment is needed to run the utility.
  2. Writing an agent server: We process the exception within OMS itself by creating an agent server. We specify the error codes and, for each one, whether to fix the exception and then reprocess, or simply reprocess.
  • Pros: This does not require a separate environment; an OMS agent server handles the work.
  • Cons: The code is tied to the project workspace, so any change must go through the build process.
  3. Utility with database query: This can only be done on-prem, since a SaaS environment does not support querying the database directly. Here we query the database directly to get the exceptions, then reprocess (or fix and reprocess, depending on the error code) using the API.
  • Pros: This is an easy, quick utility: just write the database query and reprocess.
  • Cons: A separate environment is needed to run the utility.
  4. Reprocessing immediately on exception: This approach has a significant limitation: if not handled properly, it can crash the server or fail to process the actual message. Because the implementation risk is high, it is recommended to use this approach sparingly and to implement it carefully so that it never gets stuck in a loop.
  • Pros: No overhead or separate utility is required to reprocess the exception.
  • Cons: It can only be used for exceptions that are known to be fixable by a simple reprocess.
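The utility approach above can be sketched as a handler table keyed by error code. The error codes, the XML fix, and the record shape below are illustrative assumptions for the sketch, not actual Sterling OMS error codes or APIs; in a real utility, the exceptions would be fetched from and resubmitted to the OMS REST API:

```java
import java.util.*;
import java.util.function.UnaryOperator;

public class ExceptionReprocessor {
    // Hypothetical record of a failed message pulled from the OMS REST API.
    record OmsException(String errorCode, String inputXml) {}

    // Map each known error code to a transform applied before reprocessing.
    // UnaryOperator.identity() means "reprocess the XML unchanged"; other
    // entries patch the XML first. These codes and fixes are illustrative.
    static final Map<String, UnaryOperator<String>> FIXES = Map.of(
            "CACHE_CONFLICT", UnaryOperator.identity(),
            "MISSING_CURRENCY", xml -> xml.replace("<Order ", "<Order Currency=\"USD\" "));

    static List<String> reprocess(List<OmsException> exceptions) {
        List<String> resubmitted = new ArrayList<>();
        for (OmsException ex : exceptions) {
            UnaryOperator<String> fix = FIXES.get(ex.errorCode());
            if (fix != null) {
                resubmitted.add(fix.apply(ex.inputXml())); // would be POSTed back to OMS
            } // unknown error codes are left for manual triage
        }
        return resubmitted;
    }

    public static void main(String[] args) {
        List<OmsException> batch = List.of(
                new OmsException("CACHE_CONFLICT", "<Order OrderNo=\"1\"/>"),
                new OmsException("MISSING_CURRENCY", "<Order OrderNo=\"2\"/>"),
                new OmsException("UNKNOWN", "<Order OrderNo=\"3\"/>"));
        System.out.println(reprocess(batch).size()); // 2
    }
}
```

Keeping unknown error codes out of the automated path is the safety valve: only exceptions with a vetted fix are reprocessed, which avoids the retry-loop risk described in option 4.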

Advantages of Automation

  • Operational cost reductions from minimizing manual labor and streamlining processes. Automation can cut operational expenses related to order processing by up to 40% by reducing the need for manual labor and decreasing the incidence of errors.
  • Accuracy enhancements and lower error rates in order processing.
  • Automated systems are highly scalable, allowing businesses to handle increased order volumes without proportionate staffing or manual workload increases.

Automation significantly improves customer satisfaction and loyalty by ensuring accurate, timely order processing and proactive exception handling. Automation not only brings substantial cost savings and operational efficiencies, but it also enhances the overall customer experience, paving the way for sustained business growth and success. Automation can be a valuable tool in managing order exceptions. By automating the process, we can reduce the risk of human error and ensure that exceptions are handled consistently. These benefits are not just specific to IBM Sterling OMS, but any OMS system can have these benefits by automating the processing of exceptions.

Perficient Recognized as a Major Player in IDC MarketScape for Cloud Professional Services
https://blogs.perficient.com/2024/07/02/perficient-recognized-in-idc-marketscape-for-cloud-professional-services/
Tue, 02 Jul 2024

Navigating the complexities of cloud technology requires an exceptional partner. We are thrilled to announce that Perficient has been named a Major Player in the IDC MarketScape: Worldwide Cloud Professional Services 2024 Vendor Assessment (Doc #US51406224, June 2024).

What Does This Inclusion Mean for Perficient?

“We’re honored to be recognized as a Major Player in this IDC MarketScape Report, a distinction we believe highlights our holistic approach to cloud strategy and our implementation expertise,” said Glenn Kline, Perficient’s Area Vice President of Product Development Operations. “We combine our Envision Framework, migration and modernization expertise, and our strong network of partnerships with leading cloud providers to drive measurable business outcomes for our customers. Our Agile-ready global team enables businesses to think big, start small, and act fast so they can scale their cloud ecosystem over time and deliver on the outcomes promised by cloud computing.”

According to the IDC MarketScape, businesses should “consider Perficient if [they] are looking for a midsized cloud services provider that can combine client intimacy with industrial-strength capabilities in technology transformation and experience design and build.” Additionally, our global managed services group has created comprehensive accelerators such as the App Modernization IQ, Cloud FinOps IQ, and Green Impact IQ, serving as effective tools for guiding clients in cloud operations strategies.

What Does This Mean for Our Clients?

We believe this inclusion reaffirms Perficient as a trusted partner in cloud transformation. Perficient Cloud, our comprehensive suite of six solution areas, serves as a roadmap to navigate the evolving landscape of cloud technology. These areas focus on delivering critical business and technology capabilities, with agnostic offers and accelerators tailored to meet the unique needs of each client. Our Agile-ready global team enables businesses to think big, start small, and act fast, allowing scalable cloud ecosystems that maximize investment. Our focus areas include:

  • Technology Modernization: Enhancing performance and efficiency through updated infrastructure.
  • Product Differentiation: Creating innovative product offerings that stand out.
  • Customer Engagement: Improving interactions and experiences with personalized, data-driven approaches.
  • Data & AI Enablement: Driving insights and innovation with advanced analytics and AI.
  • Automation & Operational Agility: Boosting efficiency with automation solutions.
  • Sustainable Practices: Promoting responsible and impactful cloud strategies.

Join Us on Our Cloud Journey

We believe our inclusion in the IDC MarketScape report highlights our commitment to helping businesses navigate the complexities of cloud transformation. We are dedicated to delivering top-tier cloud solutions that drive growth and innovation.

To learn more about Perficient’s cloud professional services, download the IDC MarketScape: Worldwide Cloud Professional Services 2024 Vendor Assessment report available to IDC subscribers and for purchase. You can also read our News Release for more details on this recognition.

 

Demystifying Regex: A Comprehensive Guide for Automation Engineers
https://blogs.perficient.com/2024/06/24/demystifying-regex-a-comprehensive-guide-for-automation-engineers/
Mon, 24 Jun 2024

Introduction:

Regular expressions, often abbreviated as regex, stand as indispensable assets for automation engineers. These dynamic constructs facilitate pattern matching and text manipulation, forming a robust foundation for tasks ranging from data validation to intricate search and replace operations. This comprehensive guide aims to navigate through the intricacies of regex, catering to various proficiency levels — from beginners to intermediates and advanced users.

 

Beginner-Friendly Regex

\d – Digit Matching

The \d expression is a foundational tool for identifying digits within the 0-9 range. For instance, using \d{3} allows precise capture of three consecutive digits, offering accuracy in recognizing numerical patterns. In a practical scenario:

import java.util.regex.*;

public class Main {

    public static void main(String[] args) {

        String text = "The price is $500.";

        Pattern pattern = Pattern.compile("\\d{3}");

        Matcher matcher = pattern.matcher(text);

        if (matcher.find()) {

            System.out.println("Found: " + matcher.group());

        }

    }

}

 

\w – Embracing Word Characters

\w proves useful for recognizing word characters, encompassing alphanumeric characters and underscores. When coupled with the + quantifier (\w+), it transforms into a versatile tool for capturing one or more word characters. For example:

import java.util.regex.*;

public class Main {

    public static void main(String[] args) {

        String text = "User_ID: john_doe_123";

        Pattern pattern = Pattern.compile("\\w+");

        Matcher matcher = pattern.matcher(text);

        if (matcher.find()) {

            System.out.println("Found: " + matcher.group());

        }

    }

}

 

\s – Recognizing Whitespace Characters

\s becomes the preferred expression for identifying whitespace characters, including spaces, tabs, and line breaks. The flexibility of \s* enables the recognition of zero or more whitespace characters. An example:

import java.util.regex.*;

public class Main {

    public static void main(String[] args) {

        String text = "   This is a sentence with spaces.   ";

        Pattern pattern = Pattern.compile("\\s*");

        Matcher matcher = pattern.matcher(text);

        if (matcher.find()) {

            System.out.println("Found: " + matcher.group());

        }

    }

}
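The introduction mentions search and replace operations, and the same character classes drive replacement as well. As a small supplementary example (not part of the original sequence above), runs of whitespace can be collapsed with String.replaceAll:

```java
public class Main {

    public static void main(String[] args) {

        String text = "Too   many    spaces   here.";

        // \s+ matches one or more whitespace characters; replaceAll
        // substitutes a single space for each run it finds.
        String normalized = text.replaceAll("\\s+", " ");

        System.out.println("Result: " + normalized); // Result: Too many spaces here.

    }

}
```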

 

Intermediate Regex Techniques

\D – Non-Digit Character Recognition

Building on the \d foundation, \D complements it by identifying any character that is not a digit. The application of \D+ efficiently captures one or more non-digit characters. Consider the following:

import java.util.regex.*;

public class Main {

    public static void main(String[] args) {

        String text = "#XYZ123";

        Pattern pattern = Pattern.compile("\\D+");

        Matcher matcher = pattern.matcher(text);

        if (matcher.find()) {

            System.out.println("Found: " + matcher.group());

        }

    }

}

 

\W – Non-Word Character Identification

Parallel to \w, \W expands the horizon by identifying any character that is not a word character. Consider \W{2,} for capturing two or more non-word characters. Example:

import java.util.regex.*;

public class Main {

    public static void main(String[] args) {

        String text = "Special characters: @$!%";

        Pattern pattern = Pattern.compile("\\W{2,}");

        Matcher matcher = pattern.matcher(text);

        if (matcher.find()) {

            System.out.println("Found: " + matcher.group());

        }

    }

}

 

Advanced Regex Tactics

[g-s] – Character Range Inclusion

Introducing the concept of character ranges, [g-s] identifies any character falling between ‘g’ and ‘s,’ inclusive. This proves valuable for capturing a specific set of characters within a defined range. For instance:

import java.util.regex.*;

public class Main {

    public static void main(String[] args) {

        String text = "The highlighted section goes from g to s.";

        Pattern pattern = Pattern.compile("[g-s]+", Pattern.CASE_INSENSITIVE);

        Matcher matcher = pattern.matcher(text);

        if (matcher.find()) {

            System.out.println("Found: " + matcher.group());

        }

    }

}

 

Real Data Application

True proficiency in regex lies in its practical application to real-world data. Regularly practicing with authentic datasets enhances understanding and proficiency.

Suppose you have a dataset of phone numbers, and you want to extract all the area codes. You could use the following regex:

import java.util.regex.*;
import java.util.ArrayList;
import java.util.List;

public class Main {

    public static void main(String[] args) {

        String data = "Phone numbers: (123) 456-7890, (987) 654-3210, (555) 123-4567";

        Pattern pattern = Pattern.compile("\\(\\d{3}\\)");

        Matcher matcher = pattern.matcher(data);

        List<String> areaCodes = new ArrayList<>();

        while (matcher.find()) {

            areaCodes.add(matcher.group());

        }

        System.out.println("Area Codes: " + areaCodes);

    }

}

Output:

Area Codes: [(123), (987), (555)]

In Conclusion:

Regex is a powerful tool that, when employed adeptly, empowers automation engineers to tackle diverse challenges in software development and testing. By comprehending the nuances of regex at different proficiency levels, engineers can enhance their ability to create efficient and effective automation scripts.

]]>
https://blogs.perficient.com/2024/06/24/demystifying-regex-a-comprehensive-guide-for-automation-engineers/feed/ 0 349336
Web APIs in Appian: Bridging the Gap Between Systems https://blogs.perficient.com/2024/05/27/appian-web-apis/ https://blogs.perficient.com/2024/05/27/appian-web-apis/#comments Mon, 27 May 2024 08:40:44 +0000 https://blogs.perficient.com/?p=344465

Seamless integration between various systems and applications is crucial for efficient data sharing and enhanced functionality. Appian, a leading low-code automation platform, recognizes this need and provides a powerful toolset for creating Web APIs.

Web APIs: Bridging the Gap

Web APIs, or Application Programming Interfaces, serve as a bridge between different software applications, enabling them to communicate and share data seamlessly. In the context of Appian, Web APIs provide a way to expose Appian data and services to external systems, facilitating integration with other software solutions.

Key Features of Web APIs

  • Integration and Data Exchange: Appian’s Web API feature allows for seamless integration with external systems and services, enabling the exchange of data in real time. It supports RESTful web services, which can be used to expose Appian data and processes to other applications or to consume external data within Appian.
  • Security and Customization: Appian Web APIs come with built-in security features such as authentication and authorization, ensuring that only authorized users can access the API. Additionally, they can be customized to perform complex business logic, validate inputs, and format responses, providing flexible and secure data handling capabilities.
  • Scalability and Performance: Appian Web APIs are designed to handle high volumes of requests efficiently, ensuring that performance remains optimal even as the demand grows. This scalability is crucial for enterprise-level applications that require reliable and fast data processing and integration capabilities.

How to Harness the Power of Web APIs in Appian

Define Your API

  • When defining your API, carefully choose the URLs or URIs that serve as access points for various resources or specific actions within your system. This crucial step sets the foundation for seamless interaction with your API.

Create the API in Appian

  1. Choose the Appropriate HTTP Methods
    • Determine the HTTP methods by specifying which ones (GET, POST, PUT, DELETE, etc.) your API will support for each endpoint.
    • Define the request/response formats by specifying the data formats (such as JSON, XML, etc.) that your API will use for sending requests and receiving responses.
  2. Design Your API
    • Consider the needs of both Appian and the external system when designing your Web API. Define clear and concise documentation that outlines the API’s functionality, required parameters, and expected responses.
  3. Implement Security Measures
    • Security takes centre stage when exposing your Appian data and services to external systems. Implement authentication and authorization mechanisms, such as API keys or OAuth tokens, to ensure that only authorized entities can access your API.

Test Thoroughly

  • Before making your Web API available to external systems, thoroughly test it using various scenarios and edge cases. Identify and resolve potential issues to ensure a smooth and reliable integration experience.

Deploy the API

  • Once you have finished creating and testing your API, deploy it to the desired environment (development, test, or production).
  • Ensure that the necessary resources (servers, databases, etc.) are appropriately configured and accessible for the API to function correctly in the deployment environment.

Document and Publish the API

  • Create documentation for your API, including details about the endpoints, supported methods, request/response formats, input/output parameters, and any authentication/authorization requirements.
  • Publish the documentation internally or externally to make it available to the API consumers.

Monitor and Maintain

  • Establish monitoring and logging mechanisms to track your API’s performance, usage, and errors.

Challenges while developing Appian Web API

  • Authentication Challenges: Struggles with configuring and maintaining authentication methods like API keys, tokens, or OAuth can result in issues accessing the system.
  • Data Validation Complexity: Verifying and managing data input accuracy, as well as dealing with validation errors, can be tricky, particularly with intricate data structures.
  • Endpoint Configuration: Errors in configuring endpoints, including incorrect URLs or URIs, can disrupt API functionality.
  • Security Vulnerabilities: Overlooking security best practices may expose APIs to vulnerabilities, potentially leading to data breaches or unauthorized access.
  • Third-Party Service Dependencies: If the API relies on third-party services, developers may face difficulties when those services experience downtime or changes.
  • Error Handling: Inadequate error handling and unclear error messages can make troubleshooting and debugging challenging.
  • Documentation Gaps: Poorly documented APIs or incomplete documentation can lead to misunderstandings, making it difficult for developers to use the API effectively.
  • Integration Challenges: Integrating the API with external systems, especially those with differing data formats or protocols, can pose integration challenges.

Developers building Web APIs often face tricky situations like ensuring secure access, validating data correctly, and making sure everything communicates smoothly. Solving these challenges leads to powerful APIs that make sharing information between different systems easier and safer.

Creating a Web API to Share Information

We will be creating a Web API to share information about people that is stored in the Appian database with third parties, who can access it via a GET call on a specific URL.

  • Log into Appian Designer from your Appian developer account.
  • In Appian Designer, navigate to the “Objects” section.
  • Create a new object by clicking on “New.”
  • In the object creation menu, select “Web API”.


  • You will be prompted to define your Web API. Provide a name and description for your API.


  • Configure the endpoints by specifying the URLs or URIs used to access resources or perform actions through your API.
  • Specify the data inputs (request parameters) and outputs (response data) for each endpoint within the Web API.


  • Define the structure of the data that your API will send and receive.
  • For each endpoint, implement the logic using Appian expressions, business rules, or by integrating with external data sources or services. Ensure the logic meets the endpoint’s requirements.


  • After configuring your Web API, save your changes.


  • Use the built-in Appian testing capabilities or external tools like Postman to test your Web API. Send requests to the defined endpoints and verify the responses.
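The same GET endpoint can also be exercised programmatically. Below is a minimal Java sketch using the JDK's built-in HttpClient types; the base URL and API key are placeholders, and the /suite/webapi/ path with the Appian-API-Key header assumes API-key authentication has been configured for the Web API:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class AppianApiClient {

    // Builds a GET request for the people endpoint. The endpoint path and
    // header assume an Appian API key has been configured; both values
    // passed in here are placeholders.
    static HttpRequest buildRequest(String baseUrl, String apiKey) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/suite/webapi/people"))
                .header("Appian-API-Key", apiKey)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildRequest("https://example.appiancloud.com", "YOUR_API_KEY");
        System.out.println(request.method() + " " + request.uri());
        // To send it: HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    }
}
```

If the API uses a different scheme (e.g., the OAuth tokens mentioned earlier), the header would change accordingly.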


In conclusion, by following these steps you can efficiently create and configure a Web API in Appian, ensuring it is ready for use and thoroughly tested for seamless integration with other systems. For more information, you can visit the Appian documentation.

]]>
https://blogs.perficient.com/2024/05/27/appian-web-apis/feed/ 1 344465
Exploring Blue Prism’s Web-Based Extension https://blogs.perficient.com/2024/04/17/exploring-blue-prisms-web-based-extension/ https://blogs.perficient.com/2024/04/17/exploring-blue-prisms-web-based-extension/#comments Wed, 17 Apr 2024 09:07:32 +0000 https://blogs.perficient.com/?p=361671

Empowering Automation in the Digital Era

In this highly digitally connected world, companies are always looking for new and creative ways to improve efficiency, simplify procedures, and provide better customer service. Robotic Process Automation (RPA) has become a game-changing technology that helps businesses speed up operations, cut down on human error, and automate repetitive activities. One of the leading RPA platforms, Blue Prism, provides an extensive set of tools and features to automate various business processes. A good example is the Web-based Extension, a powerful feature that enables web applications to be automated, opening up new opportunities for businesses.


Utilizing the Web-Based Extension for Blue Prism

The main objective of Blue Prism’s Web-based Extension is to enable seamless interaction between web-based apps and Blue Prism robots. With the help of this extension, robots can communicate with websites in the same manner as people do: they can provide data, extract information, and initiate activities. By leveraging this capability, businesses can automate complex processes that require interaction with web interfaces, boosting operational accuracy and efficiency. The extension acts as a link between web browsers and the Blue Prism platform, enabling robots to communicate with web content, retrieve data, and take actions within web applications.

 

Key Features and Capabilities

  • Browser Agnostic: The Web-based Extension works with commonly utilized web browsers such as Microsoft Edge, Mozilla Firefox, and Google Chrome, ensuring flexibility and adaptability in automation.
  • Element Interrogation: The extension permits robots to detect and examine web elements such as buttons, drop-down menus, text fields, and links, making reliable automation possible.

 

  • Interaction Modes: Blue Prism offers two modes for interacting with web applications: HTML Mode and Accessibility Mode. Applications with typical HTML structures should use HTML Mode, while Accessibility Mode offers improved interoperability with web frameworks and dynamically generated content.
  • Event Handling: The extension facilitates event-driven automation, enabling robots to react to inputs like mouse clicks, keyboard inputs, and page load events.

 

  • Data Extraction and Validation: To ensure accuracy and integrity in data processing, robots can extract data from web pages, verify form entries, and perform data verification tasks.
  • Seamless Integration with Object Studio: The Web-based Extension allows developers to create reusable automation objects for web applications, boosting automation development efficiency and scalability.

 

 

Using its browser extensions, Blue Prism offers native support for automating websites and apps in Google Chrome, Mozilla Firefox, and Microsoft Edge (a Chromium-based web browser). The extensions allow Blue Prism to interact with websites and apps that appear in these browsers, which makes it simple to model business processes that depend on them.

The Blue Prism extensions create a connection between Blue Prism and the web page in Chrome, Edge, and Firefox. This connection enables data interchange and element manipulation.

Types of Extensions in Blue Prism:

  • Chrome: Used to automate websites and apps in Chromium-based versions of Chrome and Edge.
  • Firefox: Used to automate Firefox websites and applications.

Chrome Extension for Blue Prism

Designed to simplify web automation within the Google Chrome browser environment, the Blue Prism Chrome Extension (also commonly known as the Blue Prism Browser extension) is a lightweight extension. It offers a simplified interface through which Chrome users can interact with web elements and automate tasks.

Important Characteristics:

  • Integration with Chrome: This extension works in combination with the Google Chrome web browser to take advantage of its features and offer an automated task environment that is comfortable and recognizable.
  • Point-and-Click Interface: By enabling users to interact with site items directly from the Chrome browser window, a point-and-click interface makes task automation simpler.
  • Lightweight and User-Friendly: Users with various levels of technical proficiency can utilize the Chrome Extension because it is both lightweight and user-friendly.
  • Basic Automation Capabilities: It provides basic automation features like data entry, form filling, and element selection on web sites.

 

Firefox Extension for Blue Prism

The Blue Prism Firefox Extension is a browser extension that provides web automation in the Mozilla Firefox environment. Much like the Chrome extension, it lets users interact with web elements and automate tasks directly within Firefox.

Important Characteristics:

  • Efficient Integration with Firefox: The extension smoothly integrates with the Mozilla Firefox browser, leveraging Firefox’s features and offering a comfortable environment for web automation activities.
  • Point-and-Click Interface: Users can automate operations by interacting with web items directly within the Firefox browser window. Support for Firefox also ensures Blue Prism’s cross-browser compatibility, enabling users to automate tasks in both Chrome and Firefox environments.
  • Key Automation Features: Like the Chrome Extension, this extension provides key automation features such as element selection, data entry, and form completion on websites.

 

Benefits and Use Cases

  • Web-Based Form Filling and Data Entry

Thanks to the Web-based Extension, robots can automate data input operations such as submitting requests, filling out online forms, and updating data in web-based apps. This capability speeds up data processing cycles, reduces manual error rates, and streamlines company processes.

  • Data extraction and web scraping

Organizations may deploy the extension to monitor market trends, acquire competitive intelligence, extract data from websites, and add pertinent information to databases. This supports strategic initiatives, enhances organizational insights, and makes well-informed decision-making more straightforward.

  • Automation in E-commerce

Web-based extensions can automate a wide range of operations within the e-commerce industry, including inventory management, order processing, and customer service. Businesses can improve order accuracy, maximize customer satisfaction, and optimize operational efficiency by automating repetitive processes.

  • Automation in Customer Service

Robots equipped with the Web-based Extension can automate customer service processes by interacting with web-based chatbots, retrieving account information, and processing service requests. This enables businesses to provide individualized customer experiences, speed up response times, and increase client retention.

  • Automation in Financial Services

The extension can automate operations in the financial services sector, including compliance reporting, transaction monitoring, and account reconciliation. By automating repetitive processes, financial institutions can minimize operational risks, guarantee regulatory compliance, and enhance audit trails.

Conclusion

Blue Prism’s Web-based Extension empowers organizations to extend automation capabilities to web-based applications enabling efficient and scalable process automation across a range of sectors and business operations. Organizations in the digital age may accomplish unprecedented levels of productivity, agility, and innovation by utilizing this feature to its full potential and following the suggested processes.

To sum up, Blue Prism’s Web-based Extension is an essential component of its goal to promote automation excellence and enable organizations to achieve success in an extremely competitive marketplace.

]]>
https://blogs.perficient.com/2024/04/17/exploring-blue-prisms-web-based-extension/feed/ 2 361671
Mastering Blue Prism Debugging Techniques https://blogs.perficient.com/2024/04/15/mastering-blue-prism-debugging-techniques/ https://blogs.perficient.com/2024/04/15/mastering-blue-prism-debugging-techniques/#comments Mon, 15 Apr 2024 11:07:26 +0000 https://blogs.perficient.com/?p=338578

In the world of robotic process automation (RPA), Blue Prism stands out as a leading platform that empowers organizations to automate their business processes. While designing and developing automated solutions using Blue Prism is essential, debugging plays a crucial role in ensuring smooth operation and efficient execution. This blog will explore some invaluable debugging techniques in Blue Prism that can help you identify and resolve issues, ultimately optimizing your automation workflows.

1. Enable Logging

Blue Prism provides comprehensive logging capabilities to track the execution of processes and identify potential errors. By enabling logging, you gain valuable insights into the system’s behavior, allowing you to trace the flow of execution and pinpoint the exact location of issues. Depending on the granularity required, you can enable logging at various levels, such as object, process, or business object. Analyzing logs can provide crucial information for troubleshooting and improving your automation solutions.

2. Use Breakpoints

Break allows you to pause the debugging process at any given moment. Once paused, you can choose to Step, Step Into, Step Over, or Stop the debugging process.

3. Step

This is used to step inside any particular stage at a time, so if you have a page stage, an action stage, or a process stage and if you press ‘Step’ then the flow would go inside the reference workflow. For example, if you are calling an action from a Process Studio and press step, the control would simply go to that action in the Object Studio. (Shortcut F11)

4. Step Over

This is used to step over any particular stage at a time, so if you have a page stage, an action stage, or a process stage and if you press ‘Step Over’, the flow will proceed to the next stage on that page, executing all the workflows that the stage is referencing in the reference workflow, but not allowing you to enter it. For example, if you are calling an action from a Process Studio and you press step over, then the control would simply go to the next action on that page of the Process Studio while all the workflow inside the action will be executed at the backend without you ever going inside the Object Studio. (Shortcut F10)

5. Step Out

This is used to step out of the currently visible page executing all the particular stages called within the workflow at that point of time, so if you have a page stage, an action stage, or a process stage and if you press ‘Step Out’ then the entire workflow in that page is executed taking the control back to the parent page, process or action from where it is called.

For example, if you are calling an action from a Process Studio and press step out, the entire workflow on that page would be executed, including the action and whatever stage you have called after it in the specific workflow. In case you press step out from a Main Page, the workflow would simply be completed, as there is no parent page in this case; but if you press step out inside a subpage, action, or sub-process, the workflow would be executed and control would return to the parent page or process. (Shortcut Shift+F11)

6. Exception Handling

Exception handling plays a vital role in robust automation design. Blue Prism provides exception-handling capabilities to gracefully handle errors and exceptions during process execution. By implementing proper exception-handling techniques, you can catch and log errors, perform specific actions or retries, and gracefully recover from failures. Effective exception handling minimizes the impact of errors on the overall automation solution and enhances its reliability.

7. Utilize the Blue Prism Developer Tools

Blue Prism offers a set of developer tools that can greatly assist in debugging and troubleshooting. The Application Modeler allows you to inspect and interact with target applications, verifying selectors and ensuring the correct identification of elements. The Object Viewer provides a detailed view of the application’s hierarchy and properties, aiding in object configuration and validation. Leveraging these developer tools can help identify object recognition and interaction issues.

8. Collaboration and Documentation

Debugging complex automation solutions often requires collaboration with other team members, such as business analysts or subject matter experts. Maintaining clear and concise documentation regarding the automation process, including inputs, expected outputs, and known issues, fosters effective communication and problem-solving. Collaborative efforts ensure a holistic approach to debugging, harnessing collective expertise, and reducing the time taken to identify and resolve issues.

These debugging techniques are used in various situations depending on circumstances so that you can quickly debug a very complex workflow no matter how long it is.

Conclusion

Debugging is an integral part of the automation development lifecycle and employing effective debugging techniques is crucial for successful Blue Prism implementations. By enabling logging, using breakpoints, inspecting data and variables, implementing proper exception handling, leveraging developer tools, and fostering collaboration, you can efficiently troubleshoot and resolve issues within your Blue Prism automation. With these techniques at your disposal, you’ll be well-equipped to tackle challenges and ensure the smooth functioning of your automation solutions.

]]>
https://blogs.perficient.com/2024/04/15/mastering-blue-prism-debugging-techniques/feed/ 2 338578
Azure SQL Server Performance Check Automation https://blogs.perficient.com/2024/04/11/azure-sql-server-performance-check-automation/ https://blogs.perficient.com/2024/04/11/azure-sql-server-performance-check-automation/#respond Thu, 11 Apr 2024 13:37:29 +0000 https://blogs.perficient.com/?p=361522

On operational projects that involve heavy daily data processing, there is a need to monitor DB performance. Over time, the workload grows, causing potential issues. While there are best practices to handle the processing by adopting DBA strategies (indexing, partitioning, collecting stats, reorganizing tables/indexes, purging data, allocating bandwidth separately for ETL/DWH users, peak-time optimization, effective dev query rewrites, etc.), it is necessary to be aware of DB performance and monitor it consistently for further action.

If admin access is not available to validate performance on Azure, building automations can help monitor the metrics and take the necessary steps before the DB causes performance issues or failures.

For DB performance monitoring, an IICS Informatica job can be created with a Data Task that executes a query against DB (SQL Server) metadata tables to check performance; emails can be triggered once CPU/IO usage exceeds the threshold percentage (e.g., 80%).

The IICS mapping design is shown below (scheduled hourly). Email alerts contain the metric percentage values.


Note: Email alerts are triggered only if the threshold limit is exceeded.

                                             

IICS ETL Design : 

                                                     


IICS ETL Code Details : 

 

  1. A Data Task is used to get the resource usage of the SQL Server (CPU and IO percent).


The query checks whether usage exceeds 80%. If usage exceeds the threshold limit (the user can set this to a specific value such as 80%), an email alert is sent.
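Outside of IICS, the same threshold check can be sketched against Azure SQL's sys.dm_db_resource_stats DMV, which exposes avg_cpu_percent and avg_data_io_percent. This is an illustrative Java sketch, not the actual IICS task; the class name and query wiring are assumptions:

```java
public class SqlServerPerfCheck {

    // Latest resource-usage sample from the Azure SQL DMV; rows are
    // recorded roughly every 15 seconds.
    static final String PERF_QUERY =
            "SELECT TOP 1 avg_cpu_percent, avg_data_io_percent "
            + "FROM sys.dm_db_resource_stats ORDER BY end_time DESC";

    // True when either metric breaches the threshold, mirroring the
    // 80% rule used by the decision task.
    static boolean exceedsThreshold(double cpuPercent, double ioPercent, double threshold) {
        return cpuPercent > threshold || ioPercent > threshold;
    }

    public static void main(String[] args) {
        // Sample numbers, not live readings.
        System.out.println(exceedsThreshold(85.0, 40.0, 80.0)); // true  -> alert
        System.out.println(exceedsThreshold(55.0, 40.0, 80.0)); // false -> no alert
    }
}
```

In a real job, PERF_QUERY would be executed over JDBC and the two columns fed into exceedsThreshold before deciding whether to send the email.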

                                                            


If Azure_SQL_Server_Performance_Info.dat has data (data is populated when CPU/IO processing exceeds 80%), the Decision task is activated and an email alert is triggered.


Email Alert:


]]>
https://blogs.perficient.com/2024/04/11/azure-sql-server-performance-check-automation/feed/ 0 361522
Exporting Media Items with Sitecore PowerShell Extensions https://blogs.perficient.com/2024/03/25/exporting-media-items-with-spe/ https://blogs.perficient.com/2024/03/25/exporting-media-items-with-spe/#respond Mon, 25 Mar 2024 19:42:39 +0000 https://blogs.perficient.com/?p=359781

Intro 📖

At this point, I think every Sitecore developer uses or has at least heard of Sitecore PowerShell Extensions (SPE). This powerful module for Sitecore is an auto-include for most teams and projects. The module is included with XM Cloud instances by default; if you use XM Cloud, then you already have SPE. If you somehow haven’t heard of SPE, stop whatever you’re doing and go check it out. 🚪🏃‍♂️

Included with SPE are several out-of-the-box (OOTB) reports that users can run. On a recent project, one of these reports came in clutch: the Unused media items report, which is located in the Sitecore content tree at the following path:

/sitecore/system/Modules/PowerShell/Script Library/SPE/Reporting/Content Reports/Reports/Media Audit/Unused media items

This report generates a list of media items that are in the media library but not used in Sitecore, where “used” is defined as being referenced at least once in the Sitecore link database. In other words, the report lists media items that are (probably) just taking up space.

Extending the Report 🚀

In my case, rather than generating a report of unused media items, I needed to do kind of the opposite–generate a report of used media items and export those items using content packages (which will be installed in higher environments as part of a content migration). Using the OOTB Unused media items report as a starting point, I wrote a more generic, general-purpose script to package up media items based on several criteria. When the script is executed, it looks like this (I used PowerShell ISE included with SPE to run the script):

Export Media Items

…continued below 👇 (…I need a bigger monitor 🖥)

Export Media Items

Parameters ⚙

Here’s a rundown of the different parameters:

  • Media to Include
    • Default: Used
    • This parameter determines if the script processes used media items, unused media items, or both used and unused media items. To determine if an item is used or not, the script queries the link database to get the referrer count for the item.
  • Media Library Folders
    • Default: (none)
    • Use this parameter to designate the folders from which you’d like the script to pull and process media items. The script includes a check to prevent duplicate media items from being processed if, for whatever reason, multiple overlapping media folders are selected.
    • Note that the root media library node can be selected, but doing so isn’t ideal for performance reasons. It’s better to limit the script to those media folders you know contain the media items you need to export. If no folders are selected, the script does nothing.
  • Extensions to Include
    • Default: (none)
    • If you’d like the script to only process media items with certain extensions, check the relevant extensions here. Note that, by default, no extensions are checked, and no extension filtering is applied. Only interested in exporting PDFs? Check the pdf extension. Want every media item regardless of extension? Don’t check any extensions.
  • Cutoff Date
    • Default: (none)
    • If specified, this parameter causes the script to only process media items that were either created or updated after this date.
    • This parameter is useful if you need to run a “delta” export to pick up any new or updated items since a previous run. Just remember the date of your last run. Note that the file names of the generated packages include a date stamp, e.g., 20240323T0504075151Z – Export Media Items 1.zip.
  • Maximum Package Size
    • Default: 25 MB
    • The script uses the Size field on media items to estimate the overall size of the files in a given content package. If the total size of the packaged media items reaches this threshold, an additional package is created until all media items are packaged.
    • The script can generate packages that are larger than this size, since the threshold check happens after a file has been added. Also note that the size on disk of a serialized item isn’t exactly the same as the number of bytes stored in the Size field. In other words, the “package chunking” logic is approximate.
    • Anecdotally, I’ve noticed that when content packages get large (over roughly 200 MB), uploading and installing them can get dicey, depending on the environment. Pick a size that makes sense for your use-case.
  • Exclude System Folders
    • Default: Checked (☑)
    • If checked, the script will ignore any media items whose path contains /System/. This is useful for excluding, say, thumbnails that are generated by Sitecore.
  • Verbose Console Output
    • Default: Checked (☑)
    • If checked, additional output is written to the console which can be useful when performing a dry run together with Debug Mode (below).
  • Debug Mode
    • Default: Checked (☑)
    • If checked, the script won’t write any content packages to disk. This is useful when performing dry runs of the script to generate the report detailing which item is in which package before committing to writing a potentially large number of files or large individual files to disk.
    • Assuming this checkbox is unchecked, the resulting content packages are saved to disk under the $SitecorePackageFolder path, which is usually C:\inetpub\wwwroot\App_Data\packages.
    • If you aren’t seeing any content packages on disk, make sure this parameter is unchecked.
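The “package chunking” described under Maximum Package Size is a simple greedy accumulation. Stripped of the Sitecore specifics, the idea looks something like this (a standalone illustrative sketch, not part of the actual script; `$itemsWithSizes` is a hypothetical collection):

```powershell
# Greedy chunking: keep adding items until the running size crosses the
# threshold, then start a new chunk. The check runs *after* adding, so a
# chunk can exceed the threshold by up to one item's size.
$maxSize = 25000000
$chunk = @(); $chunkSize = 0; $chunkNumber = 0

foreach ($item in $itemsWithSizes) {   # e.g. @{ Name = "a.pdf"; Size = 12000000 }, ...
    $chunk += $item
    $chunkSize += $item.Size
    if ($chunkSize -ge $maxSize) {
        $chunkNumber++
        Write-Host "Chunk $chunkNumber ($chunkSize bytes): $($chunk.Name -join ', ')"
        $chunk = @(); $chunkSize = 0
    }
}
# Flush any remainder (the real script handles this with an items-processed count check)
if ($chunk.Count -gt 0) {
    $chunkNumber++
    Write-Host "Chunk $chunkNumber ($chunkSize bytes): $($chunk.Name -join ', ')"
}
```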

Output 📝

The script essentially provides three different forms of output: console output, the typical SPE report results dialog (with CSV and Excel export functionality), and the content packages themselves, which are written to disk. For example, assuming Verbose Console Output is checked and a Maximum Package Size of 100 MB is selected, the console output could look something like this:

Export Media Items - Console Output

The cyan line outputs the total number of media items to be processed based on the parameters. The green lines detail the first package to be created with the magenta lines listing each media item in that first package. The green and magenta lines repeat until all of the media items are processed and all the content packages are generated. The last bit of console output will look something like this:

Export Media Items - Console

The grey lines are the paths on disk for each of the generated content packages. Note that the generated packages are named using a pattern that includes a time stamp; this is the (server) time that the script was executed and will be the same date and time for all packages generated on a particular run.
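For reference, that timestamp comes from a single Get-Date call made once at the start of the script, which is why every package from one run shares it. The file name is then assembled from the timestamp, the package name, and the package number:

```powershell
# Captured once per run; FileDateTimeUniversal is a built-in .NET format string.
$timestamp = Get-Date -Format FileDateTimeUniversal   # e.g. 20240323T0504075151Z (UTC)
$packageZipFileName = "$timestamp - Export Media Items 1.zip"
```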

The report results dialog is similar to other SPE reports and could look something like this:

Export Media Items - Report Dialog

From here, the user can see which media items are in which packages. By exporting this data to a CSV or Excel file, users can audit installed content packages, set up additional downstream automation, use the file as a manifest for archiving media items, etc.

Ideas for Improvements 💡

There’s always room for improvement. These were some of my ideas:

  • Adding support for non-media items would be cool, though it would mean having to figure out how to calculate the size of the packages without the use of the Size field (which is unique to media items and is automatically set by Sitecore when uploading a file). I suppose you could determine the size of a serialized item in memory and use that…🤔
  • Including more extensions: other image extensions, video extensions, Office extensions, etc.
  • Allowing for a date range to process files added or modified within a given time span.
  • Exposing more options for package installation rather than assuming an overwrite.
  • Supporting a configurable naming convention for generated packages.
  • General performance tweaks.

Do you have other ideas for improvements? Do you see a bug or typo that I missed? Please drop me a comment below! 💬 👇

The Code 💻

The script is available as a public GitHub Gist here and is also duplicated below if you don’t get the Gist (dad joke?).

<#
    .SYNOPSIS
        Creates content packages for media items matching certain criteria.
        By default, packages are saved to disk at C:\inetpub\wwwroot\App_Data\packages.
        
    .NOTES
        Original "Unused media items" report (/sitecore/system/Modules/PowerShell/Script Library/SPE/Reporting/Content Reports/Reports/Media Audit/Unused media items) written by Michael West.
        Additional parameters, filtering, content package creation, etc. written by Nick Sturdivant.

        This script requires that Sitecore PowerShell Extensions be installed.
#>

$reportName = "Export Media Items"

$extensionOptions = [ordered]@{ "bmp" = "bmp"; "gif" = "gif"; "jpg" = "jpg"; "jpeg" = "jpeg"; "pdf" = "pdf"; "png" = "png"; "svg" = "svg"; }
$maxPackageSizeOptions = [ordered]@{ "5 MB" = 5000000; "10 MB" = 10000000; "25 MB" = 25000000; "50 MB" = 50000000; "100 MB" = 100000000; "250 MB" = 250000000 }
$usedMediaItemOptions = [ordered]@{ "Both" = "both"; "Used" = "used"; "Unused" = "unused" }

$props = @{
    Parameters  = @(
        @{
            Name    = "usedMediaMode"
            Title   = "Media to Include"
            Tooltip = "Determines if the script processes used, unused, or both used and unused media items (where ""used"" is defined as having at least one entry in the link database)."
            Value   = "used"
            Options = $usedMediaItemOptions 
        }
        @{
            Name    = "selectedMediaFolders"
            Title   = "Media Library Folders"
            Tooltip = "The media library folders from which to include items."
            Value   = @()
            Editor  = "treelist" 
        }
        @{
            Name    = "selectedExtensions"
            Title   = "Extensions to Include"
            Tooltip = "The file extension(s) for the media items to process and include in the package(s)."
            Value   = @()
            Options = $extensionOptions
            Editor  = "check" 
        }
        @{
            Name    = "cutoffDate"
            Title   = "Cutoff Date"
            Tooltip = "If set, causes the script to only process media items that were created or updated after this date."
            Value   = [datetime]::MinValue
            Editor  = "date" 
        }
        @{
            Name    = "selectedMaxPackageSize"
            Title   = "Maximum Package Size"
            Tooltip = "The maximum size package the script will generate. If the total size of the media items to be packaged exceeds this limit, then multiple packages are created until all items have been packaged."
            Value   = 25000000
            Options = $maxPackageSizeOptions 
        }
        @{
            Name    = "excludeSystemFolders"
            Title   = "Exclude System Folders"
            Tooltip = "If checked, any media items with ""/System/"" anywhere in their path are ignored."
            Value   = $true
            Editor  = "check" 
        }
        @{
            Name    = "verboseOutput"
            Title   = "Verbose Console Output"
            Tooltip = "If checked, additional output will be written to the console."
            Value   = $true
            Editor  = "check" 
        }
        @{
            Name    = "debugMode"
            Title   = "Debug Mode"
            Tooltip = "If checked, no packages will be saved to disk."
            Value   = $true
            Editor  = "check" 
        }
    )
    Title       = " $reportName"
    Icon        = "OfficeWhite/32x32/box_into.png"
    Description = "This script queries for used and/or unused (referenced) media items and generates content packages containing those items."
    Width       = 600
    ShowHints   = $true
}

$result = Read-Variable @props

$items = @()
$itemsReport = @()
$timestamp = (Get-Date -Format FileDateTimeUniversal)

if ($result -eq "cancel") {
    exit
}

function HasReference {
    param(
        $Item
    )
    
    $linkDb = [Sitecore.Globals]::LinkDatabase
    $linkDb.GetReferrerCount($Item) -gt 0
}

function Get-MediaItemWithReference {
    param(
        [string]$Path,
        [string[]]$Extensions
    )
    
    $mediaItemContainer = Get-Item ("master:" + $Path)
    $excludedTemplates = @([Sitecore.TemplateIDs]::MediaFolder, [Sitecore.TemplateIDs]::Node)
    $items = $mediaItemContainer.Axes.GetDescendants() | 
    Where-Object { $excludedTemplates -notcontains $_.TemplateID } | Initialize-Item | 
    Where-Object { -not $excludeSystemFolders -or ( -not ($_.FullPath -like "*/System/*") ) } |
    Where-Object { $cutoffDate -eq [datetime]::MinValue -or ( $_.__Created -gt $cutoffDate -or $_.__Updated -gt $cutoffDate ) } |
    Where-Object { $Extensions.Count -eq 0 -or $Extensions -contains $_.Fields["Extension"].Value }
    
    # filter based on usage (links)
    foreach ($item in $items) {
        if ($usedMediaMode -eq "both") {
            $item
        }
        if ($usedMediaMode -eq "used" -and (HasReference -Item $item)) {
            $item
        }
        if ($usedMediaMode -eq "unused" -and (-not (HasReference -Item $item))) {
            $item
        }
    }
}

function Build-Package {
    param(
        [Sitecore.Data.Items.Item[]]$Items,
        [int]$Size,
        [int]$PackageNumber,
        [ref]$ItemsReport
    )    
    
    if ($verboseOutput) {
        Write-Host ""
        Write-Host "Building package $PackageNumber..." -ForegroundColor Green
        Write-Host "Total items: $($Items.Count)" -ForegroundColor Green
        Write-Host "Total size: $Size bytes" -ForegroundColor Green
        Write-Host ""
    }
    
    $package = New-Package -Name "Export Media Items"
    $package.Sources.Clear()
    $package.Metadata.Author = "SPE"
    $package.Metadata.Version = $timestamp
    $package.Metadata.Readme = "A package containing media items; generated by a Sitecore PowerShell Extensions script."
    
    $packageZipFileName = "$( $package.Metadata.Version ) - $( $package.Name ) $PackageNumber.zip"
    
    foreach ($itemToPackage in $Items) {
        if ($verboseOutput) {
            Write-Host "`t+ $($itemToPackage.FullPath) ($($itemToPackage.Fields["Size"].Value -as [int]) bytes)" -ForegroundColor Magenta
        }
        $source = Get-Item $itemToPackage.FullPath | New-ExplicitItemSource -Name "$($itemToPackage.ID)" -InstallMode Overwrite
        $package.Sources.Add($source)
        
        $ItemsReport.Value += @{
            ID       = $itemToPackage.ID
            FullPath = $itemToPackage.FullPath
            Package  = $packageZipFileName
        }
    }

    if (-not $debugMode) {
        Export-Package -Project $package -Path $packageZipFileName -Zip
    }
}

foreach ($selectedMediaFolder in $selectedMediaFolders) {
    # ensure selected media folder is the media library itself or a folder within the media library
    if ($selectedMediaFolder.FullPath -ne "/sitecore/media library" -and $selectedMediaFolder.TemplateID -ne [Sitecore.TemplateIDs]::MediaFolder) {
        Write-Host "Selected folder $($selectedMediaFolder.FullPath) is neither the media library root nor a media folder within the media library and will be ignored." -ForegroundColor Yellow
        continue
    }
    
    $itemsFromPath = Get-MediaItemWithReference -Path $selectedMediaFolder.FullPath -Extensions $selectedExtensions
    
    # prevent duplicate items if overlapping media folders are selected
    foreach ($itemFromPath in $itemsFromPath) {
        $existingItem = $items | Where-Object { $_.ID -eq $itemFromPath.ID }
        if ($null -eq $existingItem) {
            $items += $itemFromPath
        }
    }
}

if ($items.Count -eq 0) {
    Show-Alert "There are no media items matching the specified parameters."
}
else {
    Write-Host "Total media items to be processed and packaged: $($items.Count)" -ForegroundColor Cyan

    $packageSize = 0
    $itemsInPackage = @()
    $itemsProcessed = 0
    $packageCount = 0
    
    foreach ($itemToPackage in $items) {
        $itemsInPackage += $itemToPackage
        $packageSize += $itemToPackage.Fields["Size"].Value -as [int]
        $itemsProcessed++
        
        if ($packageSize -ge $selectedMaxPackageSize -or $itemsProcessed -eq $items.Count) {
            
            $packageCount++

            Build-Package -Items $itemsInPackage -Size $packageSize -PackageNumber $packageCount -ItemsReport ([ref]$itemsReport)
            
            $packageSize = 0
            $itemsInPackage = @()
        }
    }
    
    # report output
    $mediaToInclude = ""
    if ($usedMediaMode -eq "both") {
        $mediaToInclude = "Both (used and unused)"
    } else {
        $mediaToInclude = ($usedMediaMode.Substring(0, 1).ToUpper() + $usedMediaMode.Substring(1))
    }
    $mediaFolders = ""
    $selectedMediaFolders | ForEach-Object { $mediaFolders += "<br/>&nbsp;&nbsp;- $($_.FullPath)" }
    $extensions = $selectedExtensions -join ", "
    if ($extensions -eq "") {
        $extensions = "(all)"
    }
    
    $infoDescription = "List of the media items matching the specified criteria that are contained within the generated content packages.<br/><br/>" +
    "Media to Include: $mediaToInclude<br/>" + 
    "Media Library Folders: $mediaFolders<br/>" +
    "Extensions to Include: $extensions<br/>" + 
    "Cutoff Date: "
    if ($cutoffDate -eq [datetime]::MinValue) {
        $infoDescription += "(none)<br/>"
    }
    else {
        $infoDescription += "$($cutoffDate.ToShortDateString())<br/>"
    }
    $infoDescription += "Maximum Package Size: $($selectedMaxPackageSize / 1000000) MB<br/>" +
    "Exclude System Folders: $excludeSystemFolders<br/>" +
    "Verbose: $verboseOutput<br/>" + 
    "Debug: $debugMode"

    $reportProps = @{
        InfoTitle       = $reportName
        InfoDescription = $infoDescription
        PageSize        = 25
        Title           = $reportName
    }
    
    Write-Host ""
    Write-Host "Finished! 🎉" -ForegroundColor Cyan

    if ($verboseOutput) {
        Write-Host ""
        $itemsReport | 
        ForEach-Object { $_.Package } | 
        Select-Object -Unique | 
        ForEach-Object { Write-Host ($SitecorePackageFolder + "\" + $_) -ForegroundColor Gray }
    }

    # display report output
    $itemsReport |
    Show-ListView @reportProps -Property @{ Label = "ID"; Expression = { $_.ID } },
    @{Label = "Full Path"; Expression = { $_.FullPath } },
    @{Label = "Package"; Expression = { $_.Package } }
}

(📝 Note: the code snippet above may not be kept up-to-date with the Gist over time)

]]>
https://blogs.perficient.com/2024/03/25/exporting-media-items-with-spe/feed/ 0 359781