Getting Started with Python for Automation
https://blogs.perficient.com/2025/12/09/getting-started-with-python-for-automation/ (Tue, 09 Dec 2025)

Automation has become a core part of modern work, allowing teams to reduce repetitive tasks, save time, and improve accuracy. Whether it’s generating weekly reports, organizing files, processing large amounts of data, or interacting with web applications, automation helps individuals and companies operate more efficiently. Among all programming languages used for automation, Python is one of the most widely adopted because of its simplicity and flexibility. 

Why Python Is Perfect for Automation 

Python is known for having a clean and readable syntax, which makes it easy for beginners to start writing scripts without needing deep programming knowledge. The language is simple enough for non-developers, yet powerful enough for complex automation tasks. Another major advantage is the availability of thousands of libraries. These libraries allow Python to handle file operations, manage Excel sheets, interact with APIs, scrape websites, schedule tasks, and even control web browsers – all with minimal code. Because of this, Python becomes a single tool capable of automating almost any repetitive digital task. 

What You Can Automate with Python 

Python can automate everyday tasks that would otherwise require significant manual effort. Simple tasks like renaming multiple files, organizing folders, or converting file formats can be completed instantly using small scripts. It is also commonly used for automating Excel-based workflows, such as cleaning datasets, merging sheets, generating monthly summaries, or transforming data between formats. Python is equally powerful for web-related automation: collecting data from websites, making API calls, sending automated emails, downloading content, and filling out online forms. For more advanced uses, Python can also automate browser testing, server monitoring, and deployment processes. 

Setting Up Your Python Automation Environment 

Getting started is straightforward. After installing Python, you can use an editor like VS Code or PyCharm to write your scripts. Libraries required for automation can be installed using a single command, making setup simple. Once you have your environment ready, writing your first script usually takes only a few minutes. For example, a short script can rename files in a folder, send an email, or run a function at a specific time of the day. Python’s structure is beginner-friendly, so even basic programming knowledge is enough to start automating everyday tasks. 
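
For instance, installing a third-party library is a single command such as pip install openpyxl, and a first script can be just a few lines. The sketch below is a minimal illustration (the folder is a placeholder): it scans a directory and summarizes the files it contains, which is often the starting point for file-organization tasks.

from pathlib import Path

# Hypothetical folder to inspect; replace with any directory you want to scan.
target_folder = Path.home() / "Downloads"

# Count files grouped by extension and print a small summary.
counts = {}
for item in target_folder.iterdir():
    if item.is_file():
        extension = item.suffix.lower() or "(no extension)"
        counts[extension] = counts.get(extension, 0) + 1

for extension, count in sorted(counts.items()):
    print(f"{extension}: {count} file(s)")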

Examples of Simple Automation 

A typical example is a script that automatically renames files. Instead of renaming hundreds of files one by one, Python can loop through the folder and rename them instantly. Another example is an automated email script that can send daily reminders or reports. Python can also schedule tasks so that your code runs every morning, every hour, or at any time you choose. These examples show how even small scripts can add real value to your workflow by reducing repetitive manual tasks. 
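
As a rough sketch of the first two examples, the script below renames every CSV file in a folder and sends a reminder email using only the standard library; the folder path, SMTP host, addresses, and credentials are placeholders you would replace with your own.

import smtplib
from email.message import EmailMessage
from pathlib import Path

def rename_reports(folder: str, prefix: str = "report") -> None:
    """Rename every .csv file in the folder to a numbered, prefixed name."""
    for index, file in enumerate(sorted(Path(folder).glob("*.csv")), start=1):
        file.rename(file.with_name(f"{prefix}_{index:03d}.csv"))

def send_reminder(subject: str, body: str) -> None:
    """Send a plain-text email; the server and credentials below are placeholders."""
    message = EmailMessage()
    message["From"] = "automation@example.com"
    message["To"] = "team@example.com"
    message["Subject"] = subject
    message.set_content(body)

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("automation@example.com", "app-password")  # placeholder credentials
        server.send_message(message)

if __name__ == "__main__":
    rename_reports("./reports")
    send_reminder("Daily reminder", "The report files have been renamed and are ready.")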

Best Practices When Building Automation 

As you begin writing automation scripts, it helps to keep the code organized and reliable. Using virtual environments ensures that your project libraries remain clean. Adding error-handling prevents scripts from stopping unexpectedly. Logging enables you to track what your script does and when it executes. Once your automation is ready, you can run it automatically using tools like Task Scheduler on Windows or cron on Linux, so the script works in the background without your involvement. 
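
A minimal sketch of these practices is shown below (the folder and log-file names are arbitrary): the task logs what it does and catches failures instead of crashing, so it can safely be scheduled with Task Scheduler or a cron entry such as 0 7 * * * python automation.py.

import logging
from pathlib import Path

logging.basicConfig(
    filename="automation.log",          # arbitrary log file name
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def process_folder(folder: str) -> None:
    """Example task: count items in a folder; real logic would go here."""
    files = list(Path(folder).iterdir())
    logging.info("Processed %d items in %s", len(files), folder)

def main() -> None:
    try:
        process_folder("./incoming")    # placeholder folder
    except Exception:
        # Log the full traceback instead of letting the script die silently.
        logging.exception("Automation run failed")
    else:
        logging.info("Automation run completed successfully")

if __name__ == "__main__":
    main()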

How Companies Use Python Automation 

Python automation is widely used across industries. IT teams rely on it to monitor servers, restart services, and handle deployment tasks. Business teams use it to generate reports, clean data, update dashboards, and manage document workflows. Marketing teams use automation for scraping competitor information, scheduling social media posts, or tracking engagement. For developers, Python helps with testing, error checking, and system integration via APIs. Across all these areas, automation improves efficiency and reduces human error. 

Conclusion 

Python is an excellent starting point for anyone who wants to begin automating daily tasks. Its simplicity, combined with its powerful ecosystem of libraries, makes it accessible to beginners and useful for professionals. Even basic automation scripts can save hours of work, and as you grow more comfortable, you can automate more complex processes involving data, web interactions, and system management. Learning Python for automation not only makes your work easier but also adds valuable skills for professional growth. 

 

Creators in Coding, Copycats in Class: The Double-Edged Sword of Artificial Intelligence
https://blogs.perficient.com/2025/12/03/creators-in-coding-copycats-in-class-the-double-edged-sword-of-artificial-intelligence/ (Thu, 04 Dec 2025)

“Powerful technologies require equally powerful ethical guidance.” (Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014).

The ethics of using artificial intelligence depend on how we apply its capabilities—either to enhance learning or to prevent irresponsible practices that may compromise academic integrity. In this blog, I share reflections, experiences, and insights about the impact of AI in our environment, analyzing its role as a creative tool in the hands of developers and as a challenge within the academic context.

Between industry and the classroom

As a Senior Developer, my professional trajectory has led me to delve deeply into the fascinating discipline of software architecture. Currently, I work as a Backend Developer specializing in Microsoft technologies, facing daily the challenges of building robust, scalable, and well-structured systems in the business world.

Alongside my role in the industry, I am privileged to serve as a university professor, teaching four courses. Three of them are fundamental parts of the software development lifecycle: Software Analysis and Design, Software Architecture, and Programming Techniques. This dual perspective—as both a professional and a teacher—has allowed me to observe the rapid changes that technology is generating both in daily development practice and in the formation of future engineers.

Exploring AI as an Accelerator in Software Development

One of the greatest challenges for those studying the software development lifecycle is transforming ideas and diagrams into functional, well-structured projects. I always encourage my students to use Artificial Intelligence as a tool for acceleration, not as a substitute.

For example, in the Software Analysis and Design course, we demonstrate how a BPMN 2.0 process diagram can serve as a starting point for modeling a system. We also work with class diagrams that reflect compositions and various design patterns. AI can intervene in this process in several ways:

  • Code Generation from Models: With AI-based tools, it’s possible to automatically turn a well-built class diagram into the source code foundation needed to start a project, respecting the relationships and patterns defined during modeling.
  • Rapid Project Architecture Setup: Using AI assistants, we can streamline the initial setup of a project by selecting the technology stack, creating folder structures, base files, and configurations according to best practices.
  • Early Validation and Correction: AI can suggest improvements to proposed models, detect inconsistencies, foresee integration issues, and help adapt the design context even before coding begins.

This approach allows students to dedicate more time to understanding the logic behind each component and design principle, instead of spending hours on repetitive setup and basic coding tasks. The conscious and critical use of artificial intelligence strengthens their learning, provides them with more time to innovate, and helps prepare them for real-world industry challenges.

But Not Everything Is Perfect: The Challenges in Programming Techniques

However, not everything is as positive as it seems. In “Programming Techniques,” a course that represents students’ first real contact with application development, the impact of AI is different compared to more advanced subjects. In the past, the repetitive process of writing code—such as creating a simple constructor public Person(), a function public void printFullName() or practicing encapsulation in Java with methods like public void setName(String name) and public String getName()—kept the fundamental programming concepts fresh and clear while coding.

This repetition was not just mechanical; it reinforced their understanding of concepts like object construction, data encapsulation, and procedural logic. It also played a crucial role in developing a solid foundation that made it easier to understand more complex topics, such as design patterns, in future courses.

Nowadays, with the widespread availability and use of AI-based tools and code generators, students tend to skip these fundamental steps. Instead of internalizing these concepts through practice, they quickly generate code snippets without fully understanding their structure or purpose. As a result, the pillars of programming—such as abstraction, encapsulation, inheritance, and polymorphism—are not deeply absorbed, which can lead to confusion and mistakes later on.

Although AI offers the promise of accelerating development and reducing manual labor, it is important to remember that certain repetition and manual coding are essential for establishing a solid understanding of fundamental principles. Without this foundation, it becomes difficult for students to recognize bad practices, avoid common errors, and truly appreciate the architecture and design of robust software systems.

Reflection and Ethical Challenges in Using AI

Recently, I explained the concept of reflection in microservices to my Software Architecture students. To illustrate this, I used the following example: when implementing the Abstract Factory design pattern within a microservices architecture, the Reflection technique can be used to dynamically instantiate concrete classes at runtime. This allows the factory to decide which object to create based on external parameters, such as a message type or specific configuration received from another service. I consider this concept fundamental if we aim to design an architecture suitable for business models that require this level of flexibility.
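
The classroom example uses Java reflection, but the underlying idea (a factory that resolves and instantiates a concrete class at runtime based on an external parameter) can be sketched in a few lines of Python; the notifier classes and message types below are invented purely for illustration.

import importlib

class EmailNotifier:
    def send(self, payload: str) -> str:
        return f"email sent: {payload}"

class SmsNotifier:
    def send(self, payload: str) -> str:
        return f"sms sent: {payload}"

def create_notifier(type_name: str):
    """Resolve and instantiate a concrete class at runtime, as reflection does in Java."""
    registry = {"email": "EmailNotifier", "sms": "SmsNotifier"}
    class_name = registry[type_name]                # e.g. taken from a message header
    module = importlib.import_module(__name__)      # this module stands in for a real package
    concrete_class = getattr(module, class_name)    # dynamic lookup, no hard-coded constructor call
    return concrete_class()

notifier = create_notifier("sms")                   # the external parameter drives the choice
print(notifier.send("order shipped"))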

However, during a classroom exercise where I provided a base code, I asked the students to correct an error that I had deliberately injected. The error consisted of an additional parameter in a constructor—a detail that did not cause compilation failures, but at runtime, it caused 2 out of 5 microservices that consumed the abstract factory via reflection to fail. From their perspective, this exercise may have seemed unnecessary, which led many to ask AI to fix the error.

As expected, the AI efficiently eliminated the error but overlooked a fundamental acceptance criterion: that parameter was necessary for the correct functioning of the solution. The task was not to remove the parameter but to add it in the Factory classes where it was missing. Out of 36 students, only 3 were able to explain and justify the changes they made. The rest did not even know what modifications the AI had implemented.

This experience highlights the double-edged nature of artificial intelligence in learning: it can provide quick solutions, but if the context or the criteria behind a problem are not understood, the correction can be superficial and jeopardize both the quality and the deep understanding of the code.

I haven’t limited this exercise to architecture examples alone. I have also conducted mock interviews, asking about basic programming concepts. Surprisingly, even among final-year students who are already doing their internships, the success rate is alarmingly low: approximately 65% to 70% of the questions are answered incorrectly, which would automatically disqualify them in a real technical interview.

Conclusion

Artificial intelligence has become increasingly integrated into academia, yet its use does not always reflect a genuine desire to learn. For many students, AI has turned into a tool for simply getting through academic commitments, rather than an ally that fosters knowledge, creativity, and critical thinking. This trend presents clear risks: a loss of deep understanding, unreflective automation of tasks, and a lack of internalization of fundamental concepts—all crucial for professional growth in technological fields.

Various authors have analyzed the impact of AI on educational processes and emphasize the importance of promoting its ethical and constructive use. As Luckin et al. (2016) suggest, the key lies in integrating artificial intelligence as support for skill development rather than as a shortcut to avoid intellectual effort. Similarly, Selwyn (2019) explores the ethical and pedagogical challenges that arise when technology becomes a quick fix instead of a resource for deep learning.

References:

  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  • Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence Unleashed: An Argument for AI in Education. Pearson.
  • Selwyn, N. (2019). Should Robots Replace Teachers? AI and the Future of Education. Polity Press.

Aligning Your Requirements with the Sitecore Ecosystem
https://blogs.perficient.com/2025/11/07/sitecore-dxp-products-and-ecosystem/ (Fri, 07 Nov 2025)

In my previous blogs, I outlined key considerations for planning a Sitecore migration and shared strategies for executing it effectively. The next critical step is to understand how your business and technical requirements align with the broader Sitecore ecosystem.
Before making recommendations to a customer, it’s essential to map your goals (content management, personalization, multi-site delivery, analytics, and future scalability) onto Sitecore’s composable and cloud-native offerings. This ensures that migration and implementation decisions are not only feasible but also optimized for long-term value.
To revisit the foundational steps and execution strategies, check out these two helpful resources:
•  Planning Sitecore Migration: Things to Consider
•  Executing a Sitecore Migration: Development, Performance, and Beyond

Sitecore is not just a CMS; it’s a comprehensive digital experience platform.
Before making recommendations to a customer, it’s crucial to clearly define what is truly needed and to have a deep understanding of how powerful Sitecore is. Its Digital Experience Platform (DXP) capabilities (personalization, marketing automation, and analytics), combined with cloud-native SaaS delivery, enable organizations to scale efficiently, innovate rapidly, and deliver highly engaging digital experiences.
By carefully aligning customer requirements with these capabilities, you can design solutions that not only meet technical and business needs but also maximize ROI, streamline operations, and deliver long-term value.

In this blog, I’ll summarize Sitecore’s Digital Experience Platform (DXP) offerings to explore how each can be effectively utilized to meet evolving business and technical needs.

1. Sitecore XM Cloud

Sitecore Experience Manager Cloud (XM Cloud) is a cloud-native, SaaS, hybrid headless CMS designed to help businesses create and deliver personalized, multi-channel digital experiences across websites and applications. It combines the flexibility of modern headless architecture with robust authoring tools, enabling teams to strike a balance between developer agility and marketer control.

Key Capabilities

  • Cloud-native: XM Cloud is built for the cloud, providing a secure, reliable, scalable, and enterprise-ready system. Its architecture ensures high availability and global reach without the complexity of traditional on-premises systems.
  • SaaS Delivery: Sitecore hosts, maintains, and updates XM Cloud regularly. Organizations benefit from automatic updates, new features, and security enhancements without the need for costly installations or manual upgrades. This ensures that teams always work with the latest technologies while reducing operational overhead.
  • Hybrid Headless: XM Cloud separates content and presentation, enabling developers to build custom front-end experiences using modern frameworks, while marketers utilize visual editing tools like the Page Builder to make real-time changes. This allows routine updates to be handled without developer intervention, maintaining speed and agility.
  • Developer Productivity: Developers can model content with data templates, design reusable components, and assign content through data sources. Sitecore offers SDKs like the Content SDK for building personalized Next.js apps, the ASP.NET Core SDK for .NET integrations, and the Cloud SDK for extending DXP capabilities into Content SDK and JSS applications connected to XM Cloud. Starter kits are provided for setting up the code base.
  • Global Content Delivery: With Experience Edge, XM Cloud provides scalable GraphQL endpoints to deliver content rapidly across geographies, ensuring consistent user experiences worldwide.
  • Extensibility & AI Integration: XM Cloud integrates with apps from the Sitecore Marketplace and leverages Sitecore Stream for advanced AI-powered content generation and optimization. This accelerates content creation while maintaining brand consistency.
  • Continuous Updates & Security: XM Cloud includes multiple interfaces, such as Portal, Deploy, Page Builder, Explorer, Forms, and Analytics, which are regularly updated. The Deploy app is used to deploy projects to XM Cloud.

XM Cloud is ideal for organizations seeking a scalable, flexible, and future-proof content platform, allowing teams to focus on delivering compelling digital experiences rather than managing infrastructure.

2. Experience Platform (XP)

Sitecore Experience Platform (XP) is like an all-in-one powerhouse—it’s a complete box packed with everything you need for delivering personalized, data-driven digital experiences. While Experience Management (XM) handles content delivery, XP adds layers of personalization, marketing automation, and deep analytics, ensuring every interaction is contextually relevant and optimized for each visitor.

Key Capabilities

  • Content Creation & Management: The Content Editor and Experience Editor allow marketers and content authors to create, structure, and manage website content efficiently, supporting collaboration across teams.
  • Digital Marketing Tools: Built-in marketing tools enable the creation and management of campaigns, automating triggers and workflows to deliver personalized experiences across multiple channels.
  • Experience Analytics: XP provides detailed insights into website performance, visitor behavior, and campaign effectiveness. This includes metrics like page performance, conversions, and user engagement patterns.
  • Experience Optimization: Using analytics data, XP allows you to refine content and campaigns to achieve better results. A/B testing and multivariate testing help determine the most effective variations.
  • Path Analyzer: This tool enables you to analyze how visitors navigate through your site, helping you identify bottlenecks, drop-offs, and opportunities to enhance the user experience.

By combining these capabilities, XP bridges content and marketing intelligence, enabling teams to deliver data-driven, personalized experiences while continuously refining and improving digital engagement.

3. Sitecore Content Hub

Sitecore Content Hub unifies content planning, creation, curation, and asset management into a single platform, enabling teams to collaborate efficiently and maintain control across the entire content lifecycle and digital channels.

Key Capabilities

  • Digital Asset Management (DAM): Content Hub organizes and manages images, videos, documents, and other digital assets. Assets can be tagged, annotated, searched, and shared efficiently, supporting teams in building engaging experiences without losing control over asset usage or consistency.
  • Campaign & Content Planning: Teams can plan campaigns, manage editorial calendars, and assign tasks to ensure smooth collaboration between marketing, creative, and operational teams. Structured workflows enforce version control, approvals, and accountability, ensuring that content moves systematically to the end user.
  • AI-Powered Enhancements: Advanced AI capabilities accelerate content operations. These intelligent features reduce manual effort, increase productivity, and help teams maintain brand consistency at scale.
  • Microservice Architecture, Integration & Multi-Channel Delivery: Content Hub is built on a microservice-based architecture, allowing flexible integration with external systems, headless CMS, and cloud development pipelines. Developers can extend capabilities or connect Content Hub to other platforms without disrupting core operations. Content Hub ensures that teams can deliver consistent, high-quality experiences across websites, social media, commerce, and other digital channels.

Sitecore Content Hub empowers organizations to manage content as a strategic asset, streamlining operations, enabling global collaboration, and providing the technical flexibility developers need to build integrated, scalable solutions.

4. Sitecore Customer Data Platform (CDP)

Sitecore Customer Data Platform (CDP) enables organizations to collect customer data across all digital channels, providing a single, unified view of every user. By centralizing behavioral and transactional data, CDP allows businesses to deliver personalized experiences and data-driven marketing at scale.

Key Capabilities

  • Real-Time Data Collection: The Stream API captures live behavioral and transactional data from your applications and sends it to Sitecore CDP in real time. This ensures that customer profiles are always up-to-date and that personalization can be applied dynamically as users interact with your digital properties.
  • Batch Data Upload: For larger datasets, including guest data or offline orders, the Batch API efficiently uploads bulk information into CDP, keeping your customer data repository comprehensive and synchronized.
  • CRUD Operations: Sitecore CDP offers REST APIs for retrieving, creating, updating, and deleting customer data. This enables developers to integrate external systems, enrich profiles, or synchronize data between multiple platforms with ease.
  • Data Lake Export: With the Data Lake Export Service, all organizational data can be accessed from Amazon S3, allowing it to be downloaded locally or transferred to another S3 bucket for analysis, reporting, or integration with external systems.
  • SDK Integrations (Cloud SDK & Engage SDK): Developers can leverage Sitecore’s Cloud SDK and Engage SDK to streamline data collection, manage user information, and integrate CDP capabilities directly into applications. These SDKs simplify the process of connecting applications to XM Cloud and other services to CDP, enabling real-time engagement and seamless data synchronization.

Sitecore CDP captures behavioral and transactional interactions across channels, creating a unified, real-time profile for each customer. These profiles can be used for advanced segmentation, targeting, and personalization, which in turn informs marketing strategies and customer engagement initiatives.
By integrating CDP with other components of the Sitecore ecosystem, such as the DXP, XM Cloud, and Content Hub, organizations can efficiently orchestrate personalized, data-driven experiences across websites, apps, and other digital touchpoints.

5. Sitecore Personalize

Sitecore Personalize enables organizations to deliver seamless, consistent, and highly relevant experiences across websites, mobile apps, and other digital channels. By leveraging real-time customer data, predictive insights, and AI-driven decisioning, it ensures that the right content, offers, and messages get delivered to the target customer/audience.

Key Capabilities

  • Personalized Experiences: Deliver tailored content and offers based on real-time user behavior, predictive analytics, and unified customer profiles. Personalization can be applied across web interactions, server-side experiences, and triggered channels, such as email or SMS, ensuring every interaction is timely and relevant.
  • Testing and Optimization: Conduct A/B/n tests and evaluate which variations perform best based on actual customer behavior. This enables continuous optimization of content, campaigns, and personalization strategies.
  • Performance Analytics: Track user interactions and measure campaign outcomes to gain actionable insights. Analytics support data-driven refinement of personalization, ensuring experiences remain effective and relevant.
  • Experiences and Experiments: Helps to create a tailored experience for each user depending on interaction and any other relevant user data.
  • AI-Driven Assistance: The built-in Code Assistant can turn natural language prompts into JavaScript, allowing developers to quickly create custom conditions, session traits, and programmable personalization scenarios without writing code from scratch.

By combining real-time data from CDP, content from XM Cloud and Content Hub, and AI-driven decisioning, Sitecore Personalize allows organizations to orchestrate truly unified, intelligent, and adaptive customer experiences. This empowers marketers and developers to respond dynamically to signals, test strategies, and deliver interactions that drive engagement and value, along with a unique experience for users.

6. Sitecore Send

Sitecore Send is a cloud-based email marketing platform that enables organizations to create, manage, and optimize email campaigns. By combining automation, advanced analytics, and AI-driven capabilities, marketing teams can design, execute, and optimize email campaigns efficiently without relying heavily on IT support.

Key Capabilities

  • Campaign Creation & Management: Sitecore Send offers a no-code campaign editor that enables users to design campaigns through drag-and-drop and pre-built templates. Marketers can create campaigns quickly, trigger messages automatically, and also perform batch sends.
  • A/B Testing & Optimization: Campaigns can be A/B tested to determine which version resonates best with the target audience, helping improve open rates, click-through rates, and overall engagement.
  • AI-Powered Insights: Built-in AI capabilities help optimize send times, segment audiences, and predict engagement trends, ensuring messages are timely, relevant, and impactful.
  • API Integration: The Sitecore Send API enables developers to integrate email marketing functionality directly into applications. It supports tasks such as:
    • Creating and managing email lists
    • Sending campaigns programmatically
    • Retrieving real-time analytics
    • Automating repetitive tasks
    This API-driven approach allows teams to streamline operations, accelerate campaign delivery, and leverage programmatic control over their marketing initiatives.

Sitecore Send integrates seamlessly with the broader Sitecore ecosystem, using real-time data from CDP and leveraging content from XM Cloud or Content Hub. Combined with personalization capabilities, it ensures that email communications are targeted, dynamic, and aligned with overall customer experience strategies.
By centralizing email marketing and providing programmatic access, Sitecore Send empowers organizations to deliver scalable, data-driven campaigns while maintaining full control over creative execution and performance tracking.

7. Sitecore Search

Sitecore Search is a headless search and discovery platform that delivers fast, relevant, and personalized results across content and products. It enables organizations to create predictive, AI-powered, intent-driven experiences that drive engagement, conversions, and deeper customer insights.

Key Capabilities

  • Personalized Search & Recommendations: Uses visitor interaction tracking and AI/ML algorithms to deliver tailored search results and product/content recommendations in real time.
  • Headless Architecture: Decouples search and discovery from presentation, enabling seamless integration across websites, apps, and other digital channels.
  • Analytics & Optimization: Provides rich insights into visitor behavior, search performance, and business impact, allowing continuous improvement of search relevance and engagement.
  • AI & Machine Learning Core: Sophisticated algorithms analyze large datasets (visitor location, preferences, interactions, and purchase history) to deliver predictive, personalized experiences.

With Sitecore Search, organizations can provide highly relevant, omnichannel experiences powered by AI-driven insights and advanced analytics.

8. Sitecore Discover

Sitecore Discover is an AI-driven product discovery platform similar to Sitecore Search, but more product- and commerce-centric. It enables merchandisers and marketers to deliver personalized shopping experiences across websites and apps. By tracking user interactions, it generates targeted recommendations using AI recipes, such as “similar products” and “items bought together,” which helps increase engagement and conversions. Merchandisers can configure pages and widgets via the Customer Engagement Console (CEC) to create tailored, data-driven experiences without developer intervention.

Search vs. Discover

  • Sitecore Search: Broad content/product discovery, developer-driven, AI/ML-powered relevance, ideal for general omnichannel search. Optimized for content and product discovery.
  • Sitecore Discover: Commerce-focused product recommendations, merchandiser-controlled, AI-driven personalization for buying experiences. Optimized for commerce personalization and merchandising.

9. Sitecore Connect

Sitecore Connect is an integration tool that enables seamless connections between Sitecore products and other applications in your ecosystem, creating end-to-end, connected experiences for websites and users.

Key Capabilities

  • Architecture: Built around recipes and connectors, Sitecore Connect offers a flexible and scalable framework for integrations.
  • Recipes: Automated workflows that define triggers (events occurring in applications) and actions (tasks executed when specific events occur), enabling process automation across systems.
  • Connectors: Manage connectivity and interactivity between applications, enabling seamless data exchange and coordinated workflows without requiring complex custom coding.

With Sitecore Connect, organizations can orchestrate cross-system processes, synchronize data, and deliver seamless experiences across digital touchpoints, all while reducing manual effort and integration complexity.

10. OrderCloud

OrderCloud is a cloud-based, API-first, headless commerce and marketplace platform designed for B2B, B2C, and B2X scenarios. It provides a flexible, scalable, and fully customizable eCommerce architecture that supports complex business models and distributed operations.

Key Capabilities

  • Headless & API-First: Acts as the backbone of commerce operations, allowing businesses to build and connect multiple experiences, such as buyer storefronts, supplier portals, or admin dashboards, on top of a single commerce platform.
  • Customizable Commerce Solutions: Supports large and complex workflows beyond traditional shopping carts, enabling tailored solutions for distributed organizations.
  • Marketplace & Supply Chain Support: Facilitates selling across extended networks, including suppliers, franchises, and partners, while centralizing order management and commerce operations.

OrderCloud empowers organizations to scale commerce operations, extend digital selling capabilities, and create fully customized eCommerce experiences, all while leveraging a modern, API-first headless architecture.

Final Thoughts

Sitecore’s composable DXP products and its suite of SDKs empower organizations to build scalable, personalized, and future-ready digital experiences. By understanding how each component fits into your architecture and aligns with your business goals, you can make informed decisions that drive long-term value. Whether you’re modernizing legacy systems or starting fresh in the cloud, aligning your strategy with Sitecore’s capabilities ensures a smoother migration and a more impactful digital transformation.

Seamless Integration of DocuSign with Appian: A Step-by-Step Guide
https://blogs.perficient.com/2025/11/05/seamless-integration-of-docusign-with-appian-a-step-by-step-guide/ (Wed, 05 Nov 2025)

Introduction

In today’s digital-first business landscape, streamlining document workflows is essential for operational efficiency and compliance. DocuSign, a global leader in electronic signatures, offers secure and legally binding digital signing capabilities. When integrated with Appian, a powerful low-code automation platform, organizations can automate approval processes, reduce manual effort, and enhance document governance.

This guide walks you through the process of integrating DocuSign as a Connected System within Appian, enabling seamless eSignature workflows across your enterprise applications.

 

Why DocuSign?

DocuSign empowers organizations to manage agreements digitally with features that ensure security, compliance, and scalability.

Key Capabilities:

  • Legally Binding eSignatures compliant with ESIGN Act (U.S.), eIDAS (EU), and ISO 27001.
  • Workflow Automation for multi-step approval processes.
  • Audit Trails for full visibility into document activity.
  • Reusable Templates for standardized agreements.
  • Enterprise-Grade Security with encryption and access controls.
  • Pre-built Integrations with platforms like CRM, ERP, and BPM—including Appian.

Integration Overview

Appian’s native support for DocuSign as a Connected System simplifies integration, allowing developers to:

  • Send documents for signature
  • Track document status
  • Retrieve signed documents
  • Manage signers and templates

Prerequisites

Before starting, ensure you have:

  1. Appian Environment with admin access
  2. DocuSign Developer or Production Account
  3. API Credentials: Integration Key, Client Secret, and RSA Key

Step-by-Step Integration

Step 1: Register Your App in DocuSign

  1. Log in to the DocuSign Developer Portal
  2. Navigate to Apps and KeysAdd App
  3. Generate:
    • Integration Key
    • Secret Key
    • RSA Key
  4. Add your Appian environment’s Redirect URI:

https://<your-appian-environment>/suite/rest/authentication/callback

  5. Enable GET and POST methods and save changes.

Step 2: Configure OAuth in Appian

  1. In Appian’s Admin Console, go to Authentication → Web API Authentication
  2. Add DocuSign credentials under Appian OAuth 2.0 Clients
  3. Ensure all integration details match those from DocuSign

Step 3: Create DocuSign Connected System

  1. Open Appian DesignerConnected Systems
  2. Create a new system:
    • Type: DocuSign
    • Authentication: Authorization Code Grant
    • Client ID: DocuSign Integration Key
    • Client Secret: DocuSign Secret Key
    • Base URL:
      • Development: https://account-d.docusign.com
      • Production: https://account.docusign.com
  3. Click Test Connection to validate setup

Docusign Blog 1

Docusign Blog 2

Docusign Blog 3

Step 4: Build Integration Logic

  1. Go to IntegrationsNew Integration
  2. Select the DocuSign Connected System
  3. Configure actions:
    • Send envelope
    • Check envelope status
    • Retrieve signed documents
  4. Save and test the integration

Docusign Blog 4

Step 5: Embed Integration in Your Appian Application

  1. Add integration logic to Appian interfaces and process models
  2. Use forms to trigger DocuSign actions
  3. Monitor API usage and logs for performance and troubleshooting

Integration Opportunities

🔹 Legal Document Processing

Automate the signing of SLAs, MOUs, and compliance forms using DocuSign within Appian workflows. Ensure secure access, maintain version control, and simplify recurring agreements with reusable templates.

🔹 Finance Approvals

Digitize approvals for budgets, expenses, and disclosures. Route documents to multiple signers with conditional logic and securely store signed records for audit readiness.

🔹 Healthcare Consent Forms

Send consent forms electronically before appointments. Automatically link signed forms to patient records while ensuring HIPAA-compliant data handling.

Conclusion

Integrating DocuSign with Appian enables organizations to digitize and automate document workflows with minimal development effort. This powerful combination enhances compliance, accelerates approvals, and improves user experience across business processes.

For further details, refer to the official DocuSign and Appian documentation.

Spring Boot + OpenAI: A Developer’s Guide to Generative AI Integration
https://blogs.perficient.com/2025/10/27/spring-boot-openai-a-developers-guide-to-generative-ai-integration/ (Mon, 27 Oct 2025)

Introduction

In this blog, we’ll explore how to connect OpenAI’s API with a Spring Boot application, step by step.

We’ll cover the setup process and walk through the implementation with a practical example.

By integrating OpenAI with Spring Boot, you can create solutions that are not only powerful but also scalable and reliable.

Prerequisites

  • Java 17+
  • Maven
  • Spring Boot (3.x recommended)
  • OpenAI API Key (get it from platform.openai.com)
  • Basic knowledge of REST APIs

OpenAI’s platform documentation helps developers understand how to prompt models to generate meaningful text. It is essentially a cheat sheet for communicating with the AI so that it returns smart, useful answers to your prompts.

Implementation in Spring Boot

To integrate OpenAI’s GPT-4o-mini model into a Spring Boot application, we analyzed the structure of a typical curl request and response provided by OpenAI.

API docs reference:

https://platform.openai.com/docs/overview

https://docs.spring.io/spring-boot/index.html

Curl Request

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "assistant", "content": "Hello"},
      {"role": "user", "content": "Hi"}
    ]
  }'

Note-

“role”: “user” – Represents the end-user interacting with the assistant

“role”: “assistant” – Represents the assistant’s response.

The response generated by the model looks like this:

{
  "id": "chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT",
  "object": "chat.completion",
  "created": 1741569952,
  "model": "gpt-4o-mini-2025-04-14",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 19,
    "completion_tokens": 10,
    "total_tokens": 29,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default"
}

 

Controller Class:

In the snippet below, we explore a simple Spring Boot controller that interacts with OpenAI’s API. When the end user sends a prompt to the URL (e.g., /bot/chat?prompt=what is spring boot), the controller reads the model name and API URL from the application.properties file. It then creates a request from the provided prompt and sends it to OpenAI using a REST call (RestTemplate). After validating the request, OpenAI sends back a response.

@RestController
@RequestMapping("/bot")
public class GenAiController {

    @Value("${openai.model}")
    private String model;

    @Value(("${openai.api.url}"))
    private String apiURL;

    @Autowired
    private RestTemplate template;

    @GetMapping("/chat")
    public String chat(@RequestParam("prompt") String prompt) {
        GenAiRequest request = new GenAiRequest(model, prompt);
        System.out.println("Request: " + request );
        GenAIResponse genAIResponse = template.postForObject(apiURL, request, GenAIResponse.class);
        return genAIResponse.getChoices().get(0).getMessage().getContent();
    }
}


Configuration Class:

Annotated with @Configuration, this class defines beans and settings for the application context. The OpenAI API key is pulled from the properties file, and a customized RestTemplate is created and configured to include the Authorization: Bearer <API_KEY> header in all requests. This setup ensures that every call to OpenAI’s API is authenticated without manually adding headers to each request.

@Configuration
public class OpenAIAPIConfiguration {

    @Value("${openai.api.key}")
     private String openaiApiKey;

    @Bean
    public RestTemplate template(){
        RestTemplate restTemplate=new RestTemplate();
        restTemplate.getInterceptors().add((request, body, execution) -> {
            request.getHeaders().add("Authorization", "Bearer " + openaiApiKey);
            return execution.execute(request, body);
        });
        return restTemplate;
    }
    
}

Required getters and setters for the request and response classes:

Based on the curl structure and response, we generated the corresponding request and response Java classes with appropriate getters and setters, using selected attributes to represent the request and response objects. These classes help turn JSON data into objects we can use in code, and turn our code’s data back into JSON when interacting with the OpenAI API. We implemented a bot using the gpt-4o-mini model, integrated it with a REST controller, and handled authentication via the API key.

//Request
@Data
public class GenAiRequest {

    private String model;
    private List<GenAIMessage> messages;

    public List<GenAIMessage> getMessages() {
        return messages;
    }

    public GenAiRequest(String model, String prompt) {
        this.model = model;
        this.messages = new ArrayList<>();
        this.messages.add(new GenAIMessage("user",prompt));
    }
}

@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIMessage {

    private String role;
    private String content;   
    
    public String getContent() {
        return content;
    }
    public void setContent(String content) {
        this.content = content;
    }
}

//Response
@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIResponse {

    private List<Choice> choices;

    public List<Choice> getChoices() {
        return choices;
    }

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public static class Choice {

        private int index;
        private GenAIMessage message;
        public GenAIMessage getMessage() {
            return message;
        }
        public void setMessage(GenAIMessage message) {
            this.message = message;
        }

    }

}

 

Essential Configuration for OpenAI Integration in Spring Boot

To connect your Spring Boot application with OpenAI’s API, you need to define a few key properties in your application.properties or application.yml file:

  • server.port: Specifies the port on which your Spring Boot application will run. You can set it to any available port like 8080, 9090, etc. (The default port for a Spring Boot application is 8080)
  • openai.model: Defines the OpenAI model to be used. In this case, gpt-4o-mini is selected for lightweight and efficient responses.
  • openai.api.key: Your secret API key from OpenAI. This is used to authenticate requests. Make sure to keep it secure and never expose it publicly.
  • openai.api.url: The endpoint URL for OpenAI’s chat completion API. (This is where your application sends prompts and receives responses)
server.port=<add server port>
openai.model=gpt-4o-mini
openai.api.key=XXXXXXXXXXXXXXXXXXXXXXXXXXXX
openai.api.url=https://api.openai.com/v1/chat/completions

 

Postman Collection:

GET API: http://localhost:<port>/bot/chat?prompt=What is spring boot used for ?

Content-Type: application/json

Prompt

Usage of Spring Boot + OpenAI Integration

  • AI-Powered Chatbots: Build intelligent assistants for customer support, internal helpdesks, or onboarding systems.
  • Content Generation Tools: Automate blog writing, email drafting, product descriptions, or documentation, generate personalized content based on user input.
  • Code Assistance & Review: Create tools that help developers write, refactor, or review code using AI, Integrate with IDEs or CI/CD pipelines for smart suggestions.
  • Data Analysis & Insights: Use AI to interpret data, generate summaries, answer questions about datasets combine with Spring Boot APIs to serve insights to dashboards or reports.
  • Search Enhancement: Implement semantic search or question-answering systems over documents or databases, use embeddings and GPT to improve relevance and accuracy.
  • Learning & Training Platforms: Provide personalized tutoring, quizzes, and explanations using AI & adapt content based on user performance and feedback.
  • Email & Communication Automation: Draft, summarize, or translate emails and messages, integrate with enterprise communication tools.
  • Custom usages: In a business-to-business context, usage can be customized according to specific client requirements.
Perficient Wins Silver w3 Award for AI Utility Integration
https://blogs.perficient.com/2025/10/24/perficient-awarded-w3-award-for-ai-integration/ (Fri, 24 Oct 2025)

We’re proud to announce that we’ve been honored with a Silver w3 Award in the Emerging Tech Features – AI Utility Integration category for our work with a top 20 U.S. utility provider. This recognition from the Academy of Interactive and Visual Arts (AIVA) celebrates our commitment to delivering cutting-edge, AI-powered solutions that drive real-world impact in the energy and utilities sector.

“Winning this w3 Award speaks to our pragmatism–striking the right balance between automation capabilities and delivering true business outcomes through purposeful AI adoption,” said Mwandama Mutanuka, Managing Director of Perficient’s Intelligent Automation practice. “Our approach focuses on understanding the true cost of ownership, evaluating our clients’ existing automation tech stack, and building solutions with a strong business case to drive impactful transformation.”

Modernizing Operations with AI

The award-winning solution centered on the implementation of a ServiceNow Virtual Agent to streamline internal service desk operations for a major utility provider serving millions of homes and businesses across the United States. Faced with long wait times and a high volume of repetitive service requests, the client sought a solution that would enhance productivity, reduce costs, and improve employee satisfaction.

Our experts delivered a two-phase strategy that began with deploying an out-of-the-box virtual agent capable of handling low-complexity, high-volume requests. We then customized the solution using ServiceNow’s Conversational Interfaces module, tailoring it to the organization’s unique needs through data-driven topic recommendations and user behavior analysis. The result was an intuitive, AI-powered experience that allowed employees and contractors to self-serve common IT requests, freeing up service desk agents to focus on more complex work and significantly improving operational efficiency.

Driving Adoption Through Strategic Change Management

Adoption is the key to unlocking the full value of any technology investment. That’s why our team partnered closely with the client’s corporate communications team to launch a robust change management program. We created a branded identity for the virtual agent, developed engaging training materials, and hosted town halls to build awareness and excitement across the organization. This holistic approach ensured high engagement and a smooth rollout, setting the foundation for long-term success.

Looking Ahead

The w3 Award is a reflection of our continued dedication to innovation, collaboration, and excellence. As we look to the future, we remain committed to helping enterprises across industries harness the full power of AI to transform their operations. Explore the full success story to learn more about how we’re powering productivity with AI, and visit the w3 Awards Winners Gallery to see our recognition among the best in digital innovation.

For more information on how Perficient can help your business with integrated AI services, contact us today.

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 2
https://blogs.perficient.com/2025/10/03/transform-your-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-2/ (Fri, 03 Oct 2025)

Introduction:

Custom code in Talend offers a powerful way to enhance batch-processing efficiency by allowing developers to implement specialized logic that is not available through Talend’s standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files to meet specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Talend Components:

Key components for batch processing as mention below:

  • tDBConnection: Establishes and manages database connections within a job and allows a single connection to be configured and reused across the Talend job.
  • tFileInputDelimited: For reading data from flat files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Data transformation allows you to map input data with output data and enables you to perform data filtering, complex data manipulation, typecasting, and multiple input source joins.
  • tJavaRow: It can be used as an intermediate component, and we are able to access the input flow and transform the data using custom Java code.
  • tJava: It has no input or output data flow & can be used independently to Integrate custom Java code.
  • tPreJob, tPostJob: PreJob start the execution before the job & PostJob at the end of the job.
  • tDBOutput: Supports wide range of databases & used to write data to various databases.
  • tDBCommit: Commits and verifies the changes applied to a connected database during a Talend job, ensuring that the data changes are permanently recorded.
  • tDBClose: Explicitly closes a database connection that was opened by a tDBConnection component.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.
  • tDie: We can stop the job execution explicitly if it fails. In addition, we can create a customized warning message and exit code.

Workflow with example:

To process bulk data in Talend, we can implement batch processing so that flat-file data is handled with minimal execution time. We could read the flat-file data and insert it into a MySQL target table without batch processing, but that data flow would take considerably longer to execute. With batch processing driven by custom code, the entire source file is written to the MySQL database table in batches of records, keeping execution time to a minimum.

Talend Job Design


Solution:

  • Establish the database connection at the start of the execution so that it can be reused throughout the job.
  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and then divide the result by the batch size. Round the result up to the nearest whole number (so a final partial batch is not dropped); this gives the total number of batches (chunks) to process. A minimal sketch of this loop appears after this list.

    Calculate the batch size from total row count


  • Now use the tFileInputDelimited component to read the source file content. In the tMap component, use the sequence function to generate row numbers for the data as it is mapped. Then load all of the data into the tHashOutput component, which stores it in a cache.
  • Iterate the loop based on the calculated whole number using tLoop
  • Retrieve all the data from tHashInput component.
  • Filter the dataset retrieved from tHashInput component based on the rowNo column in the schema using tFilterRow

Filter the dataset using tFilterRow


  • If the first iteration is in progress and the batch size is 100, the rowNo range is 1 to 100; for the third iteration, the range is 201 to 300.
    In general, the range for an iteration is [(iteration - 1) * batchSize] + 1 through [iteration * batchSize]. For example, for iteration 3: (3 - 1) * 100 + 1 = 201 and 3 * 100 = 300, so the dataset range for the third iteration is 201 to 300.
  • Finally, extract the rows whose rowNo falls within that range and write the batch data to the MySQL database table using tDBOutput.
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows complex data transformations, and offers unique match, first match, and all matches options for looking up data within the tMap component.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
  • At the end of the job execution, we commit the database modifications and close the connection to release the database resources.
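
The batching logic described above is tool-agnostic; the short sketch below restates it in Python purely to illustrate the arithmetic (in the Talend job itself this logic is carried by tFileRowCount, tLoop, tFilterRow, and tDBOutput). The row count and batch size are example values, and rounding up is assumed so the final partial batch is included.

import math

total_row_count = 1001        # example: value reported by tFileRowCount (including header)
header_count = 1
batch_size = 100              # example batch size

data_rows = total_row_count - header_count
number_of_batches = math.ceil(data_rows / batch_size)   # 1000 / 100 -> 10 batches

for iteration in range(1, number_of_batches + 1):        # what tLoop iterates over
    start_row = (iteration - 1) * batch_size + 1          # lower rowNo bound used in tFilterRow
    end_row = min(iteration * batch_size, data_rows)      # upper rowNo bound used in tFilterRow
    print(f"Batch {iteration}: rowNo {start_row} to {end_row}")
    # In the Talend job, each batch is then written to the MySQL target table via tDBOutput.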

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With batch processing, the solution can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 1

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 1 https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/ https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/#respond Fri, 03 Oct 2025 07:22:35 +0000 https://blogs.perficient.com/?p=387572

Introduction:

Custom code in Talend offers a powerful way to improve batch processing efficiency by allowing developers to implement specialized logic that is not available through Talend’s standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files tailored to specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Understand Batch Processing:

Batch processing is a method of handling high-volume, repetitive data within Talend jobs. It allows users to process a set of data when computing resources are available, with little or no user interaction.

Through batch processing, users gather and retain data, then process it during a designated period referred to as a “batch window.” This method improves efficiency by establishing processing priorities and executing data tasks at an optimal time.

Here, the Talend job takes the total row count from the source file, loads the data from the flat file, processes it in batches using input provided through context variables, and then writes the data into smaller flat files. This implementation makes it possible to process enormous amounts of data more quickly and reliably than alternative implementations.

Batch processing is a method of executing a series of jobs sequentially without user interaction, typically used for handling large volumes of data efficiently. Talend, a prominent and widely used ETL (Extract, Transform, Load) tool, leverages batch processing to facilitate the integration, transformation, and loading of data into data warehouses and other target systems.

Talend Components:

Key components for batch processing are listed below:

  • tFileInputDelimited, tFileOutputDelimited: For reading & writing data from/to files.
  • tFileRowCount: Reads a file row by row to calculate the total number of rows.
  • tLoop: Executes a task automatically based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Used for data transformation; it maps input data to output data and supports data filtering, complex data manipulation, type casting, and joins across multiple input sources.
  • tJavaRow: Used as an intermediate component that gives access to the input flow so the data can be transformed using custom Java code.
  • tJava: Has no input or output data flow and can be used independently to integrate custom Java code.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.

Workflow with example:

To process bulk data in Talend, we can implement batch processing so that flat file data is processed with minimal execution time. We could read the flat file data and write it into chunks of another flat file as the target without batch processing, but that data flow takes considerably longer to execute. If we use batch processing with custom code, the entire source file is written into chunks of files at the target location with minimal execution time.

Talend job design

Solution:

  • Read the number of rows in the source flat file using the tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and divide the result by the batch size, then take the nearest whole number; this indicates the total number of batches (chunks). The chunking logic is sketched just after this list.

    Calculate the batch size from total row count

  • Now use the tFileInputDelimited component to read the source file content. In the tMap component, use the Talend sequence function to generate row numbers for your data mapping and transformation tasks. Then load all of the data into the tHashOutput component, which stores it in a cache.
  • Iterate the loop based on the calculated whole number using the tLoop component.
  • Retrieve all the data from the tHashInput component.
  • Filter the dataset retrieved from the tHashInput component based on the rowNo column in the schema using tFilterRow.

    Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range is 1 to 100; for the third iteration it is 201 to 300.
    In general, for iteration i the lower bound is (i - 1) * 100 + 1 and the upper bound is i * 100. For example, for iteration 3: (3 - 1) * 100 + 1 = 201 and 3 * 100 = 300, so the final dataset range for the 3rd iteration is 201 to 300.
  • Finally, extract the rows whose rowNo values fall within that range and write them into a chunk of the output target file using tFileOutputDelimited.
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • For performance tuning, the tMap component maps source data to output data, supports complex data transformations, and offers unique match, first match, and all matches options for looking up data within the tMap component.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
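To make the chunking arithmetic concrete, here is a short JavaScript sketch of the same logic outside Talend. It is only an illustration under the assumption that the last partial batch is written as its own chunk; in the job itself this is handled by tLoop, tFilterRow, and tFileOutputDelimited, and the output file names below are placeholders.

const fs = require("fs");

// Illustrative sketch: split source rows into fixed-size chunk files.
// In the Talend job this is done by tLoop + tFilterRow + tFileOutputDelimited.
function writeChunks(lines, batchSize = 100, headerCount = 1) {
  const dataRows = lines.slice(headerCount);                      // drop the header row(s)
  const totalBatches = Math.ceil(dataRows.length / batchSize);    // whole number of chunks
  for (let iteration = 1; iteration <= totalBatches; iteration++) {
    const chunk = dataRows.slice((iteration - 1) * batchSize, iteration * batchSize);
    fs.writeFileSync("target_chunk_" + iteration + ".csv", chunk.join("\n")); // placeholder name
  }
  return totalBatches;
}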

 

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With batch processing, the solution can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 2

AEM and Cloudflare Workers: The Ultimate Duo for Blazing Fast Pages https://blogs.perficient.com/2025/09/23/aem-and-cloudflare-workers-the-ultimate-duo-for-blazing-fast-pages/ https://blogs.perficient.com/2025/09/23/aem-and-cloudflare-workers-the-ultimate-duo-for-blazing-fast-pages/#respond Tue, 23 Sep 2025 10:30:23 +0000 https://blogs.perficient.com/?p=387173

If you’re using Adobe Experience Manager as a Cloud Service (AEMaaCS), you’ve likely wondered what to do with your existing CDN. AEMaaCS includes a fully managed CDN with caching, WAF, and DDoS protection. But it also supports a Bring Your Own CDN model.

This flexibility allows you to layer your CDN in front of Adobe’s, boosting page speed through edge caching.

The Challenge: Static vs. Dynamic Content

Many AEM pages combine static and dynamic components, and delivering both types of content through multiple layers of CDN can become a complex process.

Imagine a page filled with static components and just a few dynamic ones. For performance, the static content should be cached heavily. But dynamic components often require real-time rendering and can’t be cached. Since caching is typically controlled by page path—both in Dispatcher and the CDN—we end up disabling caching for the entire page. This workaround ensures dynamic components work as expected, but it undermines the purpose of caching and fast delivery.

Sling Dynamic Includes Provides a Partial Solution

AEM provides Sling Dynamic Include (SDI) to cache the static portions of a page while dynamic portions are represented by placeholder tags. When a request comes in, the Dispatcher merges the static and dynamic content and then delivers the full page to the customer.

You can learn more about Sling Dynamic Include on the Adobe Experience League site.

However, SDI relies on the Dispatcher server for processing. This adds load and latency.

Imagine if this process is done on the CDN. This is where Edge Side Includes (ESI) comes into play.

Edge-Side Includes Enters the Chat

ESI does the same thing as SDI, but the include processing happens at the CDN, using ESI tags embedded in the cached pages.

ESI is powerful, but what if you want to do additional custom business logic apart from just fetching the content? That’s where Cloudflare Workers shines.

What is Cloudflare Workers?  

Cloudflare Workers is a serverless platform that executes code on Cloudflare's edge network. Running code at edge locations closer to the user reduces latency and improves performance because requests do not have to reach the origin servers.

Learn more about Cloudflare Workers on the Cloudflare Doc site.

ESI + Cloudflare Workers

In the following example, I’ll share how Cloudflare Workers intercepts ESI tags and fetches both original and translated content.

How to Enable ESI in AEM

  1. Enable SDI in AEM Publish: /system/console/configMgr/org.apache.sling.dynamicinclude.Configuration
  2. Add mod_include to your Dispatcher config.
  3. Set no-cache rules for SDI fragments using specific selectors.

Note: Set include-filter.config.include-type to “ESI” to enable Edge Side Includes.

Visit this article for more detailed steps on enabling SDI and configuring the Dispatcher.

Writing the Cloudflare Worker Script

Next, write a custom script that intercepts the ESI tags in the origin response and makes additional calls to the origin to fetch each fragment's content, whether original or translated.

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // You can update the URL and point it to your local AEM instance for local development
  const url = new URL(request.url);
  const origin = url.origin;

  // Copy the incoming headers and modify them as needed before creating the origin request
  const originalHeaders = request.headers;
  const newHeaders = new Headers(originalHeaders);
  // Append new headers here

  const aemRequest = new Request(url, {
    headers: newHeaders,
    redirect: 'manual',
  });

  // Get the response from the origin
  try {
    const aemresponse = await fetch(aemRequest);

    // If the content type is not "text/html", return the response as-is (or handle it as required)
    const contentType = aemresponse.headers.get("Content-Type") || "";
    if (!contentType.toLowerCase().includes("text/html")) {
      return aemresponse;
    }

    // Read the HTML response body
    const html = await aemresponse.text();

    // If the content has no esi:include tag, return the page unchanged
    if (!html.includes("esi:include")) {
      return new Response(html, {
        status: aemresponse.status,
        statusText: aemresponse.statusText,
        headers: aemresponse.headers
      });
    }

    return fetchESIContent(aemresponse, html, origin);
  } catch (err) {
    return new Response("Failed to fetch AEM page: " + err.message, { status: 500 });
  }
}

async function fetchESIContent(originResponse, html, origin) {
  try {
    // Regular expression to find all esi:include tags in the page
    const esiRegex = /<esi:include[^>]*\ssrc="([^"]+)"[^>]*\/?>/gi;

    // Fetch all fragments and replace the esi:include tags with their content
    const replaced = await replaceAsync(html, esiRegex, async (match, src) => {
      try {
        const absEsiUrl = resolveEsiSrc(src, origin);
        const fragRes = await fetch(absEsiUrl, { headers: { "Cache-Control": "no-store" } });
        console.log('Fragment response', fragRes.statusText);
        return fragRes.ok ? await fragRes.text() : "Fragment response didn't return anything";
      } catch (error) {
        console.error("Error in fetching ESI fragments: ", error.message);
        return "";
      }
    });

    const headers = appendResponseHeader(originResponse);
    // Add this header to confirm that ESI content has been injected successfully
    headers.set("X-ESI-Injected", "true");

    return new Response(replaced, {
      headers,
      statusText: originResponse.statusText,
      status: originResponse.status
    });
  } catch (err) {
    return new Response("Failed to fetch AEM page: " + err.message, { status: 500 });
  }
}

// Replace regex matches asynchronously, preserving the surrounding content
async function replaceAsync(str, regex, asyncFn) {
  const parts = [];
  let lastIndex = 0;
  for (const m of str.matchAll(regex)) {
    parts.push(str.slice(lastIndex, m.index));
    parts.push(await asyncFn(...m));
    lastIndex = m.index + m[0].length;
  }
  parts.push(str.slice(lastIndex));
  return parts.join("");
}
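The script above calls two helper functions, resolveEsiSrc and appendResponseHeader, that are not shown in this post. Minimal sketches of what they might look like are below, assuming the ESI src values may be relative or absolute and that the origin response headers should be copied before adding the marker header.

// Minimal sketches of the helpers referenced above (assumed implementations, not from the original post).

// Resolve a possibly relative esi:include src against the origin host.
function resolveEsiSrc(src, origin) {
  try {
    return new URL(src, origin).toString();
  } catch (e) {
    return origin + (src.startsWith("/") ? src : "/" + src);
  }
}

// Copy the origin response headers so they can be extended before returning the merged page.
function appendResponseHeader(originResponse) {
  const headers = new Headers(originResponse.headers);
  headers.delete("Content-Length"); // the body length changes after ESI injection
  return headers;
}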

Bonus Tip: Local Testing With Miniflare

Want to test Cloudflare Workers locally? Use Miniflare, a simulator for Worker environments.

Check out the official Miniflare documentation.

You Don’t Need to Sacrifice Performance or Functionality

Implementing ESI through Cloudflare Workers is an excellent way to combine aggressive caching with dynamic content rendering—without compromising overall page performance or functionality. 

This approach helps teams deliver faster, smarter experiences at scale. As edge computing continues to evolve, we’re excited to explore even more ways to optimize performance and personalization.

Why Oracle Fusion AI is the Smart Manufacturing Equalizer — and How Perficient Helps You Win https://blogs.perficient.com/2025/09/11/why-oracle-fusion-ai-is-the-smart-manufacturing-equalizer-and-how-perficient-helps-you-win/ https://blogs.perficient.com/2025/09/11/why-oracle-fusion-ai-is-the-smart-manufacturing-equalizer-and-how-perficient-helps-you-win/#respond Thu, 11 Sep 2025 20:24:13 +0000 https://blogs.perficient.com/?p=387047

My 30-year technology career has taught me many things…and one big thing: the companies that treat technology as a cost center are the ones that get blindsided. In manufacturing, that blindside is already here — and it’s wearing the name tag “AI.”

For decades, manufacturers have been locked into rigid systems, long upgrade cycles, and siloed data. The result? Operations that run on yesterday’s insights while competitors are making tomorrow’s moves. Sound familiar? It’s the same trap traditional IT outsourcing fell into — and it’s just as deadly in the age of smart manufacturing.

The AI Advantage in Manufacturing

Oracle Fusion AI for Manufacturing Smart Operations isn’t just another software upgrade. It’s a shift from reactive to predictive, from siloed to synchronized. Think:

  • Real-time anomaly detection that flags quality issues before they hit the line.
  • Predictive maintenance that slashes downtime and extends asset life.
  • Intelligent scheduling that adapts to supply chain disruptions in minutes, not weeks.
  • Embedded analytics that turn every operator, planner, and manager into a decision-maker armed with live data.

This isn’t about replacing people — it’s about giving them superpowers. Read more from Oracle here.

Proof in Action: Roeslein & Associates

If you want to see what this looks like in the wild, look at Roeslein & Associates. They were running on disparate, outdated legacy systems — the kind that make global process consistency a pipe dream. Perficient stepped in and implemented Oracle Fusion Cloud Manufacturing with Project Driven Supply Chain, plus full Financial and Supply Chain Management suites. The result?

  • A global solution template that can be rolled out anywhere in the business.
  • A redesigned enterprise structure to track profits across business units.
  • Standardized manufacturing processes that still flex for highly customized demand.
  • Integrated aftermarket parts ordering and manufacturing flows.
  • Seamless connections between Fusion, labor capture systems, and eCommerce.

That’s not just “going live” — that’s rewiring the operational nervous system for speed, visibility, and scale.

Why Standing Still is Riskier Than Moving Fast

In my words, “true innovation is darn near impossible” when you’re chained to legacy thinking. The same applies here: if your manufacturing ops are running on static ERP data and manual interventions, you’re already losing ground to AI‑driven competitors who can pivot in real time.

Oracle Fusion Cloud with embedded AI is the equalizer. A mid‑sized manufacturer with the right AI tools can outmaneuver industry giants still stuck in quarterly planning cycles.

Where Perficient Comes In

Perficient’s Oracle team doesn’t just implement software — they architect transformation. With deep expertise in Oracle Manufacturing Cloud, Supply Chain Management, and embedded Fusion AI solutions, they help you:

  • Integrate AI into existing workflows without blowing up your operations.
  • Optimize supply chain visibility from raw materials to customer delivery.
  • Leverage IoT and machine learning for continuous process improvement.
  • Scale securely in the cloud while keeping compliance and governance in check.

They’ve done it for global manufacturers, and they can do it for you — faster than you think.

The Call to Action

If you believe your manufacturing operations are immune to disruption, history says otherwise. The companies that win will be the ones that treat AI not as a pilot project, but as the new operating system for their business.

Rather than letting new entrants disrupt your position, take initiative and lead the charge—make them play catch-up.

Why It’s Time to Move from SharePoint On-Premises to SharePoint Online https://blogs.perficient.com/2025/09/09/why-its-time-to-move-from-sharepoint-on-premises-to-sharepoint-online/ https://blogs.perficient.com/2025/09/09/why-its-time-to-move-from-sharepoint-on-premises-to-sharepoint-online/#respond Tue, 09 Sep 2025 14:53:50 +0000 https://blogs.perficient.com/?p=387013

In today’s fast-paced digital workplace, agility, scalability, and collaboration aren’t just nice to have—they’re business-critical. If your organization is still on Microsoft SharePoint On-Premises, now is the time to make the move to SharePoint Online. Here’s why this isn’t just a technology upgrade—it’s a strategic leap forward.

1. Work Anywhere, Without Barriers

SharePoint Online empowers your workforce with secure access to content from virtually anywhere. Whether your team is remote, hybrid, or on the go, they can collaborate in real time without being tethered to a corporate network or VPN.

2. Always Up to Date

Forget about manual patching and version upgrades. SharePoint Online is part of Microsoft 365, which means you automatically receive the latest features, security updates, and performance improvements—without the overhead of managing infrastructure.

3. Reduce Costs and Complexity

Maintaining on-premises servers is expensive and resource-intensive. By moving to SharePoint Online, you eliminate hardware costs, reduce IT overhead, and streamline operations. Plus, Microsoft handles the backend, so your team can focus on innovation instead of maintenance.

4. Enterprise-Grade Security and Compliance

Microsoft invests heavily in security, offering built-in compliance tools, data loss prevention, and advanced threat protection. SharePoint Online is designed to meet global standards and industry regulations, giving you peace of mind that your data is safe.

5. Seamless Integration with Microsoft 365

SharePoint Online integrates effortlessly with Microsoft Teams, OneDrive, Power Automate, and Power BI—enabling smarter workflows, better insights, and more connected experiences across your organization.

6. Scalability for the Future

Whether you’re a small business or a global enterprise, SharePoint Online scales with your needs. You can easily add users, expand storage, and adapt to changing business demands without worrying about infrastructure limitations.

Why Perficient for Your SharePoint Online Migration 

Migrating to SharePoint Online is more than a move to the cloud—it’s a chance to transform how your business works. At Perficient, we help you turn common migration challenges into measurable wins:

  • 35% boost in collaboration efficiency
  • Up to 60% cost savings per user
  • 73% reduction in data breach risk
  • 100+ IT hours saved each month

Our Microsoft 365 Modernization solutions don’t just migrate content—they build a secure, AI-ready foundation. From app modernization and AI-powered search to Microsoft Copilot integration, Perficient positions your organization for the future.
Automating Azure Key Vault Secret and Certificate Expiry Monitoring with Azure Function App https://blogs.perficient.com/2025/08/26/azure-keyvault-monitoring-automation/ https://blogs.perficient.com/2025/08/26/azure-keyvault-monitoring-automation/#respond Tue, 26 Aug 2025 14:15:25 +0000 https://blogs.perficient.com/?p=386349

How to monitor hundreds of Key Vaults across multiple subscriptions for just $15-25/month

The Challenge: Key Vault Sprawl in Enterprise Azure

If you’re managing Azure at enterprise scale, you’ve likely encountered this scenario: Key Vaults scattered across dozens of subscriptions, hundreds of certificates and secrets with different expiry dates, and the constant fear of unexpected outages due to expired certificates. Manual monitoring simply doesn’t scale when you’re dealing with:

  • Multiple Azure subscriptions (often 10-50+ in large organizations)
  • Hundreds of Key Vaults across different teams and environments
  • Thousands of certificates with varying renewal cycles
  • Critical secrets that applications depend on
  • Different time zones and rotation schedules

The traditional approach of spreadsheets, manual checks, or basic Azure Monitor alerts breaks down quickly. You need something that scales automatically, costs practically nothing, and provides real-time visibility across your entire Azure estate.

The Solution: Event-Driven Monitoring Architecture

Key Vault automation architecture diagram

Single Function App, Unlimited Key Vaults

Instead of deploying monitoring resources per Key Vault (expensive and complex), we use a centralized architecture:

Management Group (100+ Key Vaults)
           ↓
   Single Function App
           ↓
     Action Group
           ↓
    Notifications

This approach provides:

  • Unlimited scalability: Monitor 1 or 1000+ Key Vaults with the same infrastructure
  • Cross-subscription coverage: Works across your entire Azure estate
  • Real-time alerts: Sub-5-minute notification delivery
  • Cost optimization: $15-25/month total (not per Key Vault!)

How It Works: The Technical Deep Dive

1. Event Grid System Topics (The Sensors)

Azure Key Vault automatically generates events when certificates and secrets are about to expire. We create Event Grid System Topics for each Key Vault to capture these events:

Event Types Monitored:
• Microsoft.KeyVault.CertificateNearExpiry
• Microsoft.KeyVault.CertificateExpired  
• Microsoft.KeyVault.SecretNearExpiry
• Microsoft.KeyVault.SecretExpired

The beauty? These events are generated automatically by Azure – no polling, no manual checking, just real-time notifications when things are about to expire.
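For reference, a Key Vault expiry event delivered to the Function App has roughly the shape below. The values shown are placeholders, and NBF and EXP are Unix timestamps in seconds.

// Representative shape of a Key Vault near-expiry event (placeholder values).
const sampleEvent = {
  id: "00000000-0000-0000-0000-000000000000",
  topic: "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/prod-keyvault",
  subject: "prod-ssl-cert",
  eventType: "Microsoft.KeyVault.CertificateNearExpiry",
  eventTime: "2024-01-10T00:00:00Z",
  data: {
    Id: "https://prod-keyvault.vault.azure.net/certificates/prod-ssl-cert/<version>",
    VaultName: "prod-keyvault",
    ObjectType: "Certificate",
    ObjectName: "prod-ssl-cert",
    Version: "<version>",
    NBF: 1704844800,   // not-before, Unix seconds
    EXP: 1705276800    // expiry, Unix seconds
  },
  dataVersion: "1",
  metadataVersion: "1"
};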

2. Centralized Processing (The Brain)

A single Azure Function App processes ALL events from across your organization:

// Simplified event processing flow
eventGridEvent → parseEvent() → extractMetadata() → 
formatAlert() → sendToActionGroup()

Example Alert Generated:
{
  severity: "Sev1",
  alertTitle: "Certificate Expired in Key Vault",
  description: "Certificate 'prod-ssl-cert' has expired in Key Vault 'prod-keyvault'",
  keyVaultName: "prod-keyvault",
  objectType: "Certificate",
  expiryDate: "2024-01-15T00:00:00.000Z"
}
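A minimal sketch of what the EventGridTrigger function could look like is shown below, assuming the classic Node.js programming model (a function.json with an eventGridTrigger binding) and Node 18+ for the global fetch. The hand-off to the Action Group is represented by a hypothetical webhook URL read from app settings, since that wiring is specific to each environment.

// index.js - minimal EventGridTrigger handler sketch (assumptions noted above).
module.exports = async function (context, eventGridEvent) {
  const { eventType, data } = eventGridEvent;
  const expired = eventType.endsWith("Expired");

  const alert = {
    severity: expired ? "Sev1" : "Sev2",
    alertTitle: data.ObjectType + (expired ? " Expired" : " Near Expiry") + " in Key Vault",
    description: data.ObjectType + " '" + data.ObjectName + "' in Key Vault '" + data.VaultName + "'",
    keyVaultName: data.VaultName,
    objectType: data.ObjectType,
    expiryDate: data.EXP ? new Date(data.EXP * 1000).toISOString() : null
  };

  context.log("Key Vault expiry alert:", alert);

  // Hand off to the notification channel (hypothetical webhook configured in app settings).
  await fetch(process.env.ALERT_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(alert)
  });
};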

3. Smart Notification Routing (The Messenger)

Azure Action Groups handle notification distribution with support for:

  • Email notifications (unlimited recipients)
  • SMS alerts for critical expiries
  • Webhook integration with ITSM tools (ServiceNow, Jira, etc.)
  • Voice calls for emergency situations.

Implementation: Infrastructure as Code

The entire solution is deployed using Terraform, making it repeatable and version-controlled. Here’s the high-level infrastructure:

Resource Architecture

# Single monitoring resource group
resource "azurerm_resource_group" "monitoring" {
  name     = "rg-kv-monitoring-${var.timestamp}"
  location = var.primary_location
}

# Function App (handles ALL Key Vaults)
resource "azurerm_linux_function_app" "kv_processor" {
  name                = "func-kv-monitoring-${var.timestamp}"
  service_plan_id     = azurerm_service_plan.function_plan.id
  # ... configuration
}

# Event Grid System Topics (one per Key Vault)
resource "azurerm_eventgrid_system_topic" "key_vault" {
  for_each = { for kv in var.key_vaults : kv.name => kv }
  
  name                   = "evgt-${each.key}"
  source_arm_resource_id = "/subscriptions/${each.value.subscriptionId}/resourceGroups/${each.value.resourceGroup}/providers/Microsoft.KeyVault/vaults/${each.key}"
  topic_type            = "Microsoft.KeyVault.vaults"
}

# Event Subscriptions (route events to Function App)
resource "azurerm_eventgrid_event_subscription" "certificate_expiry" {
  for_each = { for kv in var.key_vaults : kv.name => kv }
  
  azure_function_endpoint {
    function_id = "${azurerm_linux_function_app.kv_processor.id}/functions/EventGridTrigger"
  }
  
  included_event_types = [
    "Microsoft.KeyVault.CertificateNearExpiry",
    "Microsoft.KeyVault.CertificateExpired"
  ]
}

CI/CD Pipeline Integration

The solution includes an Azure DevOps pipeline that:

  1. Discovers Key Vaults across your management group automatically
  2. Generates Terraform variables with all discovered Key Vaults
  3. Deploys infrastructure using infrastructure as code
  4. Validates deployment to ensure everything works
# Simplified pipeline flow
stages:
  - stage: DiscoverKeyVaults
    # Scan management group for all Key Vaults
    
  - stage: DeployMonitoring  
    # Deploy Function App and Event Grid subscriptions
    
  - stage: ValidateDeployment
    # Ensure monitoring is working correctly

Cost Analysis: Why This Approach Wins

Traditional Approach (Per-Key Vault Monitoring)

100 Key Vaults × $20/month per KV = $2,000/month
Annual cost: $24,000

This Approach (Centralized Monitoring)

Base infrastructure: $15-25/month
Event Grid events: $2-5/month  
Total: $17-30/month
Annual cost: $204-360

Savings: 98%+ reduction in monitoring costs

Detailed Cost Breakdown

Component               | Monthly Cost | Notes
Function App (Basic B1) | $13.14       | Handles unlimited Key Vaults
Storage Account         | $1-3         | Function runtime storage
Log Analytics           | $2-15        | Centralized logging
Event Grid              | $0.50-2      | $0.60 per million operations
Action Group            | $0           | Email notifications free
Total                   | $17-33       | Scales to unlimited Key Vaults

Implementation Guide: Getting Started

Prerequisites

  1. Azure Management Group with Key Vaults to monitor
  2. Service Principal with appropriate permissions:
    • Reader on Management Group
    • Contributor on monitoring subscription
    • Event Grid Contributor on Key Vault subscriptions
  3. Azure DevOps or similar CI/CD platform

Step 1: Repository Setup

Create this folder structure:

keyvault-monitoring/
├── terraform/
│   ├── main.tf              # Infrastructure definitions
│   ├── variables.tf         # Configuration variables
│   ├── terraform.tfvars     # Your specific settings
│   └── function_code/       # Function App source code
├── azure-pipelines.yml      # CI/CD pipeline
└── docs/                    # Documentation

Step 2: Configuration

Update terraform.tfvars with your settings:

# Required configuration
notification_emails = [
  "your-team@company.com",
  "security@company.com"
]

primary_location = "East US"
log_retention_days = 90

# Optional: SMS for critical alerts
sms_notifications = [
  {
    country_code = "1"
    phone_number = "5551234567"
  }
]

# Optional: Webhook integration
webhook_url = "https://your-itsm-tool.com/api/alerts"

Step 3: Deployment

The pipeline automatically:

  1. Scans your management group for all Key Vaults
  2. Generates infrastructure code with discovered Key Vaults
  3. Deploys monitoring resources using Terraform
  4. Validates functionality with test events

Expected deployment time: 5-10 minutes

Step 4: Validation

Test the setup by creating a short-lived certificate:

# Create test certificate with 1-day expiry
az keyvault certificate create \
  --vault-name "your-test-keyvault" \
  --name "test-monitoring-cert" \
  --policy '{
    "issuerParameters": {"name": "Self"},
    "x509CertificateProperties": {
      "validityInMonths": 1,
      "subject": "CN=test-monitoring"
    }
  }'

# You should receive an alert within 5 minutes

Operational Excellence

Monitoring the Monitor

The solution includes comprehensive observability:

// Function App performance dashboard
FunctionAppLogs
| where TimeGenerated > ago(24h)
| summarize 
    ExecutionCount = count(),
    SuccessRate = (countif(Level != "Error") * 100.0) / count(),
    AvgDurationMs = avg(DurationMs)
| extend PerformanceScore = case(
    SuccessRate >= 99.5, "Excellent",
    SuccessRate >= 99.0, "Good", 
    "Needs Attention"
)

Advanced Features and Customizations

1. Integration with ITSM Tools

The webhook capability enables integration with enterprise tools:

// ServiceNow integration example
const serviceNowPayload = {
  short_description: `${objectType} '${objectName}' expiring in Key Vault '${keyVaultName}'`,
  urgency: severity === 'Sev1' ? '1' : '3',
  category: 'Security',
  subcategory: 'Certificate Management',
  caller_id: 'keyvault-monitoring-system'
};

2. Custom Alert Routing

Different Key Vaults can route to different teams:

// Route alerts based on Key Vault naming convention
const getNotificationGroup = (keyVaultName) => {
  if (keyVaultName.includes('prod-')) return 'production-team';
  if (keyVaultName.includes('dev-')) return 'development-team';
  return 'platform-team';
};

3. Business Hours Filtering

Critical alerts can bypass business hours, while informational alerts respect working hours:

const shouldSendImmediately = (severity, currentTime) => {
  if (severity === 'Sev1') return true; // Always send critical alerts
  
  const businessHours = isBusinessHours(currentTime);
  return businessHours || isNearBusinessHours(currentTime, 2); // 2 hours before business hours
};

Troubleshooting Common Issues

Issue: No Alerts Received

Symptoms:

Events are visible in Azure, but no notifications are arriving

Resolution Steps:

  1. Check the Action Group configuration in the Azure Portal
  2. Verify the Function App is running and healthy
  3. Review Function App logs for processing errors
  4. Validate Event Grid subscription is active

Issue: High Alert Volume

Symptoms:

Too many notifications, alert fatigue

Resolution:

// Implement intelligent batching
const batchAlerts = (alerts, timeWindow = '15m') => {
  return alerts.reduce((batches, alert) => {
    const key = `${alert.keyVaultName}-${alert.objectType}`;
    batches[key] = batches[key] || [];
    batches[key].push(alert);
    return batches;
  }, {});
};

Issue: Missing Key Vaults

Symptoms: Some Key Vaults are not included in monitoring

Resolution:

  1. Re-run the discovery pipeline to pick up new Key Vaults
  2. Verify service principal has Reader access to all subscriptions
  3. Check for Key Vaults in subscriptions outside the management group