Build a Custom Accordion Component in SPFx Using React – SharePoint
https://blogs.perficient.com/2026/01/22/build-a-custom-accordion-component-in-spfx-using-react-sharepoint/

When building modern SharePoint Framework (SPFx) solutions, reusable UI components play a crucial role in keeping your code clean, scalable, and maintainable. In particular, interactive components help improve the user experience without cluttering the interface.

Among these components, the Accordion is a commonly used UI element. It allows users to expand and collapse sections, making it easier to display large amounts of information in a compact and organized layout. In this blog, we’ll walk through how to create a custom accordion component in SPFx using React.


Create the Accordion Wrapper Component

To begin with, we’ll create a wrapper component that acts as a container for multiple accordion items. At a high level, this component’s responsibility is intentionally simple: it renders child accordion items while keeping styling and layout consistent across the entire accordion. This approach lets individual accordion items focus on their own behavior while the wrapper handles structure and reusability.

Accordion.tsx

import * as React from 'react';
import styles from './Accordion.module.scss';
import classNames from 'classnames';
import { IAccordionItemProps } from './subcomponents/AccordionItem';

import { ReactElement } from 'react';

export interface IAccordionProps {
  children?:
    | ReactElement<IAccordionItemProps>
    | ReactElement<IAccordionItemProps>[];
  className?: string;
}


const Accordion: React.FunctionComponent<
  React.PropsWithChildren<IAccordionProps>
> = (props) => {
  const { children, className } = props;
  return (
    <div className={classNames(styles.accordionSubcomponent, className)}>
      {children}
    </div>
  );
};

export default Accordion;

Styling with SCSS Modules

Next, let’s focus on styling. SPFx supports SCSS modules, which are ideal for avoiding global CSS conflicts and keeping styles scoped to individual components. The following stylesheet covers both the accordion wrapper and its items.

Accordion.module.scss

.accordionSubcomponent {
    margin-bottom: 12px;
    .accordionTitleRow {
        display: flex;
        flex-direction: row;
        align-items: center;
        padding: 5px;
        font-size: 18px;
        font-weight: 600;
        cursor: pointer;
        -webkit-touch-callout: none;
        -webkit-user-select: none;
        -khtml-user-select: none;
        -moz-user-select: none;
        -ms-user-select: none;
        user-select: none;
        border-bottom: 1px solid;
        border-color: "[theme: neutralQuaternaryAlt]";
        background: "[theme: neutralLighter]";
    }
    .accordionTitleRow:hover {
        opacity: .8;
    }
    .accordionIconCol {
        padding: 0px 5px;
    }
    .accordionHeaderCol {
        display: inline-block;
        width: 100%;
    }
    .iconExpandCollapse {
        margin-top: -4px;
        font-weight: 600;
        vertical-align: middle;
    }
    .accordionContent {
        margin-left: 12px;
        display: grid;
        grid-template-rows: 0fr;
        overflow: hidden;
        transition: grid-template-rows 200ms;
        &.expanded {
          grid-template-rows: 1fr;
        }
        .expandableContent {
          min-height: 0;
        }
    }
}

Styling Highlights

  • Grid‑based animation for expand/collapse
  • SharePoint theme tokens
  • Hover effects for better UX

Creating Accordion Item Component

Each expandable section is managed by AccordionItem.tsx.

import * as React from 'react';
import styles from '../Accordion.module.scss';
import classNames from 'classnames';
import { Icon, Stack } from '@fluentui/react'; // Stack is used in the render below, so it must be imported
import { useState } from 'react';


export interface IAccordionItemProps {
  iconCollapsed?: string;
  iconExpanded?: string;
  headerText?: string;
  headerClassName?: string;
  bodyClassName?: string;
  isExpandedByDefault?: boolean;
}
const AccordionItem: React.FunctionComponent<React.PropsWithChildren<IAccordionItemProps>> = (props: React.PropsWithChildren<IAccordionItemProps>) => {
  const {
    iconCollapsed,
    iconExpanded,
    headerText,
    headerClassName,
    bodyClassName,
    isExpandedByDefault,
    children
  } = props;
  const [isExpanded, setIsExpanded] = useState<boolean>(!!isExpandedByDefault);
  const _toggleAccordion = (): void => {
    setIsExpanded((prevIsExpanded) => !prevIsExpanded);
  }
  return (
    <Stack>
    <div className={styles.accordionTitleRow} onClick={_toggleAccordion}>
        <div className={styles.accordionIconCol}>
            <Icon
                iconName={isExpanded ? iconExpanded : iconCollapsed}
                className={styles.iconExpandCollapse}
            />
        </div>
        <div className={classNames(styles.accordionHeaderCol, headerClassName)}>
            {headerText}
        </div>
    </div>
    <div className={classNames(styles.accordionContent, bodyClassName, {[styles.expanded]: isExpanded})}>
      <div className={styles.expandableContent}>
        {children}
      </div>
    </div>
    </Stack>
  )
}
AccordionItem.defaultProps = {
  iconExpanded: 'ChevronDown',
  iconCollapsed: 'ChevronUp'
};
export default AccordionItem;

Example Usage in SPFx Web Part

<Accordion>
  <AccordionItem headerText="What is SPFx?">
    <p>SPFx is a development model for SharePoint customizations.</p>

  </AccordionItem>

  <AccordionItem
    headerText="Why use custom controls?"
    isExpandedByDefault={true}
  >
    <p>Custom controls improve reusability and UI consistency.</p>
  </AccordionItem>
</Accordion>
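
To render the accordion inside a web part, mount it from the web part’s render method. The following is a minimal sketch assuming a React-based SPFx web part scaffolded with the Yeoman generator; the class name and import paths are illustrative and should be adjusted to your project.

import * as React from 'react';
import * as ReactDom from 'react-dom';
import { BaseClientSideWebPart } from '@microsoft/sp-webpart-base';
// Paths below mirror the component files created earlier (adjust to your project)
import Accordion from './components/Accordion';
import AccordionItem from './components/subcomponents/AccordionItem';

export default class AccordionDemoWebPart extends BaseClientSideWebPart<{}> {
  public render(): void {
    // Compose the accordion tree and mount it into the web part's DOM element
    const element: React.ReactElement = React.createElement(
      Accordion,
      {},
      React.createElement(
        AccordionItem,
        { headerText: 'What is SPFx?' },
        React.createElement('p', {}, 'SPFx is a development model for SharePoint customizations.')
      )
    );
    ReactDom.render(element, this.domElement);
  }

  protected onDispose(): void {
    // Unmount the React tree when the web part is disposed
    ReactDom.unmountComponentAtNode(this.domElement);
  }
}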

Figure: the rendered accordion component.

Conclusion

By building a custom accordion component in SPFx using React, you gain:

  • Full control over UI behavior
  • Lightweight and reusable code
  • Native SharePoint theming

This pattern is perfect for:

  • FAQ sections
  • Configuration panels
  • Dashboard summaries
Upgrading from Gulp to Heft in SPFx | SharePoint
https://blogs.perficient.com/2026/01/14/upgrading-from-gulp-to-heft-in-spfx-sharepoint/

With the release of SPFx v1.22, Microsoft introduced Heft as the new build engine, replacing Gulp. This change brings better performance, modern tooling, and a more standardized approach to building SPFx solutions. In this blog, we’ll explore what this means for developers and how to upgrade.

What is Gulp in SPFx?

In SharePoint Framework (SPFx), Gulp is a JavaScript-based task runner that was traditionally used to automate build and development tasks.

What Gulp Did in SPFx

Historically, the SharePoint Framework (SPFx) relied on Gulp as its primary task runner, responsible for orchestrating the entire build pipeline. Gulp executed a series of scripted tasks, defined inside gulpfile.js and in the various SPFx build rig packages, automating the key development and packaging workflows. These tasks included:

  • Automating repetitive tasks such as:
    • Compiling TypeScript to JavaScript.
    • Bundling multiple files into optimized packages.
    • Minifying code for better performance.
    • Packaging the solution into a “.sppkg” file for deployment.
  • Running a development server for testing (gulp serve).
  • Watching for changes and rebuilding automatically during development.

Because these tasks depended on ad‑hoc JavaScript streams and SPFx‑specific build rig wrappers, the pipeline could become complex and difficult to extend consistently across projects.

The most common Gulp commands in an SPFx project were:

  • gulp serve – local workbench/dev server
  • gulp build – build the solution
  • gulp bundle – produce deployable bundles
  • gulp package-solution – create the .sppkg for the App Catalog
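
For reference, a typical production packaging run with the legacy toolchain chained these commands together (the --ship flag switches on production settings):

gulp clean
gulp bundle --ship
gulp package-solution --ship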

What is Heft?

In SharePoint Framework (SPFx), Heft is the new build engine introduced by Microsoft, starting with SPFx v1.22. It replaces the older Gulp-based build system.

Heft has replaced Gulp to support modern architecture, improve performance, ensure consistency and standardization, and provide greater extensibility.

Comparison between Heft and Gulp:

  • Core model: Gulp (legacy) is a task runner with custom JS/streams (gulpfile.js); Heft (SPFx v1.22+) is a config‑driven orchestrator with plugins/rigs.
  • Extensibility: with Gulp you write custom tasks per project; Heft uses plugins or small “patch” files with standardized rigs.
  • Performance: Gulp runs tasks sequentially with no native caching; Heft offers incremental builds, caching, and a unified TypeScript pass.
  • Config surface: Gulp configuration is often scattered across gulpfile.js and build rig packages; Heft centralizes JSON/JS configs (heft.json, Webpack patch/customize hooks).
  • Scale: Gulp is harder to keep consistent across many repos; Heft is designed to scale consistently (Rush Stack).

Installation Steps for Heft

  • To work with the upgraded version, you need to install Node v22.
  • Run the command npm install @rushstack/heft --global

Removing Gulp from an SPFx Project and Adding Heft (Clean Steps)

  • To work with the upgraded version, install Node v22.
  • Remove your current node_modules folder and package-lock.json, then run npm install again.
  • NOTE: deleting node_modules through File Explorer can take a very long time because files go through the Recycle Bin; the PowerShell commands below bypass it:
    • Open PowerShell
    • Navigate to your project folder
    • Run the command Remove-Item -Recurse -Force node_modules
    • Run the command Remove-Item -Force package-lock.json
  • Open the solution in VS Code
  • In the terminal, run the command npm cache clean --force
  • Then run npm install
  • Run the command npm install @rushstack/heft --global
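
For convenience, the same cleanup can be run as a single PowerShell sequence from the project root (a recap of the steps above, nothing more):

Remove-Item -Recurse -Force node_modules
Remove-Item -Force package-lock.json
npm cache clean --force
npm install
npm install @rushstack/heft --global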

After that, everything should work, and you will be using the latest version of SPFx with Heft. Going forward, however, there are some commands to be aware of.

Day‑to‑day Commands on Heft

  • heft clean → cleans build artifacts (eq. gulp clean)
  • heft build → compiles & bundles (eq. gulp build/bundle). Note: production settings are driven by config rather than --ship flags.
  • heft start → dev server (eq. gulp serve)
  • heft package-solution → creates .sppkg (dev build)
  • heft package-solution --production → creates .sppkg for production (eq. gulp package-solution --ship)
  • heft trust-dev-cert → trusts the local dev certificate used by the dev server (handy if debugging fails due to HTTPS certificate issues)
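
Putting it together, a typical local release flow with Heft might look like this (a sketch based on the commands above; exact behavior depends on your project configuration):

heft clean
heft build
heft package-solution --production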

Conclusion

Upgrading from Gulp to Heft in SPFx projects marks a significant step toward modernizing the build pipeline. Heft uses a standard, configuration-based approach that improves performance, enforces consistency across projects, and can be extended for future needs. By adopting Heft, developers align with Microsoft’s latest architecture, reduce maintenance overhead, and gain a more scalable and reliable development experience.

Building Custom Search Vertical in SharePoint Online for List Items with Adaptive Cards
https://blogs.perficient.com/2026/01/14/build-custom-search-vertical-in-sharepoint-for-list-items-with-adaptive-cards/

This blog explains the process of building a custom search vertical in SharePoint Online that targets a specific list using a dedicated content type. It covers indexing the important columns and mapping them to managed properties for search. A result type is then configured with Adaptive Cards JSON to display metadata like title, category, author, and published date in a clear, modern format. Finally, a new vertical is added on the hub site, giving users a focused tab for Article results. The outcome is a streamlined search experience that highlights curated content with consistent metadata and an engaging presentation.

For example, we will start with the assumption that a custom content type is already in place. This content type includes the following columns:

  • Article Category – internal name article_category
  • Article Topic – internal name article_topic

We’ll also assume that a SharePoint list has been created which uses this content type, with the ContentTypeID: 0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B

With the content type and list ready, the next steps focus on configuring search so these items can be surfaced effectively in a dedicated vertical.

Index Columns in the List

Indexing columns optimizes frequently queried metadata, such as category or topic, for faster search. This improves performance and makes it easier to filter and refine results in a custom vertical.

  • Go to List Settings → Indexed Columns.
  • Ensure article_category and article_topic are indexed for faster search queries.

Create Managed Properties

First, check which RefinableString managed properties are available in your environment. After you identify them, configure them as shown below:

  • RefinableString101: field name article_topic, alias ArticleTopic, crawled property ows_article_topic
  • RefinableString102: field name article_category, alias ArticleCategory, crawled property ows_article_category
  • RefinableString103: field name article_link, alias ArticleLink, crawled property ows_article_link

Tip: Creating an alias name for a managed property makes it easier to read and reference. This step is optional — you can also use the default RefinableString name directly.

To configure these fields, follow the steps below:

  • Go to the Microsoft Search Admin Center → Search schema.
  • Go to Search Schema → Crawled Properties
  • Look for the field (e.g., article_topic or article_category) and find its crawled property (it starts with ows_)
  • Click on property → Add mapping
  • A popup will open → select an unused RefinableString property (e.g., RefinableString101, RefinableString102) → click the “Ok” button
  • Click “Save”
  • Likewise, create managed properties for all the required columns.

Once mapped, these managed properties become queryable, retrievable, and refinable, which means they can be used in KQL queries, search filters, result types, and verticals.
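
For example, once the aliases above are in place, a KQL query can combine the content type filter with a category restriction (the category value here is hypothetical):

ContentTypeId:0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B* AND ArticleCategory:"Governance"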

Creating a Custom Search Vertical

This lets you add a dedicated tab that filters results to specific content, improving findability and user experience. It ensures users quickly access targeted items like lists, libraries, or content types without sifting through all search results. In this example, we will set the filter for a specific articles list.

Follow the steps below to create and configure a custom search vertical from the admin center:

  • In the “Verticals” tab, add a new vertical with the following configuration:
    • Name = “Articles”
    • Content source = SharePoint and OneDrive
    • KQL query = the actual filter that restricts search results to items from the specific list. In our example, we will set it as: ContentTypeId:0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B*
    • Filters: an optional setting that allows users to narrow search results based on specific criteria. In our example, we can add a filter by category. To add a “Category” filter on the search page, follow these steps:
      • Click on “Add filter”
      • Select “RefinableString102” (the refinable managed property mapped to the “article_category” column in the steps above)
      • Name = “Category” or another desired label to display on the search page

Figure: vertical filter configuration.

Creating a Result Type

Creating a new result type in the Microsoft Search Admin Center lets you define how specific content (like items from a list or a content type) is displayed in search results. In this example, we set matching rules and use an Adaptive Card template to make results easier to scan and more engaging.

Following are the steps to create a new result type in the admin center.

  • Go to admin center, https://admin.cloud.microsoft
  • Settings → Search & intelligence
  • In “Customizations”, go to “Result types”
  • Add new result types with the following configurations:
    • Name = “ArticlesResults” (Note: specify any name you want to display in the search vertical)
    • Content source = SharePoint and OneDrive
    • Rules
      • Type of content = SharePoint list item
      • ContentTypeId starts with 0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B (Note: the content type ID created in the steps above)
      • Layout = the JSON string for the Adaptive Card used to display each search result. Following is the JSON for displaying the result:
        {
          "type": "AdaptiveCard",
          "version": "1.3",
          "body": [
            {
              "type": "ColumnSet",
              "columns": [
                {
                  "type": "Column",
                  "width": "auto",
                  "items": [
                    {
                    "type": "Image",
                    "url": <url of image/thumbnail to be displayed for each displayed item>,
                    "altText": "Thumbnail image",
                    "horizontalAlignment": "Center",
                    "size": "Small"
                    }
                  ],
                  "horizontalAlignment": "Center"
                },
                {
                  "type": "Column",
                  "width": 10,
                  "items": [
                    {
                      "type": "TextBlock",
                      "text": "[${ArticleTopic}](${first(split(ArticleLink, ','))})",
                      "weight": "Bolder",
                      "color": "Accent",
                      "size": "Medium",
                      "maxLines": 3
                    },
                    {
                      "type": "TextBlock",
                      "text": "**Category:** ${ArticleCategory}",
                      "spacing": "Small",
                      "maxLines": 3
                    }
                  ],
                  "spacing": "Medium"
                }
              ]
            }
          ],
          "$schema": "http://adaptivecards.io/schemas/adaptive-card.json"
        }

        Figure: result type Adaptive Card configuration.

When you set up everything properly, the final output will look like this:

Figure: final search results.

Conclusion

We have now built a dedicated search vertical in SharePoint Online for list items, rendered with Adaptive Cards, and it changes how users experience search. Important metadata becomes clearly visible when you index key columns, map them to managed properties, and design a tailored result type. The Adaptive Card adds a modern presentation layer that is easier to scan and more visually appealing. Publishing the vertical gives users a focused tab for the curated list content, which improves both findability and the overall user experience.

From Legacy to Modern: Migrating WCF to Web API with the Help of AI
https://blogs.perficient.com/2026/01/13/from-legacy-to-modern-migrating-wcf-to-web-api-with-the-help-of-ai/

Introduction

The modernization of legacy applications has always been a costly process: understanding old code, uncovering hidden dependencies, translating communication models (for example, from SOAP to REST), and ensuring that nothing breaks in production. This is where artificial intelligence changes the game.

AI does not replace the architect or the developer, but it speeds up the heaviest steps in a migration: it helps read and summarize large codebases, proposes equivalent designs in the new technology, generates drafts of controllers, DTOs, and tests, and even suggests architectural improvements that take advantage of the change. Instead of spending hours on mechanical tasks, the team can focus on what really matters: the business rules and the quality of the new solution.

In this post, we’ll look at that impact applied to a concrete case: migrating a WCF service written in C# to an ASP.NET Core Web API, using a real public repository as a starting point and relying on AI throughout the entire process.

Sample project: a real WCF service to be migrated

For this article, we’ll use the public project jecamayo/t-facturo.net as a real-world example: a .NET application that exposes SOAP services based on WCF to manage advisors and branches, using NHibernate for data access. This kind of solution perfectly represents the scenario of many legacy applications currently running in production, and it will serve as our basis to show how artificial intelligence can speed up and improve their migration to a modern architecture with ASP.NET Core Web API.

Key Steps to Migrate from Legacy WCF to a Modern Web API

Migrating a legacy application is not just about “moving code” from one technology to another: it involves understanding the business context, the existing architecture, and designing a modern solution that will be sustainable over time. To structure that process—and to clearly show where artificial intelligence brings the most value—it’s useful to break the migration down into a few key steps like the ones we’ll look at next.

  1. Define the goals and scope of the migration
    Clarify what you want to achieve with the modernization (for example, moving to .NET 8, exposing REST, improving performance or security) and which parts of the system are in or out of the project, in order to avoid surprises and rework.
  2. Analyze the current architecture and design the target architecture
    Understand how the solution is built today (layers, projects, WCF, NHibernate, database) and, with that snapshot, define the target architecture in ASP.NET Core Web API (layers, patterns, technologies) that will replace the legacy system.
  3. Identify dependencies, models, DTOs, and business rules
    Locate external libraries, frameworks, and critical components; inventory domain entities and DTOs; and extract the business rules present in the code to ensure they are properly preserved in the new implementation.
  4. Design the testing strategy and migration plan
    Decide how you will verify that the new API behaves the same (unit tests, integration tests, comparison of WCF vs Web API responses) and define whether the migration will be gradual or a “big bang”, including phases and milestones.
  5. Implement the new Web API, validate it, and retire the legacy WCF
    Build the Web API following the target architecture, migrate the logic and data access, run the test plan to validate behavior, deploy the new solution and, once its stability has been confirmed, deactivate the inherited WCF service.

How to Use AI Prompts During a Migration

Artificial intelligence becomes truly useful in a migration when we know what to ask of it and how to ask it. It’s not just about “asking for code,” but about leveraging it in different phases: understanding the legacy system, designing the target architecture, generating repetitive parts, proposing tests, and helping document the change. To do this, we can classify prompts into a few simple categories (analysis, design, code generation, testing, and documentation) and use them as a practical guide throughout the entire migration process.

Analysis and Understanding Prompts

These focus on having the AI read the legacy code and help you understand it faster: what a WCF service does, what responsibilities a class has, how projects are related, or which entities and DTOs exist. They are ideal for obtaining “mental maps” of the system without having to review every file by hand.

Usage examples:

  • Summarize what a project or a WCF service does.
  • Explain what responsibilities a class or layer has.
  • Identify domain models, DTOs, or design patterns.

Design and Architecture Prompts

These are used to ask the AI for target architecture proposals in the new technology: how to translate WCF contracts into REST endpoints, what layering structure to follow in ASP.NET Core, or which patterns to apply to better separate domain, application, and infrastructure. They do not replace the architect’s judgment, but they offer good starting points and alternatives.

Usage examples:

  • Propose how to translate a WCF contract into REST endpoints.
  • Suggest a project structure following Clean Architecture.
  • Compare technological alternatives (keeping NHibernate vs migrating to EF Core).

Code Generation and Refactoring Prompts

These are aimed at producing or transforming specific code: generating Web API controllers from WCF interfaces, creating DTOs and mappings, or refactoring large classes into smaller, more testable services. They speed up the creation of boilerplate and make it easier to apply good design practices.

Usage examples:

  • Create a Web API controller from a WCF interface.
  • Generate DTOs and mappings between entities and response models.
  • Refactor a class with too many responsibilities into cleaner services/repositories.

Testing and Validation Prompts

Their goal is to help ensure that the migration does not break existing behavior. They can be used to generate unit and integration tests, define representative test cases, or suggest ways to compare responses between the original WCF service and the new Web API.

Usage examples:

  • Generate unit or integration tests for specific endpoints.
  • Propose test scenarios for a business rule.
  • Suggest strategies to compare responses between WCF and Web API.

Documentation and Communication Prompts

They help explain the before and after of the migration: documenting REST endpoints, generating technical summaries for the team, creating tables that show the equivalence between WCF operations and Web API endpoints, or writing design notes for future evolutions. They simplify communication with developers and non-technical stakeholders.

Usage examples:

  • Write documentation for the new API based on the controllers.
  • Generate technical summaries for the team or stakeholders.
  • Create equivalence tables between WCF operations and REST endpoints.

To avoid making this article too long and to be able to go deeper into each stage of the migration, we’ll leave the definition of specific prompts —with real examples applied to the t-facturo.net project— for an upcoming post. In that next article, we’ll go through, step by step, what to ask the AI in each phase (analysis, design, code generation, testing, and documentation) and how those prompts directly impact the quality, speed, and risk of a WCF-to-Web-API migration.

Conclusions

The experience of migrating a legacy application with the help of AI shows that its main value is not just in “writing code,” but in reducing the intellectual friction of the process: understanding old systems, visualizing possible architectures, and automating repetitive tasks. Instead of spending hours reading WCF contracts, service classes, and DAOs, AI can summarize, classify, and propose migration paths, allowing the architect and the team to focus their time on key design decisions and business rules.

At the same time, AI speeds up the creation of the new solution: it generates skeletons for Web API controllers, DTOs, mappings, and tests, acting as an assistant that produces drafts for the team to iterate on and improve. However, human judgment remains essential to validate each proposal, adapt the architecture to the organization’s real context, and ensure that the new application not only “works,” but is maintainable, secure, and aligned with business goals.

GitLab to GitHub Migration
https://blogs.perficient.com/2025/12/29/gitlab-to-github-migration/

1. Why Modern Teams Choose GitHub

Migrating from GitLab to GitHub represents a strategic shift for many engineering teams. Organizations often move to leverage GitHub’s massive open-source community and superior third-party tool integrations. Moreover, GitHub Actions provides a powerful, modern ecosystem for automating complex developer workflows. Ultimately, this transition simplifies standardization across multiple teams while improving overall project visibility.

2. Prepare Your Migration Strategy

A successful transition requires more than just moving code. You must account for users, CI/CD pipelines, secrets, and governance to avoid data loss. Consequently, a comprehensive plan should cover the following phases:

  • Repository and Metadata Transfer

  • User Access Mapping

  • CI/CD Pipeline Conversion

  • Security and Secret Management

  • Validation and Final Cutover

3. Execute the Repository Transfer

The first step involves migrating your source code, including branches, tags, and full commit history.

  • Choose the Right Migration Tool

For straightforward transfers, the GitHub Importer works well. However, if you manage a large organization, the GitHub Enterprise Importer offers better scale. For maximum control, technical teams often prefer the Git CLI.

Command Line Instructions:

git clone --mirror gitlab_repo_url
cd repo.git
git push --mirror github_repo_url

Manage Large Files and History:

During this phase, audit your repository for large binary files. Specifically, you should use Git LFS (Large File Storage) for any assets that exceed GitHub’s standard limits.

4. Map Users and Recreate Secrets

GitLab and GitHub use distinct identity systems, so you cannot automatically migrate user accounts. Instead, you must map GitLab user emails to GitHub accounts and manually invite them to your new organization.

Secure Your Variables and Secrets:

For security reasons, GitLab prevents the export of secrets. Therefore, you must recreate them in GitHub using the following hierarchy:

  • Repository Secrets: Use these for project-level variables.

  • Organization Secrets: Use these for shared variables across multiple repos.

  • Environment Secrets: Use these to protect variables in specific deployment stages.

5. Migrating Variables and Secrets

Securing your environment requires a clear strategy for moving CI/CD variables and secrets. Specifically, GitLab project variables should move to GitHub Repository Secrets, while group variables should be placed in Organization Secrets. Notably, secrets must be recreated manually or via the GitHub API because they cannot be exported from GitLab for security reasons.
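
As a sketch of that recreation step, the GitHub CLI can set secrets at each scope. This assumes gh is installed and authenticated; all secret names and values below are hypothetical.

# Repository secret (project-level variable)
gh secret set DB_CONNECTION_STRING --repo my-org/my-repo --body "value"

# Organization secret (shared across repos)
gh secret set SHARED_API_KEY --org my-org --body "value"

# Environment secret (protects a deployment stage)
gh secret set DEPLOY_TOKEN --env production --body "value"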

6. Convert GitLab CI to GitHub Actions

Translating your CI/CD pipelines often represents the most challenging part of the migration. While GitLab uses a single .gitlab-ci.yml file, GitHub Actions uses separate workflow files in the .github/workflows/ directory.

Syntax and Workflow Changes:

When converting, map your GitLab “stages” into GitHub “jobs”. Moreover, replace custom GitLab scripts with pre-built actions from the GitHub Marketplace to save time. Finally, ensure your new GitHub runners have the same permissions as your old GitLab runners.
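
As an illustration, a minimal GitHub Actions workflow equivalent to a simple GitLab build stage might look like the sketch below. The file name, job name, and build commands are assumptions for a typical Node.js project.

# .github/workflows/build.yml
name: Build
on: [push]

jobs:
  build:                            # maps to a GitLab "build" stage
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # replaces GitLab's implicit clone
      - run: npm ci                 # assumed install/build commands
      - run: npm run build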

7. Finalize the Metadata and Cutover

Metadata like Issues, Pull Requests (Merge Requests in GitLab), and Wikis require special handling because Git itself does not track them.

The Pre-Cutover Checklist:

Before the official switch, verify the following:

  1. Freeze all GitLab repositories to stop new pushes.

  2. Perform a final sync of code and metadata (see the sketch after this list).

  3. Update webhooks for tools like Slack, Jira, or Jenkins.

  4. Verify that all CI/CD pipelines run successfully.
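
As a sketch of the final sync in step 2, the mirror clone created earlier can be refreshed and pushed again (same placeholder URLs as before):

cd repo.git
git remote update                     # pull the latest refs from GitLab
git push --mirror github_repo_url     # push all branches and tags to GitHub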

8. Post-Migration Best Practices

After completing the cutover, archive your old GitLab repositories to prevent accidental updates. Furthermore, enable GitHub’s built-in security features like Dependabot and Secret Scanning to protect your new environment. Finally, provide training sessions to help your team master the new GitHub-centric workflow.


9. Final Cutover and Post-Migration Best Practices

Ultimately, once all repositories are validated and secrets are verified, you can execute the final cutover. Specifically, you should freeze your GitLab repositories and perform a final sync before switching your DNS and webhooks. Finally, once the move is complete, remember to archive your old GitLab repositories and enable advanced security features like Dependabot and secret scanning.

10. Summary and Final Thoughts

In conclusion, a GitLab to GitHub migration is a significant but rewarding effort. By following a structured plan that includes proper validation and team training, organizations can achieve a smooth transition. Therefore, with the right tooling and preparation, you can successfully improve developer productivity and cross-team collaboration.

Unifying Hybrid and Multi-Cloud Environments with Azure Arc
https://blogs.perficient.com/2025/12/22/unifying-hybrid-and-multi-cloud-environments-with-azure-arc/

1. Introduction to Modern Cloud Architecture

In today’s world, architects generally prefer to keep their compute resources—such as virtual machines and Kubernetes clusters—spread across multiple clouds and on-premises environments. Specifically, they do this to achieve the best possible resilience through high availability and disaster recovery. Moreover, this approach allows for better cost efficiency and higher security.

2. The Challenge of Management Complexity

However, this distributed strategy brings additional challenges. Specifically, it increases the complexity of maintaining and managing resources from different consoles, such as Azure, AWS, and Google portals. Consequently, even for basic operations like restarts or updates, administrators often struggle with multiple disparate portals. As a result, basic administration tasks become too complex and cumbersome.

3. How Azure Arc Provides a Solution

Azure Arc solves this problem by providing a single pane of glass to manage and monitor servers regardless of their location. In addition, it simplifies governance by delivering a consistent management platform for both multi-cloud and on-premises resources. Specifically, it provides a centralized way to project existing non-Azure resources directly into the Azure Resource Manager (ARM).

4. Understanding Key Capabilities

Currently, Azure Arc allows you to manage several resource types outside of Azure. For instance, it supports servers, Kubernetes clusters, and databases. Furthermore, it offers several specific functionalities:

  • Azure Arc-enabled Servers: Connects physical or virtual Windows and Linux servers to Azure for centralized visibility.

  • Azure Arc-enabled Kubernetes: Additionally, you can onboard any CNCF-conformant Kubernetes cluster to enable GitOps-based management.

  • Azure Arc-enabled SQL Server: This brings external SQL Server instances under Azure governance for advanced security.

5. Architectural Implementation Details

The Azure Arc architecture revolves primarily around the Azure Resource Manager. When a resource is onboarded, it receives a unique resource ID and becomes part of Azure’s management plane. Each onboarded resource runs a local agent that communicates with Azure to receive policies and upload logs.

6. The Role of the Connected Machine Agent

The agent package contains several logical components bundled together. For instance, the Hybrid Instance Metadata service (HIMDS) manages the connection and the machine’s Azure identity. Moreover, the guest configuration agent assesses whether the machine complies with required policies. In addition, the Extension agent manages VM extensions, including their installation and upgrades.

7. Onboarding and Deployment Methods

Onboarding machines can be accomplished using different methods depending on your scale. For example, you might use interactive scripts for small deployments or service principals for large-scale automation. Specifically, the following options are available:

  • Interactive Deployment: Manually install the agent on a few machines.

  • At-Scale Deployment: Alternatively, connect machines using a service principal (see the sketch after this list).

  • Automated Tooling: Furthermore, you can utilize Group Policy for Windows machines.
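
For an at-scale onboarding with a service principal, the process typically comes down to installing the Connected Machine agent on each server and running azcmagent connect. The sketch below uses bash-style line continuations and placeholder values throughout; substitute your own IDs and names.

azcmagent connect \
  --service-principal-id "<app-id>" \
  --service-principal-secret "<secret>" \
  --subscription-id "<subscription-id>" \
  --tenant-id "<tenant-id>" \
  --resource-group "<resource-group>" \
  --location "<region>"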

8. Strategic Benefits for Governance

Ultimately, Azure Arc provides numerous strategic benefits for modern enterprises. Specifically, organizations can leverage the following:

  • Governance and Compliance: Apply Azure Policy to ensure consistent configurations across all environments.

  • Enhanced Security: Moreover, use Defender for Cloud to detect threats and integrate vulnerability assessments.

  • DevOps Efficiency: Enable GitOps-based deployments for Kubernetes clusters.

9. Important Limitations to Consider

However, there are a few limitations to keep in mind before starting your deployment. First, continuous internet connectivity is required for full functionality. Secondly, some features may not be available for all operating systems. Finally, there are cost implications based on the data services and monitoring tools used.

10. Conclusion and Summary

In conclusion, Azure Arc empowers organizations to standardize and simplify operations across heterogeneous environments. Whether you are managing legacy infrastructure or edge devices, it brings everything under one governance model. Therefore, if you are looking to improve control and agility, Azure Arc is a tool worth exploring.

Introducing Microsoft Work IQ: The Intelligence Layer for Agents
https://blogs.perficient.com/2025/11/25/introducing-microsoft-work-iq-the-intelligence-layer-for-agents/

Microsoft Work IQ is a new AI-driven intelligence layer in Microsoft 365 that understands how your organization actually works – far beyond the org chart – and uses that knowledge to make Copilot and AI agents context-aware by default. Announced at Ignite 2025, Work IQ gives Copilot “brains,” turning raw workplace data into actionable understanding. In practical terms, it finds patterns and context in your enterprise data, so AI assistants deliver answers and actions as if they truly know your business. This is a game-changer for IT leaders looking to harness AI: it means your AI won’t just retrieve information, it will understand it in context.

What is Work IQ?

At its core, Work IQ is the intelligence layer that enables Microsoft 365 Copilot and agents to know you, your job, and your company inside and out. It continuously analyzes the rich signals in your digital workspace – emails, files, chats, meetings – and learns from how work gets done in your organization. Microsoft describes Work IQ in three parts:

  • Data: It connects to your work data in Microsoft 365 (SharePoint documents, Outlook emails, Teams meetings and chats, etc.), not as isolated files, but as a connected web of knowledge. Work IQ semantically indexes this content (understanding topics, intents, and projects) and captures business signals like relationships and timelines from it. In short, it codifies “how work gets done” from the daily flow of information.
  • Memory: Work IQ builds a persistent memory of preferences and patterns – your personal work habits, styles, and the network of colleagues you interact with most. This is sometimes called your “work chart,” as opposed to the formal org chart. For example, it learns your writing tone, recurrent tasks, and who your go-to collaborators are, regardless of who reports to whom. This lets it carry context across sessions and tailor responses to your way of working.
  • Inference: Finally, Work IQ uses inference to connect the dots between data and memory, turning raw information into insights and proactive assistance. It identifies patterns and relationships that might not be obvious – for instance, linking a chat mention of “Project Phoenix” to the related OneDrive folder and team members, or suggesting the next best action based on past similar projects. Work IQ essentially predicts needs and draws insights, going well beyond what any single API or connector can do in isolation.

Put simply, Work IQ maps the real flow of work in your company. It doesn’t just know the theoretical structure in an HR system – it knows who actually collaborates, what documents really matter for each project, how information moves across teams, and what context is relevant to the task at hand. It builds a living model of your organization’s workflows.

These are the kinds of insights Work IQ continuously curates to paint a holistic picture of your operational reality. That intelligence is built into Microsoft 365 Copilot today – it’s the same brain that makes Copilot’s answers feel enterprise-aware. Now, importantly, your own custom agents can tap into Work IQ as well. This means when you build an AI bot or automation for your organization, it can leverage that shared “work brain” to behave more like a smart teammate instead of a naive script.

Work IQ vs. Microsoft Graph: Data vs. Understanding

A common question is: How is Work IQ different from Microsoft Graph? After all, Microsoft Graph has long provided API access to mail, files, Teams, users, and more. The difference lies in raw data versus interpreted intelligence:

  • Microsoft Graph is essentially a rich data access layer – a unified API to query information from Microsoft 365 (emails, calendar events, documents, chat messages, directory info, etc.). You ask for data, and Graph returns exactly what you requested, but it’s up to you to make sense of it. Graph gives you the raw information (for example, a list of files or the text of an email), and as a developer you must build the logic around it (see the example after this list).
  • Work IQ is an intelligence layer built on top of that data. It leverages the data that Graph exposes, but adds a deep understanding of relationships, relevance, and context in that data. Instead of you writing code to figure out “who is working on what” or “which documents are important to this project,” Work IQ deduces that automatically by analyzing patterns. Work IQ gives you understanding – the meaning behind the data, not just the data itself.
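
To make the contrast concrete, the line below shows the kind of raw request Graph exposes. Everything beyond the returned JSON, such as which messages matter or which project they relate to, is left to the caller; the query parameters here are just an illustration.

GET https://graph.microsoft.com/v1.0/me/messages?$top=10&$select=subject,from,receivedDateTime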

In summary, Microsoft Graph is indispensable for accessing raw data, but Work IQ is what makes that data immediately useful for AI. The Graph pulls facts while Work IQ finds patterns and insights in those facts. This distinction is key: Work IQ is what elevates an AI assistant from a basic tool into a knowledgeable collaborator.

Why Work IQ Matters

Work IQ represents a strategic shift in how we build and deploy AI in the enterprise. Here are the key reasons it’s a big deal:

  • AI with your organization’s DNA: Because Work IQ continuously learns from your company’s data and interactions, it makes AI responses highly specific to your context. Copilot answers won’t be one-size-fits-all; they’ll reference your internal projects, priorities, and terminology appropriately. For example, ask Copilot for an “update on Project Phoenix” and, instead of a generic answer, it will leverage Work IQ to know who’s driving that project, pull recent updates from Teams, and summarize the relevant files, so users get more relevant, actionable insights and spend less time sifting through information.
  • Agents that act like teammates, not just tools: When your custom agents have Work IQ behind them, they gain a kind of common sense about the organization. They can anticipate needs and follow context in a human-like way. The goal is to have agents stop behaving like tools and start acting like teammates. For instance, an internal IT helpdesk bot with Work IQ could detect that a flurry of Teams messages and an email thread are all about the same incident and proactively alert the relevant engineer – a level of situational awareness that makes the bot feel like a proactive colleague, not a scripted Q&A bot.
  • Faster, easier development of AI solutions: From an IT leader or developer perspective, Work IQ removes a huge amount of grunt work. You no longer need to manually wire together data from multiple sources and painstakingly program the context for your bots. Microsoft has effectively packaged the context layer for you. This leads to faster development, less complicated prompts, less stitching of disparate APIs, and more out-of-the-box intelligence for any agent you build. In practice, that can cut down development cycles and let your team focus on higher-level logic instead of data plumbing. For example, a developer using Copilot Studio can drag in the Work IQ connection and immediately have their agent “know” the user’s recent meetings or team documents, without writing custom code to fetch and summarize those.
  • Built-in security and compliance: Work IQ is enterprise-ready by design. It respects all the existing permissions, sensitivity labels, and compliance rules on your data. Only information the user (or agent) is allowed to access will be surfaced, and it’s subject to audit and monitoring like the rest of Microsoft 365. For IT, this means you can trust Work IQ to handle corporate data responsibly. It’s not a rogue AI scraping everything – it’s operating within the governance framework you already manage. This distinction is key when enabling AI broadly in a company: Work IQ gives you intelligence and maintains the controls (something that pure large language models on external data don’t guarantee).

Real-World Applications and Examples

To make this more concrete, let’s look at how Work IQ can be applied in real scenarios that IT leaders care about:

  • Project Specific Copilot: Imagine your PMO builds a Project Copilot agent in Copilot Studio. The goal is to onboard new project team members quickly. With Work IQ, this agent can instantly gather all relevant knowledge for a project. It might say, “Hello, I’ve compiled the key documents for Project Phoenix and identified that Alice and Bob are the top collaborators on this initiative. Would you like a summary of recent progress updates from Teams?” This is possible because Work IQ already knows which documents are central to Project Phoenix and who has been driving the conversations. The new team member doesn’t have to hunt for information – the agent, powered by Work IQ, serves it up in context. This accelerates ramp-up and ensures consistency in what information people see.
  • Intelligent Helpdesk Bot: In your IT department, you could enhance a helpdesk chatbot (perhaps built with Copilot Studio) using Work IQ’s API. For example, an employee asks the bot a question about a system outage. A Work IQ-enabled bot could recognize, “This issue was discussed in an email thread yesterday and a Teams chat involves the network team”. It can then pull the pertinent info or even loop in the right expert automatically. Essentially, the bot understands the who and where of past incident knowledge. During Ignite, Microsoft showcased a Sales Development Agent that does something similar for sales – it pulls in context from CRM and internal comms to qualify a lead and suggest next steps. Your helpdesk bot can analogously use context to route and resolve IT tickets faster, by knowing what’s happened already across channels.
  • Enterprise App with Contextual AI: Microsoft is also weaving Work IQ into its own tools for creators. In fact, the new Copilot App Builder in Power Platform (announced at Ignite) uses Work IQ to inject organizational context into the apps people build. For example, if a business user creates a budget approval app with App Builder, Work IQ could enable the app’s AI assistant to automatically show related budget files or identify the manager who usually approves similar requests, without extra configuration. This means citizen developers can create smarter apps that “know” the workplace. As an IT lead, you can encourage adoption of such tools, confident that the intelligence layer (Work IQ) will make these solutions far more useful and integrated into daily work.

Each of these scenarios highlights a pattern: Work IQ provides situational awareness that was previously missing in our software. It brings the same kind of contextual understanding that a long-tenured employee might have (“Oh, I know exactly who to ask about this issue, and I recall a similar project from last year…”) directly into our apps and agents. That dramatically improves both the user experience and the effectiveness of AI automation.

Conclusion

Microsoft Work IQ is a cornerstone of the “frontier firm” vision – a company where AI is woven into every workflow with a rich understanding of the business. For IT leaders, Work IQ offers a path to operationalize AI at scale: you get the power of Microsoft’s Graph data plus an intelligence model trained on your organization’s nuances. The end result is AI that feels native to your enterprise. Copilot and custom agents become smarter, more helpful colleagues rather than blunt instruments. Work IQ allows AI to find insights in context, rather than just pulling disjointed data fragments.

By leveraging Work IQ, you enable your AI systems to “know” your business in ways that were previously only in employees’ heads. That translates to faster decisions, less reinventing the wheel, and a significant leap in productivity. In short, Work IQ turns enterprise AI from a cool gadget into a deeply integrated, competitive capability. It is the intelligence that will help your organization’s digital workforce act with the insight and awareness of a seasoned team member – which is exactly what we need for AI to truly drive the next wave of workplace transformation.

The Agentic Enterprise: Key Agent Announcements from Microsoft Ignite 2025
https://blogs.perficient.com/2025/11/25/the-agentic-enterprise-key-agent-announcements-from-microsoft-ignite-2025/

Microsoft Ignite 2025 marked a pivotal shift in enterprise AI strategy, introducing a new generation of autonomous agents and the governance tools needed to manage them responsibly. From sales and HR to IT and productivity, Microsoft’s announcements signal a future where AI agents are not just assistants—but active participants in business operations.

Key Ignite Announcements

New AI Agents: Expanding the Autonomous Workforce

Several new AI agents debuted at Ignite, each designed to automate and assist in specific business processes:

  • Sales Development Agent – a fully autonomous sales AI that researches prospects, qualifies leads, and engages in personalized outreach to grow the sales pipeline. It works around the clock to nurture leads (via emails or meeting scheduling) and can hand off hot prospects to human sellers when needed. Sales teams can scale outreach and ensure no lead is overlooked, driving revenue growth without proportional headcount increases. (Preview via the Frontier early access program in Dec 2025).
  • Agents in Microsoft Teams Channels – collaboration agents that live in Teams channels and can interact with third-party apps and other agents through the new Model Context Protocol (MCP). For example, a project team’s channel agent can automatically pull issue trackers from Jira and then schedule follow-up meetings based on the risks identified. Teams users get a proactive AI teammate that bridges data across tools and coordinates team tasks, improving productivity and cross-app workflows. (Now in Preview).
  • Workforce Insights, People, and Learning Agents – a trio of HR and employee experience agents powered by Microsoft’s Work IQ intelligence layer. The Workforce Insights Agent provides leaders with real-time analytics on team composition, skills, and attrition to inform data-driven HR decisions. The People Agent helps employees find colleagues by expertise or role and suggests the best ways to connect (e.g. highlighting shared projects). The Learning Agent delivers personalized micro-learning and upskilling content to each employee, tailored to their role and goals. These agents enhance workforce management and development – leadership can respond faster to organizational trends, and employees benefit from stronger internal networks and continuous skill growth. (Available in Preview via the Frontier program.)
  • IT Admin Agents (Teams and SharePoint) – new agents to assist IT administrators in managing Microsoft 365 environments. The Teams Admin Agent (preview) resides in the Teams Admin Center and can automate routine admin tasks like monitoring meeting quality or provisioning users, executing these workflows autonomously and securely. Meanwhile, the SharePoint Admin Agent (preview) helps govern SharePoint by monitoring for inactive or ownerless sites, overshared files, or permission sprawl, then applying policies or automatic fixes such as archiving sites or adjusting access rights.  These admin agents reduce IT workload and enforce best practices consistently – ensuring collaboration platforms stay well-configured, secure, and compliant without requiring constant manual oversight.

Microsoft also announced Office Copilot Agents for Word, Excel, and PowerPoint within Microsoft 365 Copilot chat, which can generate and format content in those apps based on user prompts. These content-creation agents, while not fully autonomous, help users produce high-quality documents, spreadsheets, and presentations more efficiently. They are available in early access for Copilot customers.

Governance Tools: Managing AI Agents with Confidence

Recognizing that deploying dozens or even hundreds of AI agents raises new oversight challenges, Microsoft introduced governance tools to help customers adopt agents safely and transparently:

  • Microsoft Agent 365“the control plane for agents” that extends Microsoft’s existing management infrastructure to cover AI agents. Agent 365 provides a unified dashboard for IT to register, monitor, and secure all agents in the organization. Its core features include an Agent Registry (an inventory of every agent, including those built in-house or by third parties), Access Control to limit what data/resources an agent can access (applying conditional access and least privilege principles), Visualization tools to map relationships between agents, people, and data and to watch agent behavior in real time, and built-in Security integration (with Microsoft Defender and Purview) to detect threats or data leaks involving agents. In short, Agent 365 lets organizations govern AI agents as rigorously as they govern human users, using familiar tools like Microsoft Entra ID and Purview that are now extended to agents. Agent 365 is available in early access (via the Frontier program in the Microsoft 365 admin center) for customers to start piloting now.
  • Microsoft Entra Agent ID – a new capability in the Entra identity suite that provides unique, first-class identities for AI agents. Just as every employee has a digital identity and login, now each agent can be issued an Entra Agent ID to authenticate itself and be assigned role-based access permissions. This brings Zero Trust security to AI agents: every agent’s access can be tightly governed (e.g. a finance-focused agent gets access only to finance data) and monitored via Entra’s conditional access and risk detection. If an agent behaves anomalously or is compromised, its credentials can be revoked immediately, just like for a human account.  Entra Agent ID ensures no “rogue” or unmanaged agents are operating; companies get full control over what each agent is allowed to do, reducing the risk of data leaks or unauthorized actions by AI. (Introduced at Ignite 2025; in preview as part of the Agent 365 ecosystem.)
  • Microsoft Purview Extensions for AI – enhancements in Microsoft Purview (the data governance and compliance suite) to cover AI-generated content and agent activities. Data Loss Prevention (DLP) policies in Purview now apply to interactions with Copilots and agents, preventing sensitive information from being disclosed by an AI. For example, if an internal user asks an agent a question that would output confidential data, Purview can block or mask that response. Additionally, Purview’s Data Security Posture Management (DSPM) can now discover and assess all AI agents running in the environment (including third-party agents) and flag any that pose compliance risks. Audit logging and eDiscovery are extended to agent actions, so every decision an agent makes can be traced for compliance and analysis. Organizations can embrace AI automation while maintaining their compliance obligations and security safeguards. The same oversight used for user actions (DLP, audit logs, risk management) will automatically cover AI agent actions, which is critical for industries with strict regulatory requirements. (Purview’s AI governance features began rolling out at Ignite in preview form.)
  • Foundry Control Plane – for companies developing their own AI solutions, Azure’s Foundry platform introduced a control plane paralleling Agent 365’s capabilities. It allows development and ops teams to set policies, monitor performance, and manage costs for custom-built agents across their lifecycle. By using the Foundry control plane, even AI agents created with open-source tools or non-Microsoft frameworks can be brought under a unified governance umbrella.  This ensures that custom AI projects don’t become a governance blind spot – they too can be centrally managed for security and compliance from day one, making enterprise AI portfolios more coherent and controlled.

Impact

The Ignite 2025 announcements underscore a dual message: significant productivity gains are now within reach through AI agents, and Microsoft is delivering the controls to deploy these agents responsibly. The potential benefits include:

  • Boosted Productivity and Automation: The new agents can handle labor-intensive tasks – from scouring CRM systems and sending outreach emails (Sales Agent) to auto-monitoring IT systems (Admin Agents) – which frees up employees to focus on higher-value strategic work. Early adopters can expect faster cycle times (e.g. quicker lead follow-ups, faster issue resolution) and extended service availability (agents working 24/7).
  • Improved Employee and Customer Experiences: AI agents embedded in everyday workflows mean employees have on-demand assistance. Projects move faster when a Teams channel agent can gather data or schedule meetings automatically. Employees get personalized support in learning and finding information via the People and Learning agents. Customers, in turn, benefit from more responsive service (since AI can help address their needs instantly or outside of business hours). Overall, these agents promise more proactive, responsive operations in many areas of the business.
  • Enterprise-Grade Trust and Control: Perhaps most crucially, Microsoft’s focus on governance provides IT leaders and compliance officers the confidence to scale AI usage safely. Features like Agent 365 and Entra Agent ID mean that introducing an army of AI agents won’t result in loss of visibility or unchecked access to sensitive data. Every agent is accounted for, governed, and subject to security and compliance rules. This lowers the barrier to adoption because organizations can enforce their existing security policies on AI agents just as they do for employees, preventing the kind of “shadow AI” chaos that uncontrolled agents might cause.

Microsoft Ignite 2025 marked a clear shift from AI as a mere assistant to AI as a full-fledged workforce layer. Microsoft unveiled a unified agent ecosystem across Microsoft 365, Windows, and Azure, centered on Agent 365, a control plane for registering, securing, and managing agents with Entra-issued IDs. New features include Work IQ for personalized agent recommendations, dedicated Office and industry-specific agents, and Windows’ native agent infrastructure for secure integration. The message was clear: the future of work is agent-powered, but trust, compliance, and control must be built in from the start.


Table: Key Announcements on AI Agents and Governance (Ignite 2025)

Each entry below lists the feature or tool, what it does, its impact, and its availability.

  • Microsoft Agent 365
    Description: Central command center for AI agents – provides a registry of all agents, access controls, real-time monitoring dashboards, and integrates security/compliance tools (Defender, Entra, Purview) for agents.
    Impact: Enables IT to manage and secure AI agents at scale just like user accounts. Increases trust by preventing unmanaged “shadow” agents and enforcing consistent policies (identity, data protection) across all AI-driven processes.
    Availability: Early access preview (available now via the Frontier program in the M365 admin center).
  • Microsoft Entra Agent ID
    Description: New identity management for AI agents – assigns each agent a unique Entra ID identity and credentials, with full support for Conditional Access and audit logging of agent sign-ins. Extends Zero Trust security to autonomous agents.
    Impact: Tight access control for agents: every agent operates under a known identity and role, so companies can apply least-privilege access and instantly revoke or adjust an agent’s permissions if needed. Builds trust that agents will only reach the data they’re authorized to use.
    Availability: Preview (introduced at Ignite; part of Entra updates rolling out in late 2025).
  • Sales Development Agent
    Description: AI sales representative that autonomously researches prospects, crafts outreach emails, follows up with leads, and hands off interested customers to human sellers. Integrates with CRM systems (Dynamics 365, Salesforce) and works within Outlook/Teams to drive pipeline.
    Impact: Scales up sales capacity by ensuring every lead is engaged promptly and persistently. Sales teams can convert more leads without adding staff, as routine prospecting and follow-ups are handled by the agent (with consistency and no downtime).
    Availability: Frontier preview (available for participants to test in Dec 2025).
  • Teams Channel AI Agents
    Description: Intelligent agents embedded in Microsoft Teams channels that can collaborate with users and connect to third-party apps via MCP (Model Context Protocol). They can aggregate data from other services (e.g. project trackers, DevOps tools) and initiate actions like scheduling meetings or updating tasks.
    Impact: Enhances team collaboration by acting as a smart coordinator: the agent surfaces information from across the toolchain into Teams and automates cross-app steps. Teams become more productive as the agent reduces the need to manually check different apps or remember follow-ups.
    Availability: Preview (new capability in Microsoft Teams, announced at Ignite 2025).
  • Workforce Insights & HR Agents
    Description: A set of Work IQ-powered agents for HR: Workforce Insights Agent (real-time org analytics for leaders), People Agent (find colleagues by skill/role and suggest connections), and Learning Agent (personalized training and upskilling content).
    Impact: Data-driven people management and development. Leaders gain immediate insight into workforce composition and trends for better planning. Employees can more easily network internally and get targeted learning resources, leading to a more connected and skilled workforce.
    Availability: Preview (available via the Frontier program as of Ignite 2025).
  • Teams & SharePoint Admin Agents
    Description: IT administration agents for Microsoft 365: one in the Teams Admin Center to automate tasks like user provisioning and system monitoring; another in the SharePoint Admin Center to audit and fix site issues (inactive sites, oversharing, permission drift) via AI.
    Impact: Always-on IT assistance that improves governance. Routine admin tasks are handled consistently and faster, reducing IT effort and human error. These agents also proactively enforce policies (e.g. cleaning up unused sites or tightening permissions), which strengthens security/compliance across collaboration platforms.
    Availability: Preview (both announced in preview at Ignite 2025).
  • Microsoft Purview AI Governance
    Description: Purview compliance features for AI – extended DLP policies to monitor and block sensitive data in AI prompts or outputs; Purview’s DSPM now inventories all AI agents and assesses their risk posture; audit trails cover AI agent activities for eDiscovery and oversight.
    Impact: Maintains compliance and security in an AI-driven environment. Companies can trust that adopting AI agents won’t lead to data leaks or compliance violations, because existing data protection rules automatically apply. Every action by an agent is logged and auditable, which is crucial for industries with strict regulations.
    Availability: Preview / rolling out (announced at Ignite; incremental rollout through late 2025 into 2026 for various Purview enhancements).

 

See Perficient’s Amarender Peddamalku at the Microsoft 365, Power Platform & Copilot Conference (Oct 23, 2025) https://blogs.perficient.com/2025/10/23/see-perficients-amarender-peddamalku-at-the-microsoft-365-power-platform-copilot-conference/

As the year wraps up, so does an incredible run of conferences spotlighting the best in Microsoft 365, Power Platform, and Copilot innovation. We’re thrilled to share that Amarender Peddamalku, Microsoft MVP and Practice Lead for Microsoft Modern Work at Perficient, will be speaking at the Microsoft 365, Power Platform & Copilot Conference in Dallas, November 3–7.

Amarender has been a featured speaker at every TechCon365, DataCon, and PWRCon event this year—and Dallas marks the final stop on this year’s tour. If you’ve missed him before, now’s your chance to catch his insights live!

With over 15 years of experience in Microsoft technologies and a deep focus on Power Platform, SharePoint, and employee experience, Amarender brings practical, hands-on expertise to every session. Here’s where you can find him in Dallas:

Workshops & Sessions

  • Power Automate Bootcamp: From Basics to Brilliance
    Mon, Nov 3 | 9:00 AM – 5:00 PM | Room G6
    A full-day, hands-on workshop for Power Automate beginners.

 

  • Power Automate Multi-Stage Approval Workflows
    Tue, Nov 4 | 9:00 AM – 5:00 PM | Room G2
    Wed, Nov 5 | 3:50 PM – 5:00 PM | Room G6
    Learn how to build dynamic, enterprise-ready approval workflows.

 

  • Ask the Experts
    Wed, Nov 5 | 12:50 PM – 2:00 PM | Expo Hall
    Bring your questions and get real-time answers from Amarender and other experts.

 

  • Build External-Facing Websites Using Power Pages
    Thu, Nov 6 | 1:00 PM – 2:10 PM | Room D
    Discover how to create secure, low-code websites with Power Pages.

 

  • Automate Content Processing Using AI & SharePoint Premium
    Thu, Nov 6 | 4:20 PM – 5:30 PM | Room G6
    Explore how AI and SharePoint Premium (formerly Syntex) can transform content into knowledge.

 

Whether you’re just getting started with Power Platform or looking to scale your automation strategy, Amarender’s sessions will leave you inspired and equipped to take action.

Register now!

Datadog Synthetic Monitoring Integration with Azure DevOps Pipeline for Sitecore (Oct 23, 2025) https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/

Datadog Synthetic Monitoring provides automated, simulated user journeys to proactively confirm the health and performance of websites and APIs, helping detect issues before users experience them. Integrating this into our Azure DevOps pipeline ensures that only builds where core site functionality is verified get promoted, reducing the risk of regressions in production. This approach is especially valuable in Sitecore projects, where critical web journeys and API endpoints are essential to user experience.

Why Use This Approach?

  • Immediate feedback: Failing releases are blocked before merging, saving post-release firefighting.
  • Coverage: Synthetic tests simulate real browser actions and API calls over real user flows.
  • Reliability: Automated testing delivers consistent, repeatable validation without manual steps.
  • Visibility: Results are unified within both Datadog and Azure DevOps for full traceability.
  • Scalability: As Sitecore projects grow, synthetic tests can be expanded to cover new endpoints and user scenarios without significant pipeline changes.
  • Environment parity: Tests can be run against staging, UAT, or pre-production environments before the live rollouts for safer releases.

Prerequisites

  • Active Datadog account with Synthetic Monitoring enabled.
  • Datadog API and Application keys created with the appropriate access scope.
  • Azure DevOps project with a working YAML-based CI/CD pipeline.
  • Secure variable storage in Azure DevOps (e.g., Variable Groups, Secret Variables) for credentials.
  • Stable and accessible endpoint URLs for Sitecore environment(s) under test.

High-Level Integration Process

1. Datadog Synthetic Test Creation

  • Create Browser and/or HTTP Synthetic Tests in Datadog tailored for key Sitecore application flows, such as:
    • Homepage load and rendering
    • Login flow and user dashboard navigation
    • Core API calls (search, content retrieval)
    • Critical commerce or form submissions
  • Use relevant tags (e.g., premerge) for search/query filtering by the CI pipeline.
  • Configure assertions to confirm critical elements:
    • Content correctness
    • HTTP status codes
    • Redirect targets
    • Response time SLAs
  • Validate tests in Datadog’s UI with multiple runs before pipeline integration.

[Screenshot: Datadog dashboard listing the configured synthetic tests]

2. Azure DevOps Pipeline Configuration

The Azure DevOps YAML pipeline is set up to invoke Datadog CI, run all tests matching our tag criteria, and fail the pipeline if any test fails. A minimal YAML sketch of these steps follows the list below.

Key Pipeline Steps

  • Install Datadog CI binary: Downloads and installs the CLI in the build agent.
  • Run Synthetic Tests: Uses the environment variables and search tags to pick synthetic tests (e.g., all with type: browser and tag: premerge) and runs them directly.
  • JUnit Reporting & Artifacts: The CLI output is saved, and a JUnit-formatted result file is generated for Azure DevOps’ Tests UI. All test outputs are attached as build artifacts.
  • Conditional Fast-forward Merge: The pipeline proceeds to a gated merge to release/production only if all synthetics pass.
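
Below is a minimal sketch of these steps in Azure DevOps YAML, assuming secret pipeline variables named DatadogApiKey and DatadogAppKey and synthetic tests tagged premerge; adjust names, tags, and paths to your environment.

steps:
  # Install the Datadog CI binary on the build agent
  - script: npm install -g @datadog/datadog-ci
    displayName: Install Datadog CI

  # Run every synthetic test tagged "premerge" and write a JUnit result file
  - script: >
      datadog-ci synthetics run-tests
      --search 'tag:premerge'
      --jUnitReport '$(Build.ArtifactStagingDirectory)/synthetics-junit.xml'
    displayName: Run Datadog synthetic tests
    env:
      DATADOG_API_KEY: $(DatadogApiKey)  # secret variable (assumed name)
      DATADOG_APP_KEY: $(DatadogAppKey)  # secret variable (assumed name)
      DATADOG_SITE: datadoghq.com        # set to your Datadog site, e.g. datadoghq.eu

  # Publish results to the Tests tab even if the synthetics step failed,
  # so approvers can inspect failures before the gated merge
  - task: PublishTestResults@2
    condition: succeededOrFailed()
    inputs:
      testResultsFormat: JUnit
      testResultsFiles: $(Build.ArtifactStagingDirectory)/synthetics-junit.xml

Because datadog-ci exits with a non-zero code when any test fails, the script step fails the stage on its own; no extra pass/fail logic is needed.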

How Results and Flow Work

When All Tests Pass

  • The pipeline completes the Premerge_Datadog_Synthetics stage successfully.
  • Test summaries (JUnit) and CLI outputs are attached to the pipeline run.
  • Approval-gated merge to the Release branch is unblocked; approvers can verify test results before promotion.

Build artifacts include full logs for further review.

[Screenshot: Azure DevOps pipeline run with all synthetic tests passing]

When Any Test Fails

  • If any synthetic (browser/API) test fails, the CLI exits with a non-zero exit code.
  • The JUnit summary will contain failure info and a link to the log details.
  • The pipeline stage fails (Premerge_Datadog_Synthetics), halting the fast-forward merge.
  • Approvers can review the failure in test results and attached artifacts within Azure DevOps.

Only successful resolution and green reruns allow code promotion.
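
To make that gate concrete, here is a hedged sketch of the stage wiring (stage, job, environment, and branch names are hypothetical): the merge stage depends on the synthetics stage, so Azure DevOps skips it automatically whenever a test fails.

stages:
  - stage: Premerge_Datadog_Synthetics
    jobs:
      - job: RunSynthetics
        steps: []  # the install/run/publish steps from the earlier sketch go here

  - stage: FastForward_Merge
    dependsOn: Premerge_Datadog_Synthetics
    condition: succeeded()           # never runs if any synthetic test failed
    jobs:
      - deployment: MergeToRelease
        environment: release-gate    # environment configured with a manual approval check
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                  persistCredentials: true
                - script: |
                    git fetch origin release
                    git checkout release
                    git merge --ff-only $(Build.SourceVersion)
                    git push origin release
                  displayName: Fast-forward merge to release

Pairing condition: succeeded() with an approval-gated environment gives both an automated gate (green synthetics) and a human gate (release approvers).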

[Screenshot: Azure DevOps pipeline run failing at the synthetics stage]

Best Practices for Datadog Synthetic Monitoring

  • Run tests in parallel to reduce wait times.
  • Use separate synthetic tests per microservice or major Sitecore area to isolate failures.
  • Monitor test trends in Datadog to detect gradual performance regression over time.
  • Limit sensitive data in synthetic flows by avoiding the storage of actual credentials.
  • Schedule periodic synthetic runs outside CI/CD to catch environment fluctuations unrelated to code changes.

Security Considerations

  • Store Datadog keys as secret variables in Azure DevOps.
  • Restrict permissions for synthetic test management to trusted CI/CD admins.
  • Avoid embedding credentials or sensitive payloads in test scripts.
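
As an illustration, the keys might live in a variable group of secret variables (group and variable names are assumed here) and be mapped explicitly into the step environment, so Azure DevOps masks them in logs and they never appear in the YAML itself:

variables:
  - group: datadog-secrets  # hypothetical variable group holding the Datadog keys

steps:
  - script: datadog-ci synthetics run-tests --search 'tag:premerge'
    displayName: Run synthetics with masked credentials
    env:
      # Secret variables are not exposed to scripts unless mapped explicitly
      DATADOG_API_KEY: $(DatadogApiKey)
      DATADOG_APP_KEY: $(DatadogAppKey)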

Conclusion

By integrating Datadog Synthetic Monitoring directly into the Azure DevOps CI/CD pipeline, Sitecore teams gain a safety net that blocks faulty builds before they hit production, while keeping a detailed audit trail. Combined with careful test design, secure key management, and continuous expansion of coverage, this approach becomes a cornerstone of proactive web application quality assurance.

 

Perficient at Microsoft Ignite 2025: Let’s Talk AI Strategy (Oct 21, 2025) https://blogs.perficient.com/2025/10/21/perficient-at-microsoft-ignite-2025-lets-talk-ai-strategy/

Microsoft Ignite 2025 is right around the corner—and Perficient is showing up with purpose and a plan to help you unlock real results with AI.

As a proud member of Microsoft’s Inner Circle for AI Business Solutions, we’re at the forefront of helping organizations accelerate their AI transformation. Whether you’re exploring custom copilots, modernizing your data estate, or building secure, responsible AI solutions, our team is ready to meet you where you are—and help you get where you want to go.

Here’s where you can connect with us during Ignite:

Join Us for Happy Hour
Unwind and connect with peers, Microsoft leaders, and the Perficient team at our exclusive happy hour just steps from the Moscone Center.
📍 Fogo de Chão | 🗓 November 17 | 🕔 6:00–9:00 PM
Reach out to our team to get registered!

 

Book a Strategy Session
Need a quiet space to talk AI strategy? We’ve secured a private meeting space across from the venue—perfect for 1:1 conversations about your AI roadmap.
📍 Ember Lounge — 201 3rd St, 8th floor, Suite 8016 | 🗓 November 18-20
Reserve Your Time

 

From copilots to cloud modernization, we’re helping clients across industries turn AI potential into measurable impact. Let’s connect at Ignite and explore what’s possible.

What is Microsoft Copilot Studio? (Oct 14, 2025) https://blogs.perficient.com/2025/10/14/what-is-microsoft-copilot-studio/

Microsoft Copilot Studio is a low-code/no-code workspace from Microsoft that helps you build, customize, and manage AI-powered assistants (called copilots or agents) for your organization. Think of it as a visual toolkit that connects large language models (LLMs) to your data sources (SharePoint, OneDrive, Dataverse, etc.), custom logic, and UI behaviors, enabling you to create helpful assistants that understand your company context and answer real workplace questions.

Why It Matters (in Plain Layman Terms)

Copilot Studio lets non-AI experts create assistants that do real work — summarize documents, answer policy questions, extract data, or route requests — without writing massive amounts of code. For beginners, it’s a gateway to real-world AI: you don’t need to manage models or infrastructure; you focus on prompts, connectors, and experience.


Key Uses and Benefits of Copilot Studio

  • Create knowledge assistants that read your SharePoint/OneDrive files and answer employee questions.
  • Build task-oriented agents that can schedule meetings, draft emails, or generate reports.
  • Customize tone and behavior (persona) so the Copilot matches your team’s voice.
  • Control security and data access through Microsoft 365 connectors and policies.
  • Rapidly prototype and test in a playground, then publish to users.

A Beginner-Friendly Guide to Creating Your First Agent in Microsoft Copilot Studio

Prerequisites

  • A Microsoft 365 account with admin/user access where Copilot is enabled.
  • Appropriate licensing that includes Copilot or Copilot Studio access.
  • Basic familiarity with SharePoint/OneDrive if you want to connect document data.
  • Browser access to Copilot Studio (works best in Chrome/Edge).

Step 1 — Open Copilot Studio

  1. Go to the Copilot Studio site for your tenant (copilotstudio.microsoft.com), or sign into Microsoft 365 and navigate to Copilot Studio from the apps list.
  2. If it’s your first time, the studio may prompt permission; approve the necessary consent for your account.

Step 2 — Start a New Agent (Copilot)

  1. Click “Create” or “New Copilot/Agent”.
  2. Enter a name (e.g., “Sales Helper”) and a short description of what it should do.
  3. Optionally upload an icon or image to help users recognize it.

Step 3 — Define the Copilot’s Persona and Behavior

  1. Choose a persona/tone: professional, friendly, concise, etc.
  2. Create a few example prompts or instruction templates that guide how the Copilot responds (e.g., “Summarize this document into 3 bullet points focused on action items.”).
  3. Set limits for response length and determine whether to ask clarifying questions.

Step 4 — Connect Data Sources (Skills/Connectors)

  1. Click “Add Connector” or “Add Skill”.
  2. Choose from built-in connectors like SharePoint, OneDrive, Teams, Dataverse, or custom connectors.
  3. Authenticate the connector and select which sites/folders or tables the agent can access.
  4. Optionally configure indexing or metadata settings to enable the Copilot to find relevant content quickly.

Step 5 — Add Actions and Tools (Optional)

  1. Add any task-specific tools, such as calendar access, email drafting tools, or approvals.
  2. Map triggers (for example, if the user asks “create meeting,” the Copilot can open a meeting draft).
  3. If you have developer resources, you can add custom actions via low-code flows (Power Automate) or APIs; one common route is registering a custom connector, as sketched below.
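
For the API route, Power Platform custom connectors are described with an OpenAPI 2.0 (Swagger) definition. The snippet below is a minimal, hypothetical sketch (the host and operation are invented for illustration) of an API an agent action could call to look up free meeting rooms:

swagger: '2.0'
info:
  title: Meeting Rooms API  # hypothetical internal API
  version: '1.0'
host: rooms.contoso.example  # replace with your API's host
basePath: /v1
schemes:
  - https
paths:
  /rooms/available:
    get:
      operationId: GetAvailableRooms
      summary: List meeting rooms that are free in a given time window
      parameters:
        - name: start
          in: query
          required: true
          type: string  # ISO 8601 date-time
        - name: end
          in: query
          required: true
          type: string  # ISO 8601 date-time
      responses:
        '200':
          description: Rooms free between start and end

Once registered as a custom connector, an operation like GetAvailableRooms can be surfaced to the agent as an action it may invoke on a user's behalf.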

Step 6 — Test in the Playground

  1. Use the built-in test/playground area to ask sample questions.
  2. Try different prompts and tweak persona or data scope based on the results.
  3. Check for hallucinations or wrong data access; adjust the connector or prompt controls.

Step 7 — Set Security and Governance

  1. Configure user access to determine who can use and edit this Copilot.
  2. Define data usage settings and retention as per your org policies.
  3. Enable logging and monitoring to audit queries and outputs.

Step 8 — Publish and Share

  1. When satisfied, click “Publish” or “Deploy.”
  2. Share the agent with users or teams (via Teams, SharePoint, or a link).
  3. Collect user feedback and iterate—improvements are usually quick and ongoing.

Simple Prompt Examples to Get Useful Answers

  • “Summarize the attached policy into three key action points for frontline staff.”
  • “Find mentions of ‘budget’ in these documents and list the related amounts and dates.”
  • “Draft a 150-word email to schedule a follow-up meeting next Wednesday.”

Common Beginner Mistakes and Tips

  • Too broad data access: limit connectors to only the sites/folders needed.
  • Vague prompts: provide examples and templates to help the assistant understand the expected format.
  • Skipping governance: always set clear permissions and logging for sensitive data.
  • Expect iterative improvement: start small, test often, and update prompts and connectors.

Conclusion

Microsoft Copilot Studio makes creating AI assistants approachable even if you’re new to AI. It combines model power with your company data, connectors, and low-code tooling so you can craft copilots that actually solve workplace problems. Start with a small, focused agent (one data source, one use case), test it in the playground, tighten security, and expand from there. With the official docs and community articles as guides, you’ll be iterating on and improving helpful assistants in no time.

Further Learning Links
