AI and the Non-Technical Worker
https://blogs.perficient.com/2025/10/08/ai-and-the-non-technical-worker/ | Wed, 08 Oct 2025

If you’d told me twenty years ago that I’d be working in the technology industry, I would have laughed. My background? Art, history, and social services—not exactly the usual path to tech. I still sew and do needlework, and I miss the days when you had to check an answering machine or hunt down a pay phone. Honestly, I thrived without knowing what my second cousin, three times removed, had for breakfast this morning. Sometimes, I feel like the old guy on the corner yelling, “get off my lawn!”

But life has a way of surprising us. My career took a hard left turn and landed me in the world of technology. At first, I thought, “No problem. My skills—problem solving, analysis, and communication—are still useful.” I didn’t need to dive too deep into the tech pool; basic Excel and PowerPoint were enough. Fourteen years later, I’m proof that a non-technical person can survive (and even thrive) in a technology company.

Then came 2025. Our new CEO announced that we were going to be an AI-First company. We were not only encouraged, but expected to use AI in our day-to-day work. Cue the mini panic attack. My mind raced: What part of my job could I hand over to AI? How was I supposed to figure this out?

The lowest hanging fruit was my semi-annual reports. But I actually enjoy making those! Me and my trusty adding machine (yes, I still use it) making charts and graphs, discovering trends, and recapping the year—it’s something I look forward to. I didn’t want to give that away to my computer.

But change was coming, whether I liked it or not. So, I took a deep breath and got on board. I completed every training offered and sat down with a coworker who was genuinely excited about AI. Together, we made a plan and found ways to integrate AI into my workflow. Anytime someone said, “You can use Copilot for that,” I’d ask how. Gradually, my panic eased and I started to see the possibilities.

Now, I use Copilot to summarize long technical documents in plain language, generate images for presentations, and organize meeting notes. I’m still not ready to give up my trusty adding machine or hand over my reports, but I’m finding other ways to use these tools to make my life easier.

Most people see AI as something that gives them more time in their day. I used to think it would take away the parts of my job I enjoyed. Turns out, I was mistaken. If you’re like me—afraid of adopting new technology and unsure how it can help—take a deep breath and start small. Try it out and see if it really does save you time. And if you haven’t asked Copilot to rewrite an email as if it were from a pirate yet, you’re missing out! Not every use case works for everyone, but everyone can find a use case.

So here’s my advice:
You don’t have to be an expert to benefit from AI. Start with one small task, ask questions, and let curiosity lead the way. You might be surprised at how much easier your work becomes—and you might even have a little fun along the way.

PS… I was able to use this post as a learning opportunity. I asked Copilot to review my article and make suggestions, most of which made it into the final draft. Another small step.

IDC ServiceScape for Microsoft Power Apps Low-Code/No-Code Custom Application Development Services
https://blogs.perficient.com/2025/09/24/idc-servicescape-for-microsoft-power-apps-low-code-no-code-custom-application-development-services/ | Wed, 24 Sep 2025

Perficient is proud to be included in the IDC ServiceScape: Worldwide and U.S. Microsoft Power Apps Low-Code/No-Code Custom Application Development Services, 2025 (Doc# US53748825 September 2025) report. We believe this inclusion highlights our commitment to helping enterprises accelerate innovation and streamline development through Microsoft Power Platform.

This IDC ServiceScape offers a comprehensive guide on the key capabilities of custom application development service providers using the Microsoft Power Apps low-code/no-code development platform, featuring services from companies including Perficient. The status of each service capability is categorized as fully supported, partially supported, partner provided, road map, or not supported, aiding services buyers in quickly identifying which vendors align with their changing requirements.


Powering Innovation with Microsoft Power Platform

As digital transformation accelerates, low-code/no-code platforms are becoming essential tools for agility and innovation. Perficient is proud to be included in this evolving landscape and remains committed to delivering solutions that drive real business outcomes.

“Our approach to Power Platform is rooted in strategy, scale, and speed. We’re not just building apps—we’re enabling transformation. By combining governance frameworks, multi-shore delivery, and AI-powered experiences, we help clients unlock the full potential of low-code development and drive meaningful business outcomes.” – Eric Schmitt, Director of Microsoft Business Applications.

Perficient offers a comprehensive suite of low-code/no-code services designed to accelerate transformation:

  • App modernization programs to migrate legacy systems to Power Platform
  • Intelligent automation and rapid RPA migration (e.g., UiPath to Power Automate Desktop)
  • Custom Copilot envisioning workshops and enterprise app development
  • Governance engagements including CoE setup and citizen developer enablement
  • Process mining and lifecycle management
  • Multi-shore delivery models with agile development pods and managed services

We’re also increasing investment in Copilot Studio Agents, helping clients build custom functionality and deploy agents within Power Platform environments. Our governance frameworks ensure scalable, secure adoption—whether you’re enabling citizen developers or launching enterprise-wide automation programs.

Learn more about our Power Platform capabilities: Power Platform / Perficient


Perficient is a global digital consultancy with over 7,000 colleagues worldwide, operating as one unified team across North America, LATAM, and India. With deep expertise in industries like Healthcare & Life Sciences, Manufacturing, and Automotive, we deliver strategic technology solutions that drive measurable outcomes. Our Microsoft practice is backed by more than 25 years of experience and over 250 certified cloud consultants, with strong capabilities in Azure, M365, Dynamics CRM, and Power Platform.

Perficient differentiates through global delivery, scalability, and robust governance. With 95% of our business coming from repeat clients, we’re proud to be a trusted partner in building AI-first, low-code solutions that deliver real business value.

Ready to move from ambition to impact? Let’s define your low-code strategy and build the foundation to lead what’s next.

What is Microsoft Copilot?
https://blogs.perficient.com/2025/09/16/what-is-microsoft-copilot/ | Tue, 16 Sep 2025

Microsoft Copilot is an AI-powered assistant embedded across the Microsoft 365 ecosystem, designed to enhance productivity, streamline workflows, and empower users with intelligent automation. Built on Large Language Models (LLMs) like GPT-4 and GPT-5, and tightly integrated with Microsoft Graph, Copilot transforms how professionals interact with tools like Word, Excel, PowerPoint, Outlook, and Teams.


Microsoft Copilot is an LLM-powered AI assistant by Microsoft, similar to OpenAI’s ChatGPT. Under the Copilot brand, Microsoft has released a variety of products. Here’s a timeline of key releases:

[Figure: timeline of key Microsoft Copilot releases]

Core Features of Microsoft Copilot

Copilot offers a wide range of capabilities. Specifically, it supports:

  • Data Analysis: Analyzes large datasets, identifies trends, and generates insights.
  • Document Creation: Drafts reports, emails, and presentations using natural language prompts.
  • Project Management: Tracks tasks, schedules meetings, and summarizes conversations.
  • Workflow Automation: Automates repetitive tasks like data entry and report generation.
  • Communication: Summarizes emails, drafts responses, and manages inboxes.
  • Security & Privacy: Honors Conditional Access, MFA, and data boundaries.

These features span across multiple Microsoft 365 apps, making Copilot a versatile productivity tool.

How Microsoft Copilot Works as an AI tool


At its core, Copilot combines:

  • Large Language Models (LLMs) from OpenAI
    • It orchestrates large language models (like GPT-4/GPT-5 via Azure OpenAI) to understand, generate, and summarize content in context.
  • Microsoft Graph for contextual enterprise data
    • Copilot uses Microsoft Graph and semantic indexing—adding metadata and vector embeddings to content—to enhance intelligent retrieval when generating responses.
  • Natural Language Processing (NLP) for understanding user intent
    • Uses plain language prompts to perform complex tasks
  • Microsoft 365 APIs for secure integration

Microsoft Copilot Workflow

[Figure: Microsoft Copilot workflow]

Security & Compliance

  • Data Access: Copilot only accesses data that the user is authorized to view.
  • Encryption: All data is encrypted in transit.
  • MFA & Conditional Access: Fully supported for enterprise-grade security.

 Advantages of Microsoft Copilot

Copilot delivers several benefits:

  • Automation: Reduces manual tasks like writing and formatting.
  • Data Insights: Analyzes trends and creates visualizations.
  • Contextual Intelligence: Uses enterprise data for tailored responses.
  • Time Savings: Speeds up routine work and decision-making.
  • Security: Honors user permissions and governance policies.

As a result, organizations can achieve measurable productivity gains.

Day-to-Day Uses in the Software Industry

For Developers

  • Code Suggestions: GitHub Copilot offers real-time code completions.
  • Documentation: Drafts technical documentation in Word.
  • Data Analysis: Uses Excel + Python for forecasting and modeling.
  • Meeting Summaries: Teams Copilot summarizes stand-ups and action items.

For Project Managers

  • Task Tracking: Automates task updates and reminders.
  • Presentation Creation: Builds stakeholder decks from project data.
  • Email Drafting: Summarizes threads and drafts follow-ups.

For QA/Testers

  • Bug Reporting: Drafts structured bug reports.
  • Test Case Generation: Suggests test scenarios based on requirements.

Industry Applications

  • Retail: Optimize shift scheduling and inventory
  • Finance: Automate reporting and investment analysis
  • Healthcare: Streamline clinical documentation
  • Education: Personalize learning and automate grading
  • Government: Draft public communications and manage budgets


Business Impact of Using Microsoft Copilot

1. Boosted Productivity

Automates repetitive tasks like drafting emails, generating summaries, or designing slides—freeing up more time for strategic work.

2. Time Savings

  • UK civil servants saved approximately 26 minutes daily (entry users averaged 37 minutes).
  • TAL insurer saved up to 6 hours a week per employee.

3. Creativity & Quality Enhancements

Helps generate polished content, insightful visual designs, and improves presentation efficacy.

4. Seamless Integration

Works natively within existing Microsoft tools, reducing learning curves and ensuring smooth adoption.

5. Executive Workflow Optimization

Satya Nadella uses GPT-5–powered Copilot for prompts like meeting prep, project updates, time categorization, and decision analysis.

Scaling Copilot—Building Intelligence for Your Organization

1. Copilot Connectors

Ingest data from ERP, CRM, and internal systems to enrich Copilot’s knowledge base, letting it reason over broader, organization-specific content, while respecting access controls.

2. Copilot Agents

Build or use prebuilt AI agents to automate workflows—e.g., employee onboarding, sales leads creation, IT requests—right from within Copilot.

Agents can:

  • Take real-time actions (database updates, triggering flows)
  • Tailor automation to your context

3. Copilot APIs

Use the Copilot Retrieval API to programmatically access enterprise data indexes and integrate AI capabilities into custom applications or workflows, with full compliance and governance.
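
As a purely illustrative sketch, calling a retrieval-style endpoint from .NET could look like the following. The endpoint URL and payload shape below are assumptions, not the documented contract; consult the official Copilot API reference for the actual request and response formats.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class CopilotRetrievalSample
{
    public static async Task QueryAsync(string accessToken)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // Hypothetical endpoint and payload shape -- verify against the official Copilot API documentation.
        var payload = "{\"queryString\": \"Q3 sales summary\"}";
        using var content = new StringContent(payload, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await http.PostAsync(
            "https://graph.microsoft.com/beta/copilot/retrieval", // assumed endpoint
            content);

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}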


How to Integrate Microsoft Copilot into Microsoft 365

Prerequisites

  • Active Microsoft 365 subscription (E3/E5 or Business Standard/Premium).
  • Microsoft Copilot license.
  • Admin permissions for organizational deployment.

Integration Steps

  1. Enable Copilot in Admin Center
    • Go to Microsoft 365 Admin Center → Apps → Enable Copilot.
  2. Assign Licenses
    • Allocate Copilot licenses to users.
  3. Configure Security & Compliance
    • Ensure Conditional Access, MFA, and data governance policies are in place.
  4. Deploy Across Apps
    • Activate Copilot in Word, Excel, PowerPoint, Outlook, and Teams.

Where Copilot is Natively Embedded

  • Word: Draft and edit documents.
  • Excel: Analyze data and automate calculations.
  • PowerPoint: Design presentations with AI.
  • Outlook: Manage emails and calendars.
  • Teams: Summarize meetings and manage tasks.
  • OneNote: Organize notes and generate summaries.
  • It also integrates with SharePoint, Planner, Project, and Power Platform for extended automation and data insights.

Why Microsoft Copilot is a Game-Changer

By combining AI intelligence with enterprise data, Copilot transforms how businesses operate:

  • Reduces manual effort.
  • Improves decision-making.
  • Enhances collaboration.
  • Delivers measurable productivity gains across all departments.

Conclusion

In summary, Microsoft Copilot is not just a productivity tool—it’s a strategic AI partner that empowers professionals across industries. From automating mundane tasks to enhancing creativity and collaboration, Copilot is reshaping the future of work.

Why It’s Time to Move from SharePoint On-Premises to SharePoint Online
https://blogs.perficient.com/2025/09/09/why-its-time-to-move-from-sharepoint-on-premises-to-sharepoint-online/ | Tue, 09 Sep 2025

In today’s fast-paced digital workplace, agility, scalability, and collaboration aren’t just nice to have—they’re business-critical. If your organization is still on Microsoft SharePoint On-Premises, now is the time to make the move to SharePoint Online. Here’s why this isn’t just a technology upgrade—it’s a strategic leap forward.

1. Work Anywhere, Without Barriers

SharePoint Online empowers your workforce with secure access to content from virtually anywhere. Whether your team is remote, hybrid, or on the go, they can collaborate in real time without being tethered to a corporate network or VPN.

2. Always Up to Date

Forget about manual patching and version upgrades. SharePoint Online is part of Microsoft 365, which means you automatically receive the latest features, security updates, and performance improvements—without the overhead of managing infrastructure.

3. Reduce Costs and Complexity

Maintaining on-premises servers is expensive and resource-intensive. By moving to SharePoint Online, you eliminate hardware costs, reduce IT overhead, and streamline operations. Plus, Microsoft handles the backend, so your team can focus on innovation instead of maintenance.

4. Enterprise-Grade Security and Compliance

Microsoft invests heavily in security, offering built-in compliance tools, data loss prevention, and advanced threat protection. SharePoint Online is designed to meet global standards and industry regulations, giving you peace of mind that your data is safe.

5. Seamless Integration with Microsoft 365

SharePoint Online integrates effortlessly with Microsoft Teams, OneDrive, Power Automate, and Power BI—enabling smarter workflows, better insights, and more connected experiences across your organization.

6. Scalability for the Future

Whether you’re a small business or a global enterprise, SharePoint Online scales with your needs. You can easily add users, expand storage, and adapt to changing business demands without worrying about infrastructure limitations.

Why Perficient for Your SharePoint Online Migration 

Migrating to SharePoint Online is more than a move to the cloud—it’s a chance to transform how your business works. At Perficient, we help you turn common migration challenges into measurable wins:
  • 35% boost in collaboration efficiency
  • Up to 60% cost savings per user
  • 73% reduction in data breach risk
  • 100+ IT hours saved each month
Our Microsoft 365 Modernization solutions don’t just migrate content—they build a secure, AI-ready foundation. From app modernization and AI-powered search to Microsoft Copilot integration, Perficient positions your organization for the future.
Perficient Quoted in Forrester Report on Intelligent Healthcare Organizations
https://blogs.perficient.com/2025/08/29/perficient-quoted-in-forrester-report-on-intelligent-healthcare-organizations/ | Fri, 29 Aug 2025

Empathy, Resilience, Innovation, and Speed: The Blueprint for Intelligent Healthcare Transformation

Forrester’s recent report, Becoming An Intelligent Healthcare Organization Is An Attainable Goal, Not A Lost Cause, confirms what healthcare executives already know: transformation is no longer optional.

Perficient is proud to be quoted in this research, which outlines a pragmatic framework for becoming an intelligent healthcare organization (IHO)—one that scales innovation, strengthens clinical and operational performance, and delivers measurable impact across the enterprise and the populations it serves.

Why Intelligent Healthcare Is No Longer Optional

Healthcare leaders are under pressure to deliver better outcomes, reduce costs, and modernize operations, all while navigating fragmented systems and siloed departments. The journey to transformation requires more than technology; it demands strategic clarity, operational alignment, and a commitment to continuous improvement.

Forrester reports, “Among business and technology professionals at large US healthcare firms, only 63% agree that their IT organization can readily reallocate people and technologies to serve the newest business priority; 65% say they have enterprise architecture that can quickly and efficiently support major changes in business strategy and execution.”

Despite widespread investment in digital tools, many healthcare organizations struggle to translate those investments into enterprise-wide impact. Misaligned priorities, inconsistent progress across departments, and legacy systems often create bottlenecks that stall innovation and dilute momentum.

Breaking Through Transformation Barriers

These challenges aren’t just technical or organizational. They’re strategic. Enterprise leaders can no longer sit on the sidelines and play the “wait and see” game. They must shift from reactive IT management to proactive digital orchestration, where technology, talent, and transformation are aligned to business outcomes.

Business transformation is not a fleeting trend. It’s an essential strategy for healthcare organizations that want to remain competitive as the marketplace evolves.

Forrester’s report identifies four hallmarks of intelligent healthcare organizations, emphasizing that transformation is not a destination but a continuous practice.

Four Hallmarks of An Intelligent Healthcare Organization (IHO)

To overcome transformation barriers, healthcare organizations must align consumer expectations, digital infrastructure, clinical workflows, and data governance with strategic business goals.

1. Empathy At Scale: Human-Centered, Trust-Enhancing Experiences

A defining trait of intelligent healthcare organizations is a commitment to human-centered experiences.

  • Driven By: Continuous understanding of consumer needs
  • Supported By: Strategic technology investments that enable timely, personalized interventions and touchpoints

As Forrester notes, “The most intelligent organizations excel at empathetic, swift, and resilient innovation to continuously deliver new value for customers and stay ahead of the competition.”

Empathy is a performance driver. Organizations that prioritize human-centered care see higher engagement, better adherence, and stronger loyalty.

Our experts help clients reimagine care journeys using journey sciences, predictive analytics, integrated CRM and CDP platforms, and cloud-native architectures that support scalable personalization. But personalization without protection is a risk. That’s why empathy must extend beyond experience design to include ethical, secure, and responsible AI adoption.

Healthcare organizations face unique constraints, including HIPAA, PHI, and PII regulations that limit the utility of plug-and-play AI solutions. To meet these challenges, we apply our PACE framework—Policies, Advocacy, Controls, and Enablement—to ensure AI is not only innovative but also rooted in trust.

  • Policies establish clear boundaries for acceptable AI usage, tailored to healthcare’s regulatory landscape.
  • Advocacy builds cross-functional understanding and adoption through education and collaboration.
  • Controls implement oversight, auditing, and risk mitigation to protect patient data and ensure model integrity.
  • Enablement equips teams with the tools and environments needed to innovate confidently and securely.

This approach ensures AI is deployed with purpose, aligned to business goals, and embedded with safeguards that protect consumers and care teams alike. It also supports the creation of reusable architectures that blend scalable services with real-time monitoring, which is critical for delivering fast, reliable, and compliant AI applications.

Responsible AI isn’t a checkbox. It’s a continuous practice. And in healthcare, it’s the difference between innovation that inspires trust and innovation that invites scrutiny.

2. Designing for Disruption: Resilience as a Competitive Advantage

Patient-led experiences must be grounded in a clear-eyed understanding that market disruption isn’t simply looming. It’s already here. To thrive, healthcare leaders must architect systems that flex under pressure and evolve with purpose. Resilience is more than operational; it’s also behavioral, cultural, and strategic.

Perficient’s Access to Care research reveals that friction in the care journey directly impacts health outcomes, loyalty, and revenue:

  • More than 50% of consumers who experienced scheduling friction took their care elsewhere, resulting in lost revenue, trust, and care continuity
  • 33% of respondents acted as caregivers, yet this persona is often overlooked in digital strategies
  • Nearly 1 in 4 respondents who experienced difficulty scheduling an appointment stated that the friction led to delayed care, and they believed their health declined as a result
  • More than 45% of consumers aged 18–64 have used digital-first care instead of their regular provider, and 92% of them believe the quality is equal or better

This sentiment should be a wakeup call for leaders. It clearly signals that consumers expect healthcare to meet both foundational needs (cost, access) and lifestyle standards (convenience, personalization, digital ease). When systems fail to deliver, patients disengage. And when caregivers—who often manage care for entire households—encounter barriers, the ripple effect is exponential.

To build resilience that drives retention and revenue, leaders must design systems that anticipate needs and remove barriers before they impact care. Resilient operations must therefore be designed to:

  • Reduce friction across the care journey, especially in scheduling and follow-up
  • Support caregivers with multi-profile tools, shared access, and streamlined coordination
  • Enable digital-first engagement that mirrors the ease of consumer platforms like Amazon and Uber

Consumers are blending survival needs with lifestyle demands. Intelligent healthcare organizations address both simultaneously.

Resilience also means preparing for the unexpected. Whether it’s regulatory shifts, staffing shortages, or competitive disruption, IHOs must be able to pivot quickly. That requires leaders to reimagine patient (and member) access as a strategic lever and prioritize digital transformation that eases the path to care.

3. Unified Innovation: Aligning Strategy, Tech, and Teams

Innovation without enterprise alignment is just noise—activity without impact. When digital initiatives are disconnected from business strategy, consumer needs, or operational realities, they create confusion, dilute resources, and fail to deliver meaningful outcomes. Fragmented innovation may look impressive in isolation, but without coordination, it lacks the momentum to drive true transformation.

To deliver real results, healthcare leaders must connect strategy, execution, and change readiness. In Forrester’s report, a quote from an interview with Priyal Patel emphasizes the importance of a shared strategic vision:

“Today’s decisions should be guided by long-term thinking, envisioning your organization’s business needs five to 10 years into the future.” — Priyal Patel, Director, Perficient


Our approach begins with strategic clarity. Using our Envision Framework, we help healthcare organizations rapidly identify opportunities, define a consumer-centric vision, and develop a prioritized roadmap that aligns with business goals and stakeholder expectations. This framework blends real-world insights with pragmatic planning, ensuring that innovation is both visionary and executable.

We also recognize that transformation is not just technical—it’s human. Organizational change management (OCM) ensures that teams are ready, willing, and able to adopt new ways of working. Through structured engagement, training, and sustainment, we help clients navigate the behavioral shifts required to scale innovation across departments and disciplines.

This strategic rigor is especially critical in healthcare, where innovation must be resilient, compliant, and deeply empathetic. As highlighted in our 2025 Digital Healthcare Trends report, successful organizations are those that align innovation with measurable business outcomes, ethical AI adoption, and consumer trust.

Perficient’s strategy and transformation services connect vision to execution, ensuring that innovation is sustainable. We partner with healthcare leaders to identify friction points and quick wins, build a culture of continuous improvement, and empower change agents across the enterprise.

You May Enjoy: Driving Company Growth With a Product-Driven Mindset

4. Speed With Purpose and Strategic Precision

The ability to pivot, scale, and deliver quickly is becoming a defining trait of tomorrow’s healthcare leaders. The way forward requires a comprehensive digital strategy that builds the capabilities, agility, and alignment to stay ahead of evolving demands and deliver meaningful impact.

IHOs act quickly without sacrificing quality. But speed alone isn’t enough. Perficient’s strategic position emphasizes speed with purpose—where every acceleration is grounded in business value, ethical AI adoption, and measurable health outcomes.

Our experts help healthcare organizations move fast.

This approach supports the Quintuple Aim: better outcomes, lower costs, improved experiences, clinician well-being, and health equity. It also ensures that innovation is not just fast. It’s focused, ethical, and sustainable.

Speed with purpose means:

  • Rapid prototyping that validates ideas before scaling
  • Real-time data visibility to inform decisions and interventions
  • Cross-functional collaboration that breaks down silos and accelerates execution
  • Outcome-driven KPIs that measure impact, not just activity

Healthcare leaders don’t need more tools. They need a strategy that connects business imperatives, consumer demands, and an empowered workforce to drive transformation forward. Perficient equips organizations to move with confidence, clarity, and control.

Collaborating to Build Intelligent Healthcare Organizations

We believe our inclusion in Forrester’s report underscores our role as a trusted advisor in intelligent healthcare transformation. From insight to impact, our healthcare expertise equips leaders to modernize, personalize, and scale care. We drive resilient, AI-powered transformation to shape the experiences and engagement of healthcare consumers, streamline operations, and improve the cost, quality, and equity of care.

We have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S., and Modern Healthcare consistently ranks us as one of the largest healthcare consulting firms.

Our strategic partnerships with industry-leading technology innovators—including AWS, Microsoft, Salesforce, Adobe, and more—accelerate healthcare organizations’ ability to modernize infrastructure, integrate data, and deliver intelligent experiences. Together, we shatter boundaries so you have the AI-native solutions you need to boldly advance business.

Ready to advance your journey as an intelligent healthcare organization?

We’re here to help you move beyond disconnected systems and toward a unified, data-driven future—one that delivers better experiences for patients, caregivers, and communities. Let’s connect and explore how you can lead with empathy, intelligence, and impact.

2025 Modern Healthcare Survey Ranks Perficient Among the 10 Largest Management Consulting Firms
https://blogs.perficient.com/2025/08/28/modern-healthcare-ranks-perficient-among-the-10-largest-management-consulting-firms/ | Thu, 28 Aug 2025

Modern Healthcare has once again recognized Perficient among the largest healthcare management consulting firms in the U.S., ranking us ninth in its 2025 survey. This honor reflects not only our growth but also our commitment to helping healthcare leaders navigate complexity with clarity, precision, and purpose.

What’s Driving Demand: Innovation with Intent

As provider, payer, and MedTech organizations face mounting pressure to modernize, our work is increasingly focused on connecting digital investments to measurable business and health outcomes. The challenges are real—and so are the opportunities.

Healthcare leaders are engaging our experts to tackle shifts from digital experimentation to enterprise alignment in business-critical areas, including:

  • Digital health transformation that eases access to care.
  • AI and data analytics that accelerate insight, guide clinical decisions, and personalize consumer experiences.
  • Workforce optimization that supports clinicians, streamlines operations, and restores time to focus on patients, members, brokers, and care teams.

These investments represent strategic maturity that reshapes how care is delivered, experienced, and sustained.

Operational Challenges: Strategy Meets Reality

Serving healthcare clients means working inside a system that resists simplicity. Our industry, technical, and change management experts help leaders address three persistent tensions:

  1. Aligning digital strategy with enterprise goals. Innovation often lacks a shared compass. We translate divergent priorities—clinical, operational, financial—into unified programs that drive outcomes.
  2. Controlling costs while preserving agility. Budgets are tight, but the need for speed and competitive relevancy remains. Our approach favors scalable roadmaps and solutions that deliver early wins and can flex as the health care marketplace and consumer expectations evolve.
  3. Preparing the enterprise for AI. Many of our clients have discovered that their AI readiness lags behind ambition. We help build the data foundations, governance frameworks, and workforce capabilities needed to operationalize intelligent systems.

Related Insights: Explore the Digital Trends in Healthcare

Consumer Expectations: Access Is the New Loyalty

Our Access to Care research, based on insights from more than 1,000 U.S. healthcare consumers, reveals a fundamental shift: if your healthcare organization isn’t delivering a seamless, personalized, and convenient experience, consumers will go elsewhere. And they won’t always come back.

Many healthcare leaders still view competition as other hospitals or clinics in their region. But today’s consumer has more options—and they’re exercising them. From digital-first health experiences to hyper-local disruptors and retail-style health providers focused on accessibility and immediacy, the competitive field is rapidly expanding.

  • Digital convenience is now a baseline. More than half of consumers who encountered friction while scheduling care went elsewhere.
  • Caregivers are underserved. One in three respondents manage care for a loved one, yet most digital strategies treat the patient as a single user.
  • Digital-first care is mainstream. 45% of respondents aged 18–64 have already used direct-to-consumer digital care, and 92% of those adopters believe the quality is equal to or better than the care offered by their regular healthcare system.

These behaviors demand a rethinking of access, engagement, and loyalty. We help clients build experiences that are intuitive, inclusive, and aligned with how people actually live and seek care.

Looking Ahead: Complexity Accelerates

With intensified focus on modernization, data strategy, and responsible AI, healthcare leaders are asking harder questions. We’re helping them find and activate answers that deliver value now and build resilience for what’s next.

Our technology partnerships with Adobe, AWS, Microsoft, Salesforce, and other platform leaders allow us to move quickly, integrate deeply, and co-innovate with confidence. We bring cross-industry expertise from financial services, retail, and manufacturing—sectors where personalization and operational excellence are already table stakes. That perspective helps healthcare clients leapfrog legacy thinking and adopt proven strategies. And our fluency in HIPAA, HITRUST, and healthcare data governance ensures that our digital solutions are compliant, resilient, and future-ready.

Optimized, Agile Strategy and Outcomes for Health Insurers, Providers, and MedTech

Discover why we’ve been trusted by the 10 largest U.S. health systems, 10 largest U.S. health insurers, and 14 of the 20 largest medical device firms. We are recognized in analyst reports and regularly awarded for our excellence in solution innovation, industry expertise, and being a great place to work.

Contact us to explore how we can help you forge a resilient, impactful future that delivers better experiences for patients, caregivers, and communities.

Implementing Hybrid Search in Azure Cosmos DB: Combining Vectors and Keywords
https://blogs.perficient.com/2025/08/26/implementing-hybrid-search-in-azure-cosmos-db-combining-vectors-and-keywords/ | Tue, 26 Aug 2025

Azure Cosmos DB for NoSQL now supports hybrid search, a powerful feature that combines full-text search and vector search to deliver highly relevant and accurate results. This blog post provides a comprehensive guide for developers and architects to understand, implement, and leverage hybrid search capabilities in their applications.

  • What is hybrid search?
  • How hybrid search works in Cosmos DB
  • Vector embedding
  • Implementing hybrid search
    • Enable hybrid search.
    • Container set-up and indexing
    • Data Ingestion
    • Search Queries
  • Code Example

What is Hybrid Search?

Hybrid search is an advanced search technology that combines keyword search (also known as full-text search) and vector search to deliver more accurate and relevant search results. It leverages the strengths of both approaches to overcome the limitations of each when used in isolation.


Key Components

  • Full-Text Search: This traditional method matches the words you type in, using techniques like stemming, lemmatization, and fuzzy matching to find relevant documents. It excels at finding exact matches and is efficient for structured queries with specific terms. It employs the BM25 algorithm to evaluate and rank the relevance of records based on keyword matching and text relevance.
  • Vector Search: This method uses machine learning models to represent queries and documents as numerical embeddings in a multidimensional space, allowing the system to find items with similar characteristics and relationships, even if the exact keywords don’t match. Vector search is particularly useful for finding information that’s conceptually similar to the search query.
  • Reciprocal Rank Fusion (RRF): This algorithm merges the results from both keyword and vector search, creating a single, unified ranked list of documents. RRF ensures that relevant results from both search types are fairly represented.
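
For intuition, the standard RRF formulation scores each document by summing reciprocal ranks across the individual result lists (the exact constants Cosmos DB uses internally are not spelled out in this post):

\mathrm{RRF}(d) = \sum_{i=1}^{n} \frac{1}{k + \mathrm{rank}_i(d)}

Here rank_i(d) is the document's position in result list i (keyword or vector) and k is a smoothing constant (60 in the original RRF paper). A document near the top of either list receives a high fused score even if it ranks lower in the other.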

Hybrid search is suitable for various use cases, such as:

  • Retrieval Augmented Generation (RAG) with LLMs
  • Knowledge management systems: Enabling employees to efficiently find pertinent information within an enterprise knowledge base.
  • Content Management: Efficiently search through articles, blogs, and documents.
  • AI-powered chatbots
  • E-commerce platforms: Helping customers find products based on descriptions, reviews, and other text attributes.
  • Streaming services: Helping users find content based on specific titles or themes.

Let’s understand vector search and full-text search before diving into hybrid search implementation.

Understanding of Vector Search

Vector search in Azure Cosmos DB for NoSQL is a powerful feature that allows you to find similar items based on their semantic meaning, rather than relying on exact matches of keywords or specific values. It is a fundamental component for building AI applications, semantic search, recommendation engines, and more.

Here’s how vector search works in Cosmos DB:

Vector embeddings

Vector embeddings are numerical representations of data in a high-dimensional space, capturing their semantic meaning. In this space, semantically similar items are represented by vectors that are closer to each other. The dimensionality of these vectors can be quite large. How to generate vector embeddings is covered in a separate section later in this post.
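
To make “closer to each other” concrete, here is a minimal, purely illustrative sketch (not part of Cosmos DB) of cosine similarity, the same notion the cosine distance function is built on:

using System;

public static class VectorMath
{
    // Cosine similarity: values near 1.0 mean the vectors point in the same direction
    // (semantically similar content); values near 0 mean unrelated content.
    public static double CosineSimilarity(float[] a, float[] b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Vectors must have the same dimensionality.");

        double dot = 0, magA = 0, magB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            magA += a[i] * a[i];
            magB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
    }
}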

Storing and indexing vectors

Azure Cosmos DB allows you to store vector embeddings directly within your documents. You define a vector policy for your container to specify the vector data’s path, data type, and dimensions. Cosmos DB supports various vector index types to optimize search performance, accuracy, and cost:

  • Flat: Provides exact k-nearest neighbor (KNN) search.
  • Quantized Flat: Offers exact search on compressed vectors.
  • DiskANN: Enables highly scalable and accurate Approximate Nearest Neighbor (ANN) search.

Querying

  • Azure Cosmos DB provides the VectorDistance() system function, which can be used within SQL queries to perform vector similarity searches.
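
For example, a pure vector similarity query with the .NET SDK might look like the sketch below (the container and the contentvector/content field names are assumptions carried over from the policies later in this post, and queryVector is the embedding of the user's query):

// Assumes `container` is a Microsoft.Azure.Cosmos.Container and `queryVector` is a float[]
// produced by the same embedding model used for the stored documents.
QueryDefinition vectorQuery = new QueryDefinition(
    "SELECT TOP 5 c.content, VectorDistance(c.contentvector, @queryVector) AS similarityScore " +
    "FROM c " +
    "ORDER BY VectorDistance(c.contentvector, @queryVector)")
    .WithParameter("@queryVector", queryVector);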

Understanding Full-Text Search

Azure Cosmos DB for NoSQL now offers full-text search functionality (feature is in preview at this time for certain Azure regions), allowing you to perform powerful and efficient text-based searches within your documents directly in the database. This significantly enhances your application’s search capabilities without the need for an external search service for basic full-text needs.

Indexing

To enable full-text search, you need to define a full-text policy specifying the paths for searching and add a full-text index to your container’s indexing policy. Without the index, full-text searches would perform a full scan. Indexing involves tokenization, stemming, and stop word removal, creating a data structure like an inverted index for fast retrieval. Multi-language support (beyond English) and stop word removal are in early preview.

Querying

Cosmos DB provides system functions for full-text search in the NoSQL query language. These include FullTextContains, FullTextContainsAll, and FullTextContainsAny for filtering in the WHERE clause. The FullTextScore function uses the BM25 algorithm to rank documents by their relevance.
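
As a minimal sketch (field names assumed from the examples later in this post, and the exact preview syntax may evolve), a keyword filter combined with BM25 ranking could look like this:

// Return documents whose `content` field contains the keyword, ranked by full-text relevance.
QueryDefinition fullTextQuery = new QueryDefinition(
    "SELECT TOP 10 c.id, c.content " +
    "FROM c " +
    "WHERE FullTextContains(c.content, @keyword) " +
    "ORDER BY RANK FullTextScore(c.content, @keyword)")
    .WithParameter("@keyword", "lamp");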

How Hybrid Search works in Cosmos DB

  • Data Storage: Your documents in Cosmos DB include both text fields (for full-text search) and vector embedding fields (for vector search).
  • Indexing:
    • Full-Text Index: A full-text policy and index are configured on your text fields, enabling keyword-based searches.
    • Vector Index: A vector policy and index are configured on your vector embedding fields, allowing for efficient similarity searches based on semantic meaning.
  • Querying: A single query request is used to initiate hybrid search, including both full-text and vector search parameters.
  • Parallel Execution: The vector and full-text search components run in parallel.
    • VectorDistance() measures vector similarity.
    • FullTextContains() or similar functions find keyword matches, and FullTextScore() ranks results using BM25.
  • Result Fusion: The RRF function merges the rankings from both searches (vector & full text), creating a combined, ordered list based on overall relevance.
  • Enhanced Results: The final results are highly relevant, leveraging both semantic understanding and keyword precision.

Vector Embedding

Vector embedding refers to the process of transforming data (like text, images) into a series of numbers, or a vector, that captures its semantic meaning. In this n-dimensional space, similar data points are mapped closer together, allowing computers to understand and analyze relationships that would be difficult with raw data.

To support hybrid search in Azure Cosmos DB, enhance the data by generating vector embeddings from searchable text fields. Store these embeddings in dedicated vector fields alongside the original content to enable both semantic and keyword-based queries.

Steps to generate embeddings with Azure OpenAI models

Provision Azure OpenAI Resource

  • Sign in to the Azure portal: Go to https://portal.azure.com and log in.
  • Create a resource: Select “Create a resource” from the Azure dashboard and search for “Azure OpenAI”.


Deploy Embedding Model

  • Navigate to your newly created Azure OpenAI resource and click on “Explore Azure AI Foundry portal” in the overview page.
  • Go to the model catalog and search for embedding models.
  • Select embedding model:
    • From the embedding model list, choose an embedding model like text-embedding-ada-002, text-embedding-3-large, or text-embedding-3-small.

Accessing and utilizing embeddings

  • Endpoint and API Key: After deployment, navigate to your Azure OpenAI resource and find the “Keys and Endpoint” under “Resource Management”. Copy these values as they are needed for authenticating API calls.
  • Integration with applications: Use the Azure OpenAI SDK or REST APIs in your applications, referencing the deployment name and the retrieved endpoint and API key to generate embeddings.

Code example for .NET Core

Note: Ensure you have the .NET 8 SDK installed.

using Azure;
using Azure.AI.OpenAI;
using System;
using System.Linq;
using System.Threading.Tasks;

namespace AzureOpenAIEmbeddings
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // Set your Azure OpenAI endpoint and API key securely
            string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT") ?? "https://YOUR_RESOURCE_NAME.openai.azure.com/"; // Replace with OpenAI endpoint
            string apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY") ?? "YOUR_API_KEY"; // Replace with OpenAI API key

            // Create an OpenAIClient (Azure.AI.OpenAI SDK)
            var credentials = new AzureKeyCredential(apiKey);
            var openaiClient = new OpenAIClient(new Uri(endpoint), credentials);

            // Create embeddings options
            EmbeddingsOptions embeddingOptions = new EmbeddingsOptions(
                "text-embedding-ada-002", // Replace with your deployment name
                new[] { "Your text for generating embedding" }); // Text that requires an embedding

            // Generate embeddings
            var returnValue = await openaiClient.GetEmbeddingsAsync(embeddingOptions);

            // Store the generated embedding in Cosmos DB along with your text content
            float[] embedding = returnValue.Value.Data[0].Embedding.ToArray();
        }
    }
}

Implementing Hybrid search

Implementing hybrid search in Azure Cosmos DB for NoSQL involves several key steps to combine the power of vector search and full-text search. This diagram illustrates the architecture of Hybrid Search in Azure Cosmos DB, leveraging Azure OpenAI for generating embedding, combining both vector-based and keyword-based search:

[Diagram: Hybrid Search architecture in Azure Cosmos DB, using Azure OpenAI to generate embeddings and combining vector-based and keyword-based search]

Step 1: Enable hybrid search in the Cosmos DB account

To implement hybrid search in Cosmos DB, begin by enabling both vector search and full-text search on the Azure Cosmos DB account.

  • Navigate to Your Azure Cosmos DB for NoSQL Resource Page
  • Access the Features Pane:

    • Select the “Features” pane under the “Settings” menu item.
  • Enable Vector Search:

    • Locate and select the “Vector Search for NoSQL.” Read the description to understand the feature.
    • Click “Enable” to activate vector indexing and search capabilities.
  • Enable Full-Text Search:

    • Locate and select the “Preview Feature for Full-Text Search” (Full-Text Search for NoSQL API (preview)). Read the description to confirm your intention to enable it.
    • Click “Enable” to activate full-text indexing and search capabilities.

    Notes:

      • Once these features are enabled, they cannot be disabled.
      • Full Text Search (preview) may not be available in all regions at this time.

Step 2: Container Setup and Indexing

  • Create a database and container or use an existing one.
    • Note: Adding a vector index policy to an existing container may not be supported. If so, you will need to create a new container.
  • Define the Vector embedding policy on the container
    • You need to specify a vector embedding policy for the container during its creation. This policy defines how vectors are treated at the container level.
    • Vector Policy
      {
         "vectorEmbeddings": [
             {
                 "path": "/contentvector",
                 "dataType": "float32",
                 "distanceFunction": "cosine",
                 "dimensions": 1536
             }
         ]
      }
      
      • Path: Specify the JSON path to your vector embedding field (e.g., /contentvector).
      • Data type: Define the data type of the vector elements (e.g., float32).
      • Dimensions: Specify the dimensionality of your vectors (e.g., 1536 for text-embedding-ada-002).
      • Distance Function: Choose the distance metric for similarity calculation (e.g., cosine, dotProduct, or euclidean)
  • Add Vector Index: Add a vector index to your container’s indexing policy. This enables efficient vector similarity searches.
    • Vector Index
      • Path: Include the same vector path defined in your vector policy.
      • Type: Select the appropriate index type (flat, quantizedFlat, or diskANN).
  • Define Full-Text Policy: Define a container-level full-text policy. This policy specifies which paths in your documents contain the text content that you want to search.
    • Full Text Policy
      • Path: Specify the JSON path to your text search field
      • Language: content language
  • Add Full-Text Index: Add a full-text index to the indexing policy, making full-text searches efficient

Hybrid search index (both Full-Text and Vector index)

{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/*"
    }
  ],
  "excludedPaths": [
    {
      "path": "/_etag*/?"
    },
    {
      "path": "/contentvector/*"
    }
  ],
  "fullTextIndexes": [
    {
      "path": "/content"
    },
    {
      "path": "/description"
    }
  ],
  "vectorIndexes": [
    {
      "path": "/contentvector",
      "type": "diskANN"
    }
  ]
}

Exclude the Vector Path:

  • To optimize performance during data ingestion, you must add the vector path to the “excludedPaths” section of your indexing policy. This prevents the vector path from being indexed by the default range indexes, which can increase RU charges and latency.

Step 3: Data Ingestion

  • Generate Vector Embeddings: For every document, convert the text content (and potentially other data like images) into numerical vector embeddings using an embedding model (e.g., from Azure OpenAI Service). This topic is covered above.
  • Populate Documents: Insert documents into your container. Each document should have:
    • The text content in the fields specified in your full-text policy (e.g., content, description).
    • The corresponding vector embedding in the field specified in your vector policy (e.g., /contentvector).
    • Example document: see the ingestion sketch below.
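
As a minimal ingestion sketch (the property names, the sample text, and the /id partition key are assumptions carried over from the policies above), each item stores the raw text alongside its embedding:

using Azure.AI.OpenAI;
using Microsoft.Azure.Cosmos;
using System;
using System.Threading.Tasks;

public static class IngestionSample
{
    // `container` and `openAiClient` are created as shown in the other examples in this post.
    public static async Task IngestAsync(Container container, OpenAIClient openAiClient)
    {
        string text = "Contoso desk lamp with adjustable arm and warm LED light.";

        var embeddingResponse = await openAiClient.GetEmbeddingsAsync(
            new EmbeddingsOptions("text-embedding-ada-002", new[] { text }));

        var item = new
        {
            id = Guid.NewGuid().ToString(),
            content = text,                        // indexed by the full-text policy (/content)
            description = "Home office lighting",  // indexed by the full-text policy (/description)
            contentvector = embeddingResponse.Value.Data[0].Embedding.ToArray() // matches the /contentvector vector policy
        };

        // Upsert the item; the partition key path is assumed to be /id here.
        await container.UpsertItemAsync(item, new PartitionKey(item.id));
    }
}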

Step 4: Search Queries

Hybrid search queries in Azure Cosmos DB for NoSQL combine the power of vector similarity search and full-text search within a single query using the Reciprocal Rank Fusion (RRF) function. This allows you to find documents that are both semantically similar and contain specific keywords.

SQL:  SELECT TOP 10 * FROM c ORDER BY RANK RRF(VectorDistance(c.contentvector, @queryVector), FullTextScore(c.content, @searchKeywords))

VectorDistance(c.contentvector, @queryVector):

  • VectorDistance(): This is a system function that calculates the similarity score between two vectors.
  • @queryVector: This is a parameter representing the vector embedding of your search query. You would generate this vector embedding using the same embedding model used to create document vector embeddings.
  • Return Value: Returns a similarity score based on the distance function defined in your vector policy (e.g., cosine, dot product, Euclidean).

FullTextScore(c.content, @searchKeywords):

  • FullTextScore(): This is a system function that calculates a BM25 score, which evaluates the relevance of a document to a given set of search terms. This function relies on a full-text index on the specified path.
  • @searchKeywords: This is a parameter representing the keywords or phrases you want to search for. You can provide multiple keywords separated by commas.
  • Return Value: Returns a BM25 score, indicating the relevance of the document to the search terms. Higher scores mean greater relevance.
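
For intuition, the standard BM25 formulation (the tuning values Cosmos DB applies internally are not documented in this post) scores a document D against query terms q_1, ..., q_n as:

\mathrm{BM25}(D, Q) = \sum_{i=1}^{n} \mathrm{IDF}(q_i) \cdot \frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1 \left(1 - b + b \cdot \frac{|D|}{\mathrm{avgdl}}\right)}

where f(q_i, D) is how often the term appears in the document, |D| is the document length, avgdl is the average document length, and k_1 and b are tuning constants (commonly around 1.2 and 0.75). Rare terms that appear frequently in a relatively short document score highest.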

ORDER BY RANK RRF(…):

  • RRF(…) (Reciprocal Rank Fusion): This is a system function that combines the ranked results from multiple scoring functions (like VectorDistance and FullTextScore) into a single, unified ranking. RRF ensures that documents that rank highly in either the vector search or the full-text search are prioritized in the final results.

Weighted hybrid search query:

SELECT TOP 10 * FROM c ORDER BY RANK RRF(VectorDistance(c.contentvector, @queryVector), FullTextScore(c.content, @searchKeywords), [2, 1])

  • Optional Weights: You can optionally provide an array of weights as the last argument to RRF to control the relative importance of each component score. For example, to weight the vector search twice as important as the full-text search, you could use RRF(VectorDistance(c.contentvector, @queryVector), FullTextScore(c.content, @searchKeywords), [2,1]).

Multi-field hybrid search query:

SELECT TOP 10 * FROM c ORDER BY RANK RRF(VectorDistance(c.contentvector, @queryVector), VectorDistance(c.imagevector, @queryVector), FullTextScore(c.content, @searchKeywords), FullTextScore(c.description, @searchKeywords), [3, 2, 1, 1])

Code Example (.NET Core C#)

  • Add Cosmos DB and OpenAI SDKs
  • Get Cosmos DB connection string and create Cosmos DB client
  • Get the OpenAI endpoint and key to create an OpenAI client
  • Generate embedding for user query
  • A hybrid search query to do a vector and keyword search


using Azure;
using Azure.AI.OpenAI;
using Microsoft.Azure.Cosmos;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace CosmosHybridSearch
{
    public class Product
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public float[] ContentVector { get; set; } // Your vector embedding property
    }

    public class Program
    {
        private static readonly string EndpointUri = "YOUR_COSMOS_DB_ENDPOINT";
        private static readonly string PrimaryKey = "YOUR_COSMOS_DB_PRIMARY_KEY";
        private static readonly string DatabaseId = "YourDatabaseId";
        private static readonly string ContainerId = "YourContainerId";

        // Set your Azure OpenAI endpoint and API key securely.
        private static readonly string OpenAIEndpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT") ?? "https://YOUR_RESOURCE_NAME.openai.azure.com/"; // Replace with your endpoint
        private static readonly string OpenAIApiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY") ?? "YOUR_API_KEY"; // Replace with your API key

        public static async Task Main(string[] args)
        {
            using CosmosClient client = new(EndpointUri, PrimaryKey);
            Database database = await client.CreateDatabaseIfNotExistsAsync(DatabaseId);
            Container container = database.GetContainer(ContainerId);

            // Create an Azure OpenAI client to generate the query embedding
            var credentials = new AzureKeyCredential(OpenAIApiKey);
            OpenAIClient openAiClient = new(new Uri(OpenAIEndpoint), credentials);

            // Example: your actual search term; its embedding becomes the query vector
            string searchTerm = "lamp";

            EmbeddingsOptions embeddingOptions = new EmbeddingsOptions(
                "text-embedding-ada-002", // Replace with your deployment name
                new[] { searchTerm });

            var queryVectorResponse = await openAiClient.GetEmbeddingsAsync(embeddingOptions);
            float[] queryVector = queryVectorResponse.Value.Data[0].Embedding.ToArray();

            // Define the hybrid search query (RRF over vector similarity and full-text relevance)
            QueryDefinition queryDefinition = new QueryDefinition(
                "SELECT TOP 10 * " +
                "FROM c " +
                "ORDER BY RANK RRF(VectorDistance(c.contentvector, @queryVector), FullTextScore(c.content, @searchTerm))")
                .WithParameter("@queryVector", queryVector)
                .WithParameter("@searchTerm", searchTerm);

            List<Product> products = new List<Product>();

            using FeedIterator<Product> feedIterator = container.GetItemQueryIterator<Product>(queryDefinition);

            while (feedIterator.HasMoreResults)
            {
                FeedResponse<Product> response = await feedIterator.ReadNextAsync();
                foreach (Product product in response)
                {
                    products.Add(product);
                }
            }

            // Process your search results
            foreach (Product product in products)
            {
                Console.WriteLine($"Product Id: {product.Id}, Name: {product.Name}");
            }
        }
    }
}

 

]]>
https://blogs.perficient.com/2025/08/26/implementing-hybrid-search-in-azure-cosmos-db-combining-vectors-and-keywords/feed/ 1 386358
Automating Azure Key Vault Secret and Certificate Expiry Monitoring with Azure Function App https://blogs.perficient.com/2025/08/26/azure-keyvault-monitoring-automation/ https://blogs.perficient.com/2025/08/26/azure-keyvault-monitoring-automation/#respond Tue, 26 Aug 2025 14:15:25 +0000 https://blogs.perficient.com/?p=386349

How to monitor hundreds of Key Vaults across multiple subscriptions for just $15-25/month

The Challenge: Key Vault Sprawl in Enterprise Azure

If you’re managing Azure at enterprise scale, you’ve likely encountered this scenario: Key Vaults scattered across dozens of subscriptions, hundreds of certificates and secrets with different expiry dates, and the constant fear of unexpected outages due to expired certificates. Manual monitoring simply doesn’t scale when you’re dealing with:

  • Multiple Azure subscriptions (often 10-50+ in large organizations)
  • Hundreds of Key Vaults across different teams and environments
  • Thousands of certificates with varying renewal cycles
  • Critical secrets that applications depend on
  • Different time zones and rotation schedules

The traditional approach of spreadsheets, manual checks, or basic Azure Monitor alerts breaks down quickly. You need something that scales automatically, costs practically nothing, and provides real-time visibility across your entire Azure estate.

The Solution: Event-Driven Monitoring Architecture

[Figure: Key Vault monitoring automation]

Single Function App, Unlimited Key Vaults

Instead of deploying monitoring resources per Key Vault (expensive and complex), we use a centralized architecture:

Management Group (100+ Key Vaults)
           ↓
   Single Function App
           ↓
     Action Group
           ↓
    Notifications

This approach provides:

  • Unlimited scalability: Monitor 1 or 1000+ Key Vaults with the same infrastructure
  • Cross-subscription coverage: Works across your entire Azure estate
  • Real-time alerts: Sub-5-minute notification delivery
  • Cost optimization: $15-25/month total (not per Key Vault!)

How It Works: The Technical Deep Dive

1. Event Grid System Topics (The Sensors)

Azure Key Vault automatically generates events when certificates and secrets are about to expire. We create Event Grid System Topics for each Key Vault to capture these events:

Event Types Monitored:
• Microsoft.KeyVault.CertificateNearExpiry
• Microsoft.KeyVault.CertificateExpired  
• Microsoft.KeyVault.SecretNearExpiry
• Microsoft.KeyVault.SecretExpired

The beauty? These events are generated automatically by Azure – no polling, no manual checking, just real-time notifications when things are about to expire.

2. Centralized Processing (The Brain)

A single Azure Function App processes ALL events from across your organization:

// Simplified event processing flow
eventGridEvent → parseEvent() → extractMetadata() → 
formatAlert() → sendToActionGroup()

Example Alert Generated:
{
  severity: "Sev1",
  alertTitle: "Certificate Expired in Key Vault",
  description: "Certificate 'prod-ssl-cert' has expired in Key Vault 'prod-keyvault'",
  keyVaultName: "prod-keyvault",
  objectType: "Certificate",
  expiryDate: "2024-01-15T00:00:00.000Z"
}

3. Smart Notification Routing (The Messenger)

Azure Action Groups handle notification distribution with support for:

  • Email notifications (unlimited recipients)
  • SMS alerts for critical expiries
  • Webhook integration with ITSM tools (ServiceNow, Jira, etc.)
  • Voice calls for emergency situations.

Implementation: Infrastructure as Code

The entire solution is deployed using Terraform, making it repeatable and version-controlled. Here’s the high-level infrastructure:

Resource Architecture

# Single monitoring resource group
resource "azurerm_resource_group" "monitoring" {
  name     = "rg-kv-monitoring-${var.timestamp}"
  location = var.primary_location
}

# Function App (handles ALL Key Vaults)
resource "azurerm_linux_function_app" "kv_processor" {
  name                = "func-kv-monitoring-${var.timestamp}"
  service_plan_id     = azurerm_service_plan.function_plan.id
  # ... configuration
}

# Event Grid System Topics (one per Key Vault)
resource "azurerm_eventgrid_system_topic" "key_vault" {
  for_each = { for kv in var.key_vaults : kv.name => kv }
  
  name                   = "evgt-${each.key}"
  source_arm_resource_id = "/subscriptions/${each.value.subscriptionId}/resourceGroups/${each.value.resourceGroup}/providers/Microsoft.KeyVault/vaults/${each.key}"
  topic_type            = "Microsoft.KeyVault.vaults"
}

# Event Subscriptions (route events to Function App)
resource "azurerm_eventgrid_event_subscription" "certificate_expiry" {
  for_each = { for kv in var.key_vaults : kv.name => kv }
  
  azure_function_endpoint {
    function_id = "${azurerm_linux_function_app.kv_processor.id}/functions/EventGridTrigger"
  }
  
  included_event_types = [
    "Microsoft.KeyVault.CertificateNearExpiry",
    "Microsoft.KeyVault.CertificateExpired"
  ]
}

CI/CD Pipeline Integration

The solution includes an Azure DevOps pipeline that:

  1. Discovers Key Vaults across your management group automatically
  2. Generates Terraform variables with all discovered Key Vaults
  3. Deploys infrastructure using infrastructure as code
  4. Validates deployment to ensure everything works
# Simplified pipeline flow
stages:
  - stage: DiscoverKeyVaults
    # Scan management group for all Key Vaults
    
  - stage: DeployMonitoring  
    # Deploy Function App and Event Grid subscriptions
    
  - stage: ValidateDeployment
    # Ensure monitoring is working correctly

Cost Analysis: Why This Approach Wins

Traditional Approach (Per-Key Vault Monitoring)

100 Key Vaults × $20/month per KV = $2,000/month
Annual cost: $24,000

This Approach (Centralized Monitoring)

Base infrastructure: $15-25/month
Event Grid events: $2-5/month  
Total: $17-30/month
Annual cost: $204-360

Savings: 98%+ reduction in monitoring costs

Detailed Cost Breakdown

Component               | Monthly Cost | Notes
------------------------|--------------|------------------------------
Function App (Basic B1) | $13.14       | Handles unlimited Key Vaults
Storage Account         | $1-3         | Function runtime storage
Log Analytics           | $2-15        | Centralized logging
Event Grid              | $0.50-2      | $0.60 per million operations
Action Group            | $0           | Email notifications free
Total                   | $17-33       | Scales to unlimited Key Vaults

Implementation Guide: Getting Started

Prerequisites

  1. Azure Management Group with Key Vaults to monitor
  2. Service Principal with appropriate permissions:
    • Reader on Management Group
    • Contributor on monitoring subscription
    • Event Grid Contributor on Key Vault subscriptions
  3. Azure DevOps or similar CI/CD platform

Step 1: Repository Setup

Create this folder structure:

keyvault-monitoring/
├── terraform/
│   ├── main.tf              # Infrastructure definitions
│   ├── variables.tf         # Configuration variables
│   ├── terraform.tfvars     # Your specific settings
│   └── function_code/       # Function App source code
├── azure-pipelines.yml      # CI/CD pipeline
└── docs/                    # Documentation

Step 2: Configuration

Update terraform.tfvars with your settings:

# Required configuration
notification_emails = [
  "your-team@company.com",
  "security@company.com"
]

primary_location = "East US"
log_retention_days = 90

# Optional: SMS for critical alerts
sms_notifications = [
  {
    country_code = "1"
    phone_number = "5551234567"
  }
]

# Optional: Webhook integration
webhook_url = "https://your-itsm-tool.com/api/alerts"

Step 3: Deployment

The pipeline automatically:

  1. Scans your management group for all Key Vaults
  2. Generates infrastructure code with discovered Key Vaults
  3. Deploys monitoring resources using Terraform
  4. Validates functionality with test events

Expected deployment time: 5-10 minutes

Step 4: Validation

Test the setup by creating a short-lived certificate:

# Create a short-lived test certificate (1-month validity is the minimum allowed)
az keyvault certificate create \
  --vault-name "your-test-keyvault" \
  --name "test-monitoring-cert" \
  --policy '{
    "issuerParameters": {"name": "Self"},
    "x509CertificateProperties": {
      "validityInMonths": 1,
      "subject": "CN=test-monitoring"
    }
  }'

# You should receive an alert within 5 minutes

Operational Excellence

Monitoring the Monitor

The solution includes comprehensive observability:

// Function App performance dashboard
FunctionAppLogs
| where TimeGenerated > ago(24h)
| summarize 
    ExecutionCount = count(),
    SuccessRate = (countif(Level != "Error") * 100.0) / count(),
    AvgDurationMs = avg(DurationMs)
| extend PerformanceScore = case(
    SuccessRate >= 99.5, "Excellent",
    SuccessRate >= 99.0, "Good", 
    "Needs Attention"
)

Advanced Features and Customizations

1. Integration with ITSM Tools

The webhook capability enables integration with enterprise tools:

// ServiceNow integration example
const serviceNowPayload = {
  short_description: `${objectType} '${objectName}' expiring in Key Vault '${keyVaultName}'`,
  urgency: severity === 'Sev1' ? '1' : '3',
  category: 'Security',
  subcategory: 'Certificate Management',
  caller_id: 'keyvault-monitoring-system'
};

2. Custom Alert Routing

Different Key Vaults can route to different teams:

// Route alerts based on Key Vault naming convention
const getNotificationGroup = (keyVaultName) => {
  if (keyVaultName.includes('prod-')) return 'production-team';
  if (keyVaultName.includes('dev-')) return 'development-team';
  return 'platform-team';
};

3. Business Hours Filtering

Critical alerts can bypass business hours, while informational alerts respect working hours:

const shouldSendImmediately = (severity, currentTime) => {
  if (severity === 'Sev1') return true; // Always send critical alerts
  
  const businessHours = isBusinessHours(currentTime);
  return businessHours || isNearBusinessHours(currentTime, 2); // 2 hours before business hours
};

Troubleshooting Common Issues

Issue: No Alerts Received

Symptoms:

Events are visible in Azure, but no notifications are arriving

Resolution Steps:

  1. Check the Action Group configuration in the Azure Portal
  2. Verify the Function App is running and healthy
  3. Review Function App logs for processing errors
  4. Validate Event Grid subscription is active

Issue: High Alert Volume

Symptoms:

Too many notifications, alert fatigue

Resolution:

// Implement intelligent batching: group related alerts so one combined notification is sent per vault and object type
const batchAlerts = (alerts, timeWindow = '15m') => {
  // timeWindow is illustrative here; a full implementation would hold each batch open
  // for that duration before flushing a single combined notification
  return alerts.reduce((batches, alert) => {
    const key = `${alert.keyVaultName}-${alert.objectType}`;
    batches[key] = batches[key] || [];
    batches[key].push(alert);
    return batches;
  }, {});
};

Issue: Missing Key Vaults

Symptoms: Some Key Vaults are not included in monitoring

Resolution:

  1. Re-run the discovery pipeline to pick up new Key Vaults (see the discovery sketch after this list)
  2. Verify service principal has Reader access to all subscriptions
  3. Check for Key Vaults in subscriptions outside the management group
]]>
https://blogs.perficient.com/2025/08/26/azure-keyvault-monitoring-automation/feed/ 0 386349
Part 2: Implementing Azure Virtual WAN – A Practical Walkthrough https://blogs.perficient.com/2025/08/21/part-2-implementing-azure-virtual-wan-a-practical-walkthrough/ https://blogs.perficient.com/2025/08/21/part-2-implementing-azure-virtual-wan-a-practical-walkthrough/#respond Thu, 21 Aug 2025 09:33:21 +0000 https://blogs.perficient.com/?p=386292

In Part 1, we discussed what Azure Virtual WAN is and why it’s a powerful solution for global networking. Now, let’s get hands-on and walk through the actual implementation, step by step, in a simple, conversational way.

[Figure: Architecture diagram]

1. Creating the Virtual WAN – The Network’s Control Plane

Virtual WAN is the heart of a global network, not just another resource. It replaces isolated VPN gateways per region, manual ExpressRoute configurations, and complex peering relationships.

Setting it up is easy:

  • Navigate to Azure Portal → Search “Virtual WAN”
  • Click Create and configure.
  • Name: Use a clear, convention-based name (e.g., vwan-global-prod); naming matters in enterprise environments
  • Resource Group: Create new rg-network-global (best practice for lifecycle management)
  • Type: Standard (Basic lacks critical features like ExpressRoute support)

Azure will set up the Virtual WAN in a few seconds. Now, the real fun begins.

2. Setting Up the Virtual WAN Hub – The Heart of The Network

The hub is where all connections converge. It’s like a major airport hub where traffic from different locations meets and gets efficiently routed. Without a hub, you’d need to configure individual gateways for every VPN and ExpressRoute connection, leading to higher costs and management overhead.

  • Navigate to the Virtual WAN resource → Click Hubs → New Hub.
  • Configure the Hub.
  • Region: Choose based on: Primary user locations & Azure service availability (some regions lack certain services)
  • Address Space: Assign a private IP range (e.g., 10.100.0.0/24).

Wait for deployment; this takes about 30 minutes (Azure is building VPN gateways, ExpressRoute gateways, and more behind the scenes).

Once done, the hub is ready to connect everything: offices, cloud resources, and remote users.

3. Connecting Offices via Site-to-Site VPN – Building Secure Tunnels

Branches and data centres need a reliable, encrypted connection to Azure. Site-to-Site VPN provides this over the public internet while keeping data secure. Without VPN tunnels, branch offices would rely on slower, less secure internet connections to access cloud resources, increasing latency and security risks.

  • In the Virtual WAN Hub, go to VPN (Site-to-Site) → Create VPN Site.
  • Name: branch-nyc-01
  • Private Address Space: e.g., 192.168.100.0/24 (must match on-premises network)
  • Link Speed: Set accurately for Azure’s QoS calculations
  • Download VPN Configuration: Azure provides a config file—apply it to the office’s VPN device (like a Cisco or Fortinet firewall).
  • Lastly, connect the VPN Site to the Hub.
  • Navigate to VPN connections → Create connection → Link the office to the hub.

Now, the office and Azure are securely connected.

4. Adding ExpressRoute – The Private Superhighway

For critical applications (like databases or ERP systems), VPNs might not provide enough bandwidth or stability. ExpressRoute gives us a dedicated, high-speed connection that bypasses the public internet. Without ExpressRoute, latency-sensitive applications (like VoIP or real-time analytics) could suffer from internet congestion or unpredictable performance.

  • Order an ExpressRoute Circuit: We can do this via the Azure Portal or through an ISP (like AT&T or Verizon).
  • Authorize the Circuit in Azure
  • Navigate to the Virtual WAN Hub → ExpressRoute → Authorize.
  • Linking it to Hub: Once it is authorized, connect the ExpressRoute circuit to the hub.

Now, the on-premises network has a dedicated, high-speed connection to Azure—no internet required.

5. Enabling Point-to-Site VPN for Remote Workers – The Digital Commute

Employees working from home need secure access to internal apps without exposing them to the public internet. P2S VPN lets them “dial in” securely from anywhere. Without P2S VPN, remote workers might resort to risky workarounds like exposing RDP or databases to the internet.

  • Configure P2S in The Hub
  • Navigate to VPN (Point-to-Site) → Configure.
  • Set Up Authentication: Choose certificate-based auth (secure and easy to manage) and upload the root/issuer certificates.
  • Assign an IP pool, e.g., 172.16.100.0/24 (this is where remote users will get their IPs; it must not overlap with the hub, VNet, or on-premises ranges, so don’t reuse the branch’s 192.168.100.0/24 from earlier).
  • Download & Distribute the VPN Client

Employees install this on their laptops to connect securely. Now, the team can access Azure resources from anywhere just like they’re in the office.

6. Linking Azure Virtual Networks (VNets) – The Cloud’s Backbone

Applications in one VNet (e.g., frontend servers) often need to talk to another (e.g., databases). Rather than complex peering, the Virtual WAN handles routing automatically. Without VNet integration, it needs manual peering and route tables for every connection, creating a management nightmare at scale.

  • The VNets need to be attached to the hub.
  • Navigate to the Hub → Virtual Network Connections → Add Connection.
  • Select the VNets, e.g., vnet-app (for applications) and vnet-db (for databases).
  • Azure handles the routing: traffic flows automatically through the hub; no manual route tables needed.

Now, the cloud resources communicate seamlessly.

Monitoring & Troubleshooting

Networks aren’t “set and forget.” We need visibility to prevent outages and quickly fix issues. We can use tools like Azure Monitor, which tracks VPN/ExpressRoute health—like a dashboard showing all trains (data packets) moving smoothly. Again, Network Watcher can help to diagnose why a branch can’t connect.

Common Problems & Fixes

  • When VPN connections fail, the problem is often a mismatched shared key—simply re-enter it on both ends.
  • If ExpressRoute goes down, check with your ISP—circuit issues usually require provider intervention.
  • When VNet traffic gets blocked, verify route tables in the hub—missing routes are a common culprit.
]]>
https://blogs.perficient.com/2025/08/21/part-2-implementing-azure-virtual-wan-a-practical-walkthrough/feed/ 0 386292
Live Agent Transfer in Copilot Studio Using D365 Omnichannel – Step-by-Step Implementation https://blogs.perficient.com/2025/08/18/live-agent-transfer-in-copilot-studio-using-d365-omnichannel-step-by-step-implementation/ https://blogs.perficient.com/2025/08/18/live-agent-transfer-in-copilot-studio-using-d365-omnichannel-step-by-step-implementation/#respond Mon, 18 Aug 2025 09:29:53 +0000 https://blogs.perficient.com/?p=385924

Welcome to Part 2 of this blog series! In Part 1, we discussed the high-level architecture and use case for enabling live agent transfer from a chatbot.

In this post, I’ll walk you through the actual steps to build this feature using:

  • Copilot Studio
  • D365 Omnichannel for Customer Service
  • Customer Service Workspace
  • Customer Voice

Prerequisites

  • Dynamics 365 Customer Service license + Omnichannel Add-on
  • Admin access to D365 and Power Platform Admin Center
  • Agents added to your environment with proper roles

Step-by-Step Implementation

1: Set Up Omnichannel Workstream

  • Go to Customer Service Admin Center
  • Create a Workstream for live chat
  • Link it to a queue and assign agents

[Screenshots: Customer Service Workspace]

2: Create Chat Channel

  • In the same admin center, create a Chat Channel
  • Configure greeting, authentication (optional), timeouts
  • Copy the embed code to add to your portal or test site

[Screenshot: Customer Service chat channel for Copilot Studio]

3: Create a Bot in Copilot Studio

  • Create a bot and add core topics
  • Create a new topic: “Escalate to Agent”
  • Add trigger phrases like:
    • “Talk to someone.”
    • “Escalate to human.”
    • “Need real help”
  • Use the Transfer to Agent node
    • Select the Chat Channel
    • Add a fallback message in case agents are unavailable

[Screenshot: Copilot Studio]

4: Test the Flow

  • Open your bot via the portal or the embedded site
  • Trigger the escalation topic
  • Bot should say: “Transferring you to a live agent…”
  • An available agent receives the chat in the Customer Service Workspace
  • The agent sees the whole chat history and continues the conversation

[Screenshot: Copilot Studio and Customer Service]

5: [Optional] Post-Conversation Feedback Using Customer Voice

To collect feedback after the chat ends, enable the native post-conversation survey feature in Omnichannel.

Steps:

  1. Create a feedback survey in Microsoft Customer Voice
  2. Go to Customer Service Admin Center > Workstream > Behavior tab
  3. Enable post-conversation survey
  4. Select “Customer Voice.”

[Screenshot: Customer Voice]

That’s it – once the chat ends, users will be prompted with your feedback form automatically.

Real Scenarios Tested

  • User types “Speak to a human.”
  • Bot transfers to live agent
  • Agent sees the customer transcript and profile
  • No agent? Bot shows “All agents are currently busy.”

Final Outcome

This setup enables a production-ready escalation workflow with:

  • Low-code development
  • Reusable components
  • Smooth agent handoff
  • Agent empowerment with full context

Conclusion

This approach balances bot automation with human empathy by allowing live agent transfers when needed. Copilot Studio and D365 Omnichannel work well together for modern, scalable customer service solutions.

]]>
https://blogs.perficient.com/2025/08/18/live-agent-transfer-in-copilot-studio-using-d365-omnichannel-step-by-step-implementation/feed/ 0 385924
Live Agent Escalation in Copilot Studio Using D365 Omnichannel – Architecture and Use Case https://blogs.perficient.com/2025/08/13/live-agent-escalation-in-copilot-studio-using-d365-omnichannel-architecture-and-use-case/ https://blogs.perficient.com/2025/08/13/live-agent-escalation-in-copilot-studio-using-d365-omnichannel-architecture-and-use-case/#respond Wed, 13 Aug 2025 05:58:08 +0000 https://blogs.perficient.com/?p=385242

With the increasing use of AI chatbots, businesses often face one key challenge: when and how to seamlessly hand over the conversation from a bot to a human agent.

In this two-part series, I’ll walk you through how we used Microsoft Copilot Studio and Dynamics 365 Omnichannel to build a live agent escalation feature. Part 1 will focus on the why, what, and architecture, and Part 2 will deep dive into the actual implementation.

Problem Statement

Chatbots are great for handling FAQs and basic support, but they fall short when:

  • A customer is frustrated or confused

  • Complex or sensitive issues arise

  • Immediate human empathy or decision-making is needed

In such cases, a real-time live agent transfer becomes essential.

High-Level Use Case

We built a chatbot for a customer portal using Copilot Studio. While it handles common queries, we also needed to:

  • Escalate conversations to live agents if the user asks for it

  • Preserve chat context during handoff

  • Route to the correct agent or queue based on rules

  • Provide agents with complete chat history and customer info

Architecture Overview

Here’s how the components interact:

[User] → [Copilot Studio Bot] → [Transfer to Agent Node] → [Omnichannel Workstream] → [Queue with Available Agents] → [Agent in Customer Service Workspace]

[Figure: Copilot Studio live agent escalation architecture]

Tools Involved

  • Copilot Studio: Low-code chatbot builder

  • D365 Omnichannel for Customer Service: Real-time chat and routing

  • Customer Service Workspace: Where agents receive and respond to chats

  • Web Page: To host the bot on a public-facing portal

Benefits of This Integration

  • Bot handles everyday tasks, reducing agent load

  • Smooth escalation without losing chat context

  • Intelligent routing via workstreams and queues

  • Agent productivity improves with transcript visibility and customer profile.

Conclusion

In this first part of our blog series, we explored the high-level architecture and components involved in enabling a seamless live agent transfer from Copilot Studio to a real support agent via D365 Omnichannel.

By combining the conversational power of Copilot Studio with the robust routing and session management capabilities of Omnichannel for Customer Service, organizations can elevate their customer support experience by offering the best of both automation and human interaction.

What’s Next in Part 2?

In Part 2, I’ll walk you through:

  • Setting up Omnichannel in D365

  • Creating the bot in Copilot Studio

  • Configuring escalation logic

  • Testing the live agent transfer end-to-end

Stay tuned!

]]>
https://blogs.perficient.com/2025/08/13/live-agent-escalation-in-copilot-studio-using-d365-omnichannel-architecture-and-use-case/feed/ 0 385242
Perficient’s Pradeep Jain Named Microsoft FastTrack Solution Architect for Power Automate https://blogs.perficient.com/2025/08/12/perficients-pradeep-jain-named-microsoft-fasttrack-solution-architect-for-power-automate/ https://blogs.perficient.com/2025/08/12/perficients-pradeep-jain-named-microsoft-fasttrack-solution-architect-for-power-automate/#respond Tue, 12 Aug 2025 22:27:16 +0000 https://blogs.perficient.com/?p=386059

Perficient is delighted to announce that Pradeep Jain, Senior Solution Architect, has been honored by Microsoft as a FastTrack Recognized Solution Architect for Power Automate—a distinction reserved for only a select group of global experts.

This recognition, conferred by the Microsoft Power Platform product engineering team, celebrates architects who deliver high-quality, enterprise-scale Power Platform solutions with consistency and architectural excellence. Pradeep is one of only 31 architects worldwide to currently hold this title for Power Automate.

Pradeep’s distinction is built on an impressive track record:

  • 15+ enterprise-scale implementations across industries including customer service, retail, technology, and oil & gas.
  • Expertise spanning Power Automate, Power Apps, and Dynamics 365, with AI capabilities seamlessly embedded into solutions.
  • Solutions that streamline operations, empower frontline teams, and generate measurable business value.
  • A commitment to governance, scalable architecture, and human-centered design.

Over his 13 years as a solution architect, Pradeep has become a trusted advisor to clients, guiding them through their digital maturity journey and ensuring technology investments deliver real-world impact. Beyond delivering transformative projects, he’s known for mentoring emerging architects, contributing to centers of excellence, and fostering high-performing teams across Perficient.

Being named a Microsoft FastTrack Recognized Solution Architect is not just a title—it’s validation from Microsoft’s own engineering leaders that Pradeep consistently operates at the highest level of architectural excellence. It places him among an elite global community of experts who help shape the future of Microsoft Power Platform.

 

Perficient’s Power Platform Impact

Pradeep’s recognition also reflects the depth of expertise within Perficient’s Power Platform practice. Our team helps organizations accelerate innovation and maximize their Microsoft investments by combining Power Automate, Power Apps, Power BI, and Copilot Studio with enterprise-ready governance and integration across Microsoft 365, Dynamics 365, Azure, and other business systems. From strategy and proof-of-value engagements to full-scale deployments, we focus on delivering secure, scalable, and AI-enabled solutions that streamline processes, empower teams, and drive measurable outcomes.

 

Congratulations, Pradeep! Your leadership, innovation, and dedication to empowering clients with intelligent, scalable solutions embody the best of Perficient’s commitment to digital transformation.

Learn more about Microsoft’s FastTrack Recognized Solution Architect program here and read more about Perficient’s Power Platform practice here.

]]>
https://blogs.perficient.com/2025/08/12/perficients-pradeep-jain-named-microsoft-fasttrack-solution-architect-for-power-automate/feed/ 0 386059