Platforms and Technology Articles / Blogs / Perficient https://blogs.perficient.com/category/services/platforms-and-technology/ Expert Digital Insights Mon, 19 Jan 2026 21:00:31 +0000 en-US hourly 1 https://blogs.perficient.com/files/favicon-194x194-1-150x150.png Platforms and Technology Articles / Blogs / Perficient https://blogs.perficient.com/category/services/platforms-and-technology/ 32 32 30508587 Part 2: Building Mobile AI: A Developer’s Guide to On-Device Intelligence https://blogs.perficient.com/2026/01/19/part-2-building-mobile-ai-a-developers-guide-to-on-device-intelligence/ https://blogs.perficient.com/2026/01/19/part-2-building-mobile-ai-a-developers-guide-to-on-device-intelligence/#respond Mon, 19 Jan 2026 22:27:11 +0000 https://blogs.perficient.com/?p=389702

Subtitle: Side-by-side implementation of Secure AI on Android (Kotlin) and iOS (Swift).

In Part 1, we discussed why we need to move away from slow, cloud-dependent chatbots. Now, let’s look at how to build instant, on-device intelligence. While native code is powerful, managing two separate AI stacks can be overwhelming.

Before we jump into platform-specific code, we need to talk about the “Bridge” that connects them: Google ML Kit.

The Cross-Platform Solution: Google ML Kit

If you don’t want to maintain separate Core ML (iOS) and custom Android models, Google ML Kit is your best friend. It acts as a unified wrapper for on-device machine learning, supporting both Android and iOS.

It offers two massive advantages:

  1. Turnkey Solutions: Instant APIs for Face Detection, Barcode Scanning, and Text Recognition that work identically on both platforms.
  2. Custom Model Support: You can train a single TensorFlow Lite (.tflite) model and deploy it to both your Android and iOS apps using ML Kit’s custom model APIs.

For a deep dive on setting this up, bookmark the official ML Kit guide.


The Code: Side-by-Side Implementation

Below, we compare the implementation of two core features: Visual Intelligence (Generative AI) and Real-Time Inference (Computer Vision). You will see that despite the language differences, the architecture for the “One AI” future is remarkably similar.

Feature 1: The “Brain” (Generative AI & Inference)

On Android, we leverage Gemini Nano (via ML Kit’s Generative AI features). On iOS, we use a similar asynchronous pattern to feed data to the Neural Engine.

Android (Kotlin)

We check the model status and then run inference. The system manages the NPU access for us.

// GenAIImageDescriptionScreen.kt
val featureStatus = imageDescriber.checkFeatureStatus().await()

when (featureStatus) {
    FeatureStatus.AVAILABLE -> {
        // The model is ready on-device
        val request = ImageDescriptionRequest.builder(bitmap).build()
        val result = imageDescriber.runInference(request).await()
        onResult(result.description)
    }
    FeatureStatus.DOWNLOADABLE -> {
        // Silently download the model in the background
        imageDescriber.downloadFeature(callback).await()
    }
}

iOS (Swift)

We use an asynchronous loop to continuously pull frames and feed them to the Core ML model.

// DataModel.swift
func runModel() async {
    // Load the model, bailing out rather than crashing if it fails
    do { try loadModel() } catch { return }
    
    while !Task.isCancelled {
        // Thread-safe access to the latest camera frame
        let image = lastImage.withLock({ $0 })
        
        if let pixelBuffer = image?.pixelBuffer {
            // Run inference on the Neural Engine
            try? await performInference(pixelBuffer)
        }
        // Yield to prevent UI freeze
        try? await Task.sleep(for: .milliseconds(50))
    }
}

Feature 2: The “Eyes” (Real-Time Vision)

For tasks like Face Detection or Object Tracking, speed is everything. We need 30+ frames per second (a budget of roughly 33 ms per frame) to ensure the app feels responsive.

Android (Kotlin)

We use FaceDetection from ML Kit. The FaceAnalyzer runs on every frame, calculating probabilities for “liveness” (smiling, eyes open) instantly.

// FacialRecognitionScreen.kt
FaceInfo(
    confidence = 1.0f,
    // Detect micro-expressions for liveness check
    isSmiling = face.smilingProbability?.let { it > 0.5f } ?: false,
    eyesOpen = face.leftEyeOpenProbability?.let { left -> 
        face.rightEyeOpenProbability?.let { right ->
            left > 0.5f && right > 0.5f 
        }
    } ?: true
)

iOS (Swift)

We process the prediction result and update the UI immediately. Here, we even visualize the confidence level using color, providing instant feedback to the user.

// ViewfinderView.swift
private func updatePredictionLabel() {
    for result in prediction {
        // Dynamic feedback based on confidence
        let probability = result.probability
        let color = getColorForProbability(probability) // Red to Green transition
        
        let text = "\(result.label): \(String(format: "%.2f", probability))"
        // Update UI layer...
    }
}

Feature 3: Secure Document Scanning

Sometimes you just need a perfect scan without the cloud risk. Android provides a system-level intent that handles edge detection and perspective correction automatically.

Android (Kotlin)

// DocumentScanningScreen.kt
val options = GmsDocumentScannerOptions.Builder()
    .setGalleryImportAllowed(false) // Force live camera for security
    .setPageLimit(5)
    .setResultFormats(RESULT_FORMAT_PDF)
    .build()

scanner.getStartScanIntent(activity).addOnSuccessListener { intentSender ->
    scannerLauncher.launch(IntentSenderRequest.Builder(intentSender).build())
}

Conclusion: One Logic, Two Platforms

Whether you are writing Swift for an iPhone 17 Pro or Kotlin for a medical Android tablet, the paradigm has shifted.

  1. Capture locally.
  2. Infer on the NPU.
  3. React instantly.

By building this architecture now, you are preparing your codebase for Spring 2026, where on-device intelligence will likely become the standard across both ecosystems.

Reference: Google ML Kit Documentation

Part 1: Mobile AI 2026: Why On-Device Intelligence is the New Standard https://blogs.perficient.com/2026/01/19/part-1-mobile-ai-2026-why-on-device-intelligence-is-the-new-standard/ https://blogs.perficient.com/2026/01/19/part-1-mobile-ai-2026-why-on-device-intelligence-is-the-new-standard/#respond Mon, 19 Jan 2026 20:15:36 +0000 https://blogs.perficient.com/?p=389691

Subtitle: From Critical Medical Hardware to the Apple Ecosystem, the future of mobile intelligence is local, instant, and unified.

We are standing at a hardware tipping point. For the last decade, “AI” on mobile effectively meant one thing: sending data to the cloud and waiting for an answer. Especially for those chatbots, adding AI to an app meant integrating a slow, spinning loading indicator while data traveled to a server, waited in a queue, and eventually returned text. Users are tired of waiting. They are overwhelmed by generic bots that feel disconnected from the app they are actually using.

But as we move toward 2026, the script is flipping. Phone manufacturers are shipping devices with neural engines (NPUs) so powerful they rival the desktop GPUs of just a few years ago. This shift isn’t just about faster chatbots or smoother animations; it is reshaping critical industries like healthcare and unifying the mobile ecosystem under a single dominant model family: Google Gemini.

The Hardware Revolution: The “Brain” in Your Pocket

The defining trend of the 2025-2026 cycle is the explosion of Hardware Acceleration. Modern mobile processors—whether it’s the latest Snapdragons powering Android flagships or the A-series chips in iPhones—are no longer just Central Processing Units (CPUs). They are dedicated AI powerhouses capable of “always-on” generative tasks.

This hardware leap means we can now run massive models (like Gemini Nano) directly on the device. The benefits are immediate and transformative:

  • Zero Latency: No network round-trips. The intelligence feels instantaneous.
  • Total Privacy: Sensitive data never leaves the phone’s secure enclave.
  • Offline Reliability: Intelligence works in elevators, basements, and airplanes.

The Critical Use Case: Android in Healthcare

Nowhere is this shift more vital than in the rapidly expanding world of Medical Devices. Android has quietly become the operating system of choice for specialized medical hardware, from handheld ultrasound scanners to patient vitals monitors.

Why is the edge critical here? Because medical environments are unforgiving. A doctor in a rural clinic or a paramedic in a speeding ambulance cannot rely on spotty 5G connections to process a patient’s vitals or analyze an X-ray.

  • Privacy Compliance: Processing sensitive patient data (like facial analysis for pain detection) strictly on-device removes complex regulatory cloud compliance hurdles. The data stays with the patient.
  • Reliability: An Android-based diagnostic tool must work instantly, 100% of the time, regardless of Wi-Fi status.
  • Adoption: We are seeing a massive surge in smart, connected medical tools that rely on commodity Android hardware to deliver hospital-grade diagnostics at a fraction of the cost.

The “One AI” Future: Gemini on iOS & Android

Perhaps the most compelling reason to bet on Gemini is the upcoming unification of the mobile AI landscape. Reports indicate that Apple is partnering with Google to integrate Gemini models into iOS 18 and macOS Sequoia for complex reasoning tasks and summaries, a rollout expected to mature by Spring 2026.

While Apple will handle basic tasks with its own on-device models, it is leaning on Gemini’s superior reasoning for the “heavy lifting.” This creates a unique opportunity for developers:

  • Unified Intelligence: Learning to engineer prompts and integrations for Gemini means you are effectively targeting the entire mobile market—both the Android medical devices and the premium iPhone user base.
  • Cross-Platform Consistency: A feature built on Gemini’s logic will behave consistently whether it’s running on a Samsung Galaxy Tab in a hospital or an iPhone 17 in a consumer’s hand.
  • Future-Proofing: With these updates expected shortly, building expertise in Gemini now puts us ahead of the curve when the feature goes mainstream across billions of Apple devices.

In Part 2, we will leave the strategy behind and dive into the code to see how we are already building this future today on iOS and Android.

Building Custom Search Vertical in SharePoint Online for List Items with Adaptive Cards https://blogs.perficient.com/2026/01/14/build-custom-search-vertical-in-sharepoint-for-list-items-with-adaptive-cards/ https://blogs.perficient.com/2026/01/14/build-custom-search-vertical-in-sharepoint-for-list-items-with-adaptive-cards/#respond Wed, 14 Jan 2026 06:25:15 +0000 https://blogs.perficient.com/?p=389614

This blog explains how to build a custom search vertical in SharePoint Online that targets a specific list through a dedicated content type. It covers indexing the important columns and mapping them to managed properties for search. A result type is then configured with an Adaptive Card JSON template to display metadata such as title, category, author, and published date in a clear, modern format. Next, a new vertical is added on the hub site, giving users a focused tab for Article results. The outcome is a streamlined search experience that highlights curated content with consistent metadata and an engaging presentation.

For example, we will start with the assumption that a custom content type is already in place. This content type includes the following columns:

  • Article Category – internal name article_category
  • Article Topic – internal name article_topic

We’ll also assume that a SharePoint list has been created which uses this content type, with the ContentTypeID: 0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B

With the content type and list ready, the next steps focus on configuring search so these items can be surfaced effectively in a dedicated vertical.

Index Columns in the List

Indexing columns optimizes frequently queried metadata, such as category or topic, for faster search. This improves performance and makes it easier to filter and refine results in a custom vertical.

  • Go to List Settings → Indexed Columns.
  • Ensure article_category and article_topic are indexed for faster search queries.

Create Managed Properties

First, check which RefinableString managed properties are available in your environment. Once you have identified them, configure them as shown below:

Refinable string     | Field name       | Alias name      | Crawled property
RefinableString101   | article_topic    | ArticleTopic    | ows_article_topic
RefinableString102   | article_category | ArticleCategory | ows_article_category
RefinableString103   | article_link     | ArticleLink     | ows_article_link

Tip: Creating an alias name for a managed property makes it easier to read and reference. This step is optional — you can also use the default RefinableString name directly.

To configure these fields, follow the steps below:

  • Go to the Microsoft Search Admin Center → Search schema.
  • Go to Search Schema → Crawled Properties
  • Look for the field (e.g., article_topic or article_category) and find its crawled property (it starts with ows_)
  • Click on the property → Add mapping
  • A popup will open → select an unused RefinableString property (e.g., RefinableString101, RefinableString102) → click the “Ok” button
  • Click “Save”
  • Likewise, create managed properties for all the required columns.

Once mapped, these managed properties become queryable, retrievable, and refinable, which means they can be used in search queries, refiners (filters), result types, and verticals.

Creating a Custom Search Vertical

This lets you add a dedicated tab that filters results to specific content, improving findability and user experience. It ensures users quickly access targeted items like lists, libraries, or content types without sifting through all search results. In this example, we will set the filter for a specific articles list.

Follow the steps below to create and configure a custom search vertical from the admin center:

  • In “Verticals” tab, add a new value as per following configuration:
    • Name = “Articles”
    • Content source = SharePoint and OneDrive
    • KQL query = This is the actual filter, where we specify which items from the specific list should appear in search results. In our example, we will set it as: ContentTypeId:0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B*
    • Filters: Filters are an optional setting that allows users to narrow search results based on specific criteria. In our example, we can add a filter by category. To add a “Category” filter on the search page, follow the steps below:
      • Click on add filter
      • Select “RefinableString102” (this is the refinable string managed property for the “article_category” column, as set up in the steps above)
      • Name = “Category” or another desired string to display on the search page

Set Vertical filter

Creating a Result Type

Creating a new result type in the Microsoft Search Admin Center lets you define how specific content (like items from a list or a content type) is displayed in search results. In this example, we set some rules and use an Adaptive Card template to make the results easier to scan and more engaging.

Following are the steps to create a new result type in the admin center.

  • Go to admin center, https://admin.cloud.microsoft
  • Settings → Search & intelligence
  • In “Customizations”, go to “Result types”
  • Add new result types with the following configurations:
    • Name = “ArticlesResults” (Note: Specify any name you want to display in the search vertical)
    • Content source = SharePoint and OneDrive
    • Rules
      • Type of content = SharePoint list item
      • ContentTypeId starts with 0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B (Note: the content type ID created in the steps above)
      • Layout = Paste the Adaptive Card JSON used to render each search result. The following JSON displays the result:
        {
           "type": "AdaptiveCard",
          "version": "1.3",
          "body": [
            {
              "type": "ColumnSet",
              "columns": [
                {
                  "type": "Column",
                  "width": "auto",
                  "items": [
                    {
                    "type": "Image",
                    "url": <url of image/thumbnail to be displayed for each displayed item>,
                    "altText": "Thumbnail image",
                    "horizontalAlignment": "Center",
                    "size": "Small"
                    }
                  ],
                  "horizontalAlignment": "Center"
                },
                {
                  "type": "Column",
                  "width": 10,
                  "items": [
                    {
                      "type": "TextBlock",
                      "text": "[${ArticleTopic}](${first(split(ArticleLink, ','))})",
                      "weight": "Bolder",
                      "color": "Accent",
                      "size": "Medium",
                      "maxLines": 3
                    },
                    {
                      "type": "TextBlock",
                      "text": "**Category:** ${ArticleCategory}",
                      "spacing": "Small",
                      "maxLines": 3
                    }
                  ],
                  "spacing": "Medium"
                }
              ]
            }
          ],
          "$schema": "http://adaptivecards.io/schemas/adaptive-card.json"
        }

        Set Result type adaptive card

When you set up everything properly, the final output will look like this:

Final search results

Conclusion

In this post, we built a dedicated search vertical in SharePoint Online for list items rendered with Adaptive Cards, which changes how users experience search. By indexing key columns, mapping them to managed properties, and designing a tailored result type, the important metadata becomes clearly visible. The Adaptive Card adds a modern presentation layer that is easier to scan and more visually appealing. Finally, publishing the vertical gives users a dedicated tab for accessing this curated list content, making search easier to work with and improving the overall user experience.

Building a Reliable Client-Side Token Management System in Flutter https://blogs.perficient.com/2026/01/08/building-a-reliable-client-side-token-management-system-in-flutter/ https://blogs.perficient.com/2026/01/08/building-a-reliable-client-side-token-management-system-in-flutter/#respond Fri, 09 Jan 2026 05:15:35 +0000 https://blogs.perficient.com/?p=389472

In one of my recent Flutter projects, I had to implement a session token mechanism that behaved very differently from standard JWT-based authentication systems.

The backend issued a 15-minute session token, but with strict constraints:

  • No expiry timestamp was provided
  • The server extended the session only when the app made an API call
  • Long-running user workflows depended entirely on session continuity

If the session expired unexpectedly, users could lose progress mid-flow, leading to inconsistent states and broken experiences. This meant the entire token lifecycle had to be controlled on the client, in a predictable and self-healing way.

This is the architecture I designed.


  1. The Core Challenge

The server provided the token but not its expiry. The only rule:

“Token is valid for 15 minutes, and any API call extends the session.”

To protect long-running user interactions, the application needed to:

  • Track token lifespan locally
  • Refresh or extend sessions automatically
  • Work uniformly across REST and GraphQL
  • Survive app backgrounding and resuming
  • Preserve in-progress workflows without UI disruption

This required a fully client-driven token lifecycle engine.


  2. Client-Side Countdown Timer

Since expiry data was not available from the server, I implemented a local countdown timer to represent session validity.

How it works:

  • When token is obtained → start a 15-minute timer
  • When any API call happens → reset the timer (because backend extends session)
  • If the timer is about to expire:
    • Active user flow → show a visible countdown
    • Passive or static screens → attempt silent refresh
  • If refresh fails → gracefully log out logged-in users

This timer became the foundation of the entire system.
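The countdown logic above is platform-agnostic. Here is a minimal sketch in Python (the production app used Dart's Timer; the class and method names here are illustrative). Injecting the clock keeps the logic testable:

```python
import time

SESSION_SECONDS = 15 * 60  # the backend's stated 15-minute window

class SessionTimer:
    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable for tests
        self._expires_at = None      # None until a token is obtained

    def start(self):
        # Token obtained: open a fresh 15-minute window
        self._expires_at = self._clock() + SESSION_SECONDS

    def reset(self):
        # Any API call: the backend extended the session, so restart the window
        self.start()

    def remaining(self):
        if self._expires_at is None:
            return 0.0
        return max(0.0, self._expires_at - self._clock())

    @property
    def is_active(self):
        return self.remaining() > 0
```

Near-expiry UI countdowns and silent refresh attempts can then key off `remaining()`.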

 



  3. Handling App Lifecycle Transitions

Users frequently minimize or switch apps. To maintain session correctness:

  • On background: pause the timer and store timestamp
  • On resume: calculate elapsed background time
    • If still valid → refresh & restart timer
    • If expired → re-authenticate or log out

This prevented accidental session expiry just because the app was minimized.
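The resume check reduces to comparing elapsed background time against the window. A sketch, assuming a monotonic clock (the function name is illustrative):

```python
import time

SESSION_WINDOW = 15 * 60  # seconds

def on_resume(backgrounded_at, now=None):
    """Decide what to do when the app returns to the foreground."""
    now = time.monotonic() if now is None else now
    elapsed = now - backgrounded_at
    # Still inside the window: refresh the token and restart the timer.
    # Outside it: the session is gone, so re-authenticate or log out.
    return "refresh" if elapsed < SESSION_WINDOW else "reauth"
```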


 


  4. REST Auto-Refresh with Dio Interceptors

For REST APIs, Dio interceptors provided a clean, centralized way to manage token refresh.

Interceptor Responsibilities:

  • If timer is null → start timer
  • If timer exists but is inactive,
    • token expired → refresh token
    • perform silent re-login if needed
  • If timer is active → reset the timer
  • Inject updated token into headers

Conceptual Implementation:

class SessionInterceptor extends Interceptor {
  @override
  Future<void> onRequest(
    RequestOptions options,
    RequestInterceptorHandler handler,
  ) async {
    if (sessionTimer == null) {
      // First request: start the 15-minute countdown
      startSessionTimer();
    } else if (!sessionTimer!.isActive) {
      // Window elapsed without an API call: refresh before proceeding
      await refreshSession();
      if (isAuthenticatedUser) {
        await silentReauthentication();
      }
    }

    // Inject the (possibly refreshed) token and extend the local window
    options.headers['Authorization'] = 'Bearer $currentToken';
    resetSessionTimer();
    handler.next(options);
  }
}

This made REST calls self-healing, with no manual checks in individual services.


  5. GraphQL Auto-Refresh with Custom AuthLink

GraphQL required custom handling because it doesn’t support interceptors.
I implemented a custom AuthLink where token management happened inside getToken().

AuthLink Responsibilities:

  • Timer null → start
  • Timer inactive,
    • refresh token
    • update storage
    • silently re-login if necessary
  • Timer active → reset timer and continue

GraphQL operations then behaved consistently with REST, including auto-refresh and retry.

Conceptual implementation:

class CustomAuthLink extends AuthLink {
  CustomAuthLink()
      : super(
          getToken: () async {
            if (sessionTimer == null) {
              startSessionTimer();
              return currentToken;
            }
            if (!sessionTimer!.isActive) {
              await refreshSession();
              if (isAuthenticatedUser) {
                await silentReauthentication();
              }
              return currentToken;
            }
            resetSessionTimer();
            return currentToken;
          },
        );
}


  6. Silent Session Extension for Authenticated Users

When an authenticated user’s session was extended:

  • token refresh happened in background
  • user data was re-synced silently
  • no screens were reset
  • no interruptions were shown

This was essential for long-running user workflows.
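The silent-extension sequence can be expressed as a small async sketch (Python here for brevity; all function names are hypothetical):

```python
import asyncio

# Sketch: refresh the token and re-sync user data in the background,
# returning the new token without any UI navigation or screen resets.
async def silently_extend_session(refresh_token, resync_user):
    token = await refresh_token()   # background network call
    await resync_user(token)        # silent re-sync of cached user data
    return token                    # caller stores it; the UI is untouched
```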


Engineering Lessons Learned

  • When token expiry information is not provided by the backend, session management must be treated as a first-class client responsibility rather than an auxiliary concern. Deferring this logic to individual API calls or UI layers leads to fragmentation and unpredictable behavior.
  • A client-side timer, when treated as the authoritative representation of session validity, significantly simplifies the overall design. By anchoring all refresh, retry, and termination decisions to a single timing mechanism, the system becomes easier to reason about, test, and maintain.
  • Application lifecycle events have a direct and often underestimated impact on session correctness. Explicitly handling backgrounding and resumption prevents sessions from expiring due to inactivity that does not reflect actual user intent or engagement.
  • Centralizing session logic for REST interactions through a global interceptor reduces duplication and eliminates inconsistent implementations across services. This approach ensures that every network call adheres to the same session rules without requiring feature-level awareness.
  • GraphQL requires a different integration point, but achieving behavioral parity with REST is essential. Embedding session handling within a custom authorization link proved to be the most reliable way to enforce consistent session behavior across both communication models.
  • Silent session extension for authenticated users is critical for preserving continuity during long-running interactions. Refreshing sessions transparently avoids unnecessary interruptions and prevents loss of in-progress work.
  • In systems where backend constraints limit visibility into session expiry, a client-driven lifecycle model is not merely a workaround. It is a necessary architectural decision that improves reliability, protects user progress, and provides predictable behavior under real-world usage conditions.
Model Context Protocol (MCP) – Simplified https://blogs.perficient.com/2026/01/08/model-context-protocol-mcp-simplified/ https://blogs.perficient.com/2026/01/08/model-context-protocol-mcp-simplified/#respond Thu, 08 Jan 2026 07:50:15 +0000 https://blogs.perficient.com/?p=389415

What is MCP?

Model Context Protocol (MCP) is an open-source standard for integrating AI applications to external systems. With AI use cases getting traction more and more, it becomes evident that AI applications tend to connect to multiple data sources to provide intelligent and relevant responses.

Earlier AI systems interacted with users through Large language Models (LLM) that leveraged pre-trained datasets. Then, in larger organizations, business users work with AI applications/agents expect more relevant responses from enterprise dataset, from where Retrieval Augmented Generation (RAG) came into play.

Now, AI applications/agents are expected to produce more accurate responses leveraging latest data, that requires AI systems to interact with multiple data sources and fetch accurate information. When multi-system interactions are established, it requires the communication protocol to be more standardized and scalable. That is where MCP enables a standardized way to connect AI applications to external systems.

 

Architecture

MCP Architecture

Using MCP, AI applications can connect to data sources (e.g., local files, databases), tools, and workflows, enabling them to access key information and perform tasks. In enterprise scenarios, AI applications/agents can connect to multiple databases across the organization, empowering users to analyze data using natural language chat.

Benefits of MCP

MCP offers a wide range of benefits:

  • Development: MCP reduces development time and complexity when building an AI application/agent or integrating with one. Its built-in capability discovery feature makes it simple to integrate an MCP host with multiple MCP servers.
  • AI applications or agents: MCP provides access to an ecosystem of data sources, tools, and apps, which enhances capabilities and improves the end-user experience.
  • End-users: MCP results in more capable AI applications or agents that can access your data and take actions on your behalf when necessary.

MCP – Concepts

At the top level of MCP concepts, there are three entities,

  • Participants
  • Layers
  • Data Layer Protocol

 

Participants

MCP follows a client-server architecture in which an MCP host (an AI application such as an enterprise chatbot) establishes connections to one or more MCP servers. The MCP host accomplishes this by creating an MCP client for each MCP server. Each MCP client maintains a dedicated connection with its MCP server.

The key participants of MCP architecture are:

  • MCP Host: The AI application that coordinates and manages one or more MCP clients
  • MCP Client: A component that maintains a dedicated connection to an MCP server and obtains context from it for the MCP host
  • MCP Server: A program that provides context to MCP clients (e.g., generates responses or performs actions on the user’s behalf)
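The one-client-per-server relationship can be sketched in a few lines (class names are illustrative, not taken from an MCP SDK):

```python
# Illustrative sketch: an MCP host creates one dedicated client per server.
class MCPClient:
    def __init__(self, server_name):
        self.server_name = server_name  # each client talks to exactly one server

class MCPHost:
    def __init__(self, server_names):
        # one MCPClient per configured MCP server
        self.clients = {name: MCPClient(name) for name in server_names}

host = MCPHost(["finance-db", "crm", "file-store"])
```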

MCP Client-Server

Layers

MCP consists of two layers:

  • Data layer – Defines JSON-RPC based protocol for client-server communication including,
    • lifecycle management – initiate connection, capability discovery & negotiation, connection termination
    • Core primitives – enabling server features like tools for AI actions, resources for context data, prompt templates for client-server interaction and client features like ask client to sample from host LLM, log messages to client
    • Utility features – Additional capabilities like real-time notifications, track progress for long-running operations
  • Transport Layer – Manages communication channels and authentication between clients and servers. It handles connection establishment, message framing and secure communication between MCP participants

Data Layer Protocol

The core part of MCP is defining the schema and semantics between MCP clients and MCP servers. It is the part of MCP that defines the ways developers can share context from MCP servers to MCP clients.

MCP uses JSON-RPC 2.0 as its underlying RPC protocol. Client and servers send requests to each other and respond accordingly. Notifications can be used when no response is required.
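As a concrete illustration, here is what a JSON-RPC 2.0 exchange on the data layer might look like. The method names ("tools/call", "notifications/tools/list_changed") follow the MCP specification; the tool name and arguments are hypothetical:

```python
import json

# A request carries an id; the matching response echoes that id.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_account_balance", "arguments": {"account_id": "A-123"}},
}
wire = json.dumps(request)  # serialized form sent over the transport layer

response = {
    "jsonrpc": "2.0",
    "id": json.loads(wire)["id"],
    "result": {"content": [{"type": "text", "text": "Balance: 1,024.00"}]},
}

# A notification has no "id" and expects no response.
notification = {"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}
```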

Life Cycle Management

MCP is a stateful protocol that requires lifecycle management. The purpose of lifecycle management is to negotiate the capabilities (i.e. functionalities) that both client and server support.

Primitives

Primitives define what clients and servers can offer each other. These primitives specify the types of contextual information that can be shared with AI applications and the range of actions that can be performed. MCP defines three core primitives that servers can expose:

  • Tools: Executable functions that AI applications can invoke to perform actions (e.g., API calls, database queries)
  • Resources: Data sources that provide contextual information to AI applications (e.g., file contents, API responses, database records)
  • Prompts: Reusable templates that help structure interactions with language models (e.g., system prompts, few-shot examples)
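To make the three primitives concrete, here is a rough sketch of how a server might declare one of each (the names, URIs, and schemas are invented for illustration):

```python
# Tools: executable functions the AI application can invoke
tools = [{
    "name": "query_database",
    "description": "Run a read-only query against the finance database",
    "inputSchema": {"type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"]},
}]

# Resources: contextual data identified by a URI
resources = [{"uri": "file:///reports/q3-summary.txt",
              "name": "Q3 financial summary"}]

# Prompts: reusable templates for structuring model interactions
prompts = [{"name": "summarize_report",
            "description": "Summarize a financial report for executives"}]
```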

 

Notifications

The protocol supports real-time notifications to enable dynamic updates between servers and clients. For example, when a server’s available tools change – such as when new functionalities are added or existing functionality is updated – the server can send tool update notifications to all its connected clients about these changes.

 

Security in Data Access

When AI applications communicate with multiple enterprise data sources through MCP and fetch real-time sensitive data, such as customer information or financial data, to serve users, data security becomes an absolutely critical factor to address.

MCP enables secure access through the mechanisms described below.

Authentication and Authorization

MCP implements server-side authentication where each MCP server validates who is making the request. The enterprise system controls access through:

  • User-specific credentials – Each user connecting through MCP has their own authentication tokens
  • Role-based access control (RBAC) – Users only access data that their role permits
  • Session management – Time-limited sessions that expire automatically

Data Access Controls

MCP server acts as a security gateway that enforces the same access policies as direct system access:

    • Users can only query data that they are authorized to access
    • The server validates every request against permission rules
    • Sensitive information can be masked or filtered based on user privileges
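To illustrate the masking point, a hypothetical helper inside an MCP server might filter a sensitive field based on the caller's role. The role names and masking rule here are made up for the example:

```python
def mask_account_number(account_number: str, role: str) -> str:
    """Show the full value only to privileged roles; mask all but the last 4 digits otherwise."""
    if role in {"admin", "auditor"}:
        return account_number
    return "*" * (len(account_number) - 4) + account_number[-4:]

# A non-privileged role sees only the tail of the account number.
print(mask_account_number("9876543210", "support"))
```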

Secure Communication

      • Encrypted connections – All data transmissions use TLS/HTTPS encryption
      • No data storage in AI – AI systems do not store the financial data they access; they only process it during the conversation session

Audit and Monitoring

MCP implementations in enterprise ecosystem should include:

      • Complete audit logs – Every data access request is logged with user, timestamp and data accessed
      • Anomaly detection – Engage mechanisms that monitor unusual access patterns and trigger alerts
      • Compliance tracking – All interactions meet regulatory requirements like GDPR, PCI-DSS

Architecture Isolation

Enterprises typically deploy MCP using:

      • Private network deployment – MCP servers stay within the enterprise secure firewall boundary
      • API gateway integration – Requests go through existing security infrastructure
      • No direct database access – MCP connects and access data through secure APIs, not direct access to database

The main idea is that MCP does not bypass existing security. It works within the same security boundaries as other enterprise applications, just exposing a smarter interface.

 

MCP Implementation & Demonstration

In this section, I will demonstrate a simple use case where MCP client (Claude Desktop) interacts with “Finance Manager” MCP server that can fetch financial information from the database.

Financial data is maintained in Postgres database tables. The MCP client (the Claude Desktop app) will request information about a customer account; the MCP host will discover the appropriate capability based on the user prompt and invoke the respective MCP tool function, which fetches data from the database table.

To see the MCP client-server interaction in action, three parts must be configured:

      • Backend Database
      • MCP server implementation
      • MCP server registration in MCP Host

Backend Database

The Postgres table “accounts” maintains account data with the information below, and the “transactions” table maintains the transactions performed on the accounts.

Accounts Table

Transactions Table

MCP server implementation

Mcp Server Implementation

The FastMCP class implements the MCP server components; creating an instance of it initializes the server and exposes those components for building enterprise MCP server capabilities.

The “@mcp.tool()” decorator defines a capability, and the decorated function is recognized as an MCP capability. These functions are exposed to AI applications and are invoked from the MCP host to perform designated actions.

To invoke MCP capabilities from a client, the MCP server must be up and running. In this example, two functions are defined as MCP tool capabilities:

      • get_account_details – accepts an account number as an input parameter, queries the “accounts” table, and returns the account information
      • add_transaction – accepts an account number and a transaction amount as parameters and makes an entry in the “transactions” table
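Since the server code is shown as a screenshot, here is a rough, hypothetical Python sketch of those two tools. The FastMCP wiring appears only in comments, and the Postgres queries are replaced with an in-memory store so the snippet is self-contained; a real implementation would query the “accounts” and “transactions” tables instead:

```python
# Hypothetical sketch of the finance-manager tools (not the article's exact code).
# The real server queries Postgres; an in-memory dict stands in for it here.

ACCOUNTS = {"1001": {"owner": "Alice", "balance": 2500.0}}
TRANSACTIONS = []

def get_account_details(account_number: str) -> dict:
    """Look up an account and return its details (or an error marker)."""
    account = ACCOUNTS.get(account_number)
    if account is None:
        return {"error": f"account {account_number} not found"}
    return {"account_number": account_number, **account}

def add_transaction(account_number: str, amount: float) -> dict:
    """Record a transaction and apply it to the account balance."""
    TRANSACTIONS.append({"account": account_number, "amount": amount})
    ACCOUNTS[account_number]["balance"] += amount
    return {"status": "ok", "transaction_count": len(TRANSACTIONS)}

# With the Python MCP SDK, these would be registered and served roughly as:
#   from mcp.server.fastmcp import FastMCP
#   mcp = FastMCP("finance-manager")
#   mcp.tool()(get_account_details)
#   mcp.tool()(add_transaction)
#   mcp.run()

print(get_account_details("1001"))
```

The tool functions are plain Python; the `@mcp.tool()` decorator is what turns them into capabilities that a host can discover and invoke.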

 

MCP Server Registration in MCP Host

For AI applications to invoke MCP server capabilities, the MCP server must be registered with the MCP host on the client side. For this demonstration, I am using Claude Desktop as the MCP client from which I interact with the MCP server.

First, the MCP server is registered with the MCP host in Claude Desktop as below,

Claude Desktop -> Settings -> Developer -> Local MCP Servers -> Click “Edit Config”

Developer Settings

Open the “claude_desktop_config” JSON file in Notepad and add the configuration as below. The configuration defines the path where the MCP server implementation is located and the command the MCP host should run to start it. Save the file and close it.
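A minimal sketch of such a configuration follows. The server name matches this demo, but the command and path are hypothetical and depend on where your server script lives:

```json
{
  "mcpServers": {
    "finance-manager": {
      "command": "python",
      "args": ["C:\\mcp-servers\\finance_manager\\server.py"]
    }
  }
}
```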

Register Mcp Server

Restart the “Claude Desktop” application and go to Settings -> Developer -> Local MCP Servers. The newly added MCP server (finance-manager) will be in a running state as below,

Mcp Server Running

Go to the chat window in Claude Desktop. Issue a prompt to fetch the details of an account in the “accounts” table and review the response,

 

Claude Mcp Invocation

User Prompt: The user issues a prompt to fetch the details of an account.

MCP Discovery & Invoke: The client (Claude Desktop) processes the prompt, interacts with the MCP host, automatically discovers the relevant capability – the get_account_details function in this case – without the function name being mentioned explicitly, and invokes the function with the necessary parameters.

Response: The MCP server processes the request, fetches the account details from the table, and returns them to the client. The client formats the response and presents it to the user.

Here is another example, adding a transaction for an account to the backend table,

Mcp Server Add Transaction

Here, the “add_transaction” capability has been invoked to add a transaction record to the “transactions” table. In the chat window, you can see which MCP function is being invoked, along with the request and response bodies.

The record has been successfully added into the table,

Add Transaction Postgres Table

Impressive, isn’t it?!

There is a wide range of use cases for implementing MCP servers and integrating them with enterprise AI systems, adding an intelligent layer for interacting with enterprise data sources.

At this point, you may wonder (as I did) how MCP (Model Context Protocol) differs from RAG (Retrieval Augmented Generation). Based on my research, I curated a comparison matrix of features that should add more clarity:

 

| Aspect            | RAG (Retrieval Augmented Generation)              | MCP (Model Context Protocol)                      |
|-------------------|---------------------------------------------------|---------------------------------------------------|
| Purpose           | Retrieve unstructured docs to improve LLM responses | AI agents access structured data/tools dynamically |
| Data Type         | Unstructured text (PDFs, docs, web pages)         | Structured data (JSON, APIs, databases)           |
| Workflow          | Retrieve → Embed → Prompt injection → Generate    | AI requests context → Protocol delivers → AI reasons |
| Context Delivery  | Text chunks stuffed into prompt                   | Structured objects via standardized interface     |
| Token Usage       | High (full text in context)                       | Low (references/structured data)                  |
| Action Capability | Read-only (information retrieval)                 | Read + Write (tools, APIs, actions)               |
| Discovery         | Pre-indexed vector search                         | Runtime tool/capability discovery                 |
| Latency           | Retrieval + embedding time                        | Real-time protocol calls                          |
| Use Case          | Q&A over documents, chatbots                      | AI agents, tool calling, enterprise systems       |
| Maturity          | Widely adopted, mature ecosystem                  | Emerging standard (2025+)                         |
| Complexity        | Vector DB + embedding pipeline                    | Protocol implementation + AI agent                |

 

Conclusion

MCP servers extend the capabilities of AI assistants by allowing them to interact with external services and data sources using natural language commands. The Model Context Protocol (MCP) has a wide range of use cases, and several enterprises have already implemented and hosted MCP servers for AI clients to integrate and interact with.

Some of the prominent MCP servers include:

GitHub MCP Server: Allows AI to manage repositories, issues, pull requests, and monitor CI/CD workflows directly within the development environment.

Azure DevOps MCP Server: Integrates AI with Azure DevOps services for managing pipelines, work items and repositories, ideal for teams within the Microsoft ecosystem.

PostgreSQL MCP Server: Bridges the gap between AI and databases, allowing natural language queries, schema exploration and data analysis without manual SQL scripting.

Slack MCP Server: Turns Slack into an AI-powered collaboration hub, enabling message posting and channel management.

]]>
https://blogs.perficient.com/2026/01/08/model-context-protocol-mcp-simplified/feed/ 0 389415
Bruno : The Developer-Friendly Alternative to Postman https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/ https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/#respond Fri, 02 Jan 2026 08:25:16 +0000 https://blogs.perficient.com/?p=389232

If you’re knee-deep in building apps, you already know APIs are the backbone of everything. Testing them? That’s where the real magic happens. For years, we’ve relied on tools like Postman and Insomnia to send requests, debug issues, and keep things running smoothly. But lately, there’s a buzz about something new: Bruno. It’s popping up everywhere, and developers are starting to make the switch. Why? Let’s dive in.

What Exactly is Bruno?

Picture this: an open-source, high-performance API client that puts your privacy first. Bruno isn’t some bloated app that shoves your stuff into the cloud. Nope: it keeps everything right on your local machine. Your API collections, requests, all of it? Safe and sound where you control it, no cloud drama required.

Bruno is built for developers who want:

  • Simplicity without compromise
  • High performance without unnecessary extras
  • Complete freedom with open-source flexibility

It’s like the minimalist toolbox you’ve been waiting for.

Why is Bruno Suddenly Everywhere?

Bruno solves the pain points that frustrate us with other API tools:

  • Privacy First: No forced cloud uploads, your collections stay local. No hidden syncing; your data stays completely under your control.
  • Fast and Lightweight: Loads quickly and handles requests without lag. Perfect for quick tests on the go.
  • Open-Source Freedom: No fees, no lock-in. Collections are Git-friendly and saved as plain text for easy version control.
  • No Extra Bloat: Focused on what matters, API testing without unnecessary features.

Bottom line: Bruno fits the way we work today, collaboratively, securely, and efficiently. It’s not trying to do everything; it’s just good at API testing.

Key Features

Bruno keeps it real with features that matter. Here are the highlights:

  1. Totally Open-Source

  • No sneaky costs or paywalls.
  • Peek under the hood anytime—the code’s all there.
  • A community of developers contributes on GitHub, making it better every day. Wanna join? Hit up their repo and contribute.
  2. Privacy from the Ground Up

  • Everything lives locally.
  • No accounts, no cloud pushes—your requests don’t leave your laptop.
  • Ideal if you’re handling sensitive APIs and don’t want Big Tool Company snooping.
  • Bonus: Those plain-text files integrate well with Git, so team handoffs are seamless.
  3. Light as a Feather, Fast as Lightning

  • Clean UI, no extra bells and whistles slowing you down.
  • Starts up quickly and zips through responses.
  • Great for solo endpoint tweaks or managing large workflows without your machine slowing down.

Getting Bruno Up and Running

Installing Bruno is simple. It works on Windows, macOS, and Linux. Just choose your platform, and you’re good to go.

Quick Install Guide

Windows

  1. Head to Bruno’s GitHub Releases page.
  2. Grab the latest .exe file.
  3. Run it and follow the prompts.
  4. Boom—find it in your Start Menu.

macOS

  1. Download the .dmg from Releases.
  2. Drag it to Applications.
  3. Fire it up and get testing.

Linux

  1. Snag the .AppImage or .deb from Releases.
  2. For AppImage: chmod +x Bruno.AppImage then ./Bruno.AppImage.
  3. For .deb: sudo dpkg -i bruno.deb and sudo apt-get install -f.

GUI or CLI? Your Call

  • GUI: Feels like Postman but cleaner. Visual, easy-to-build requests on the fly.
  • CLI: For the terminal lovers. Automate tests, integrate with CI/CD, or run collections: bru run collection.bru --env dev.

Build Your First Collection in Minutes

Bruno makes organizing APIs feel effortless. Here’s a no-sweat walkthrough.

Step 1: Fire It Up

Launch Bruno. You’ll see a simple welcome screen prompting you to create a new collection.

Step 2: New Collection Time

  1. Hit “New Collection.”
  2. Name it (say, “My API Playground”).
  3. Pick a folder—it’s all plain text, so Git loves it.

Step 3: Add a Request

  1. Inside the collection, click “New Request.”
  2. Pick your method (GET, POST, etc.).
  3. Enter the URL: https://jsonplaceholder.typicode.com/posts.

Step 4: Headers and Body Magic

  • Add the header: Content-Type: application/json.
  • For POSTs, add a body like:

JSON

{
"title": "Bruno Blog",
"body": "Testing Bruno API Client",
"userId": 1
}

Step 5: Hit Send

Click it, and watch the response pop: status, timing, pretty JSON—all right there.

Step 6: Save and Sort

Save the request, create folders for environments or APIs, and use variables to switch setups.
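Because collections are plain text, the saved request is just a .bru file on disk. It looks roughly like the sketch below (the format shown here may differ slightly between Bruno versions):

```
meta {
  name: Get Posts
  type: http
  seq: 1
}

get {
  url: https://jsonplaceholder.typicode.com/posts
  body: none
  auth: none
}
```

This is why Git diffs on Bruno collections stay readable: every change to a request is a change to a small text file.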

Bruno vs. Postman: Head-to-Head

Postman’s the OG, but Bruno’s the scrappy challenger winning hearts. Let’s compare.

  1. Speed

  • Bruno: Lean and mean—quick loads, low resource usage.
  • Postman: Packed with features, but it can feel sluggish on big projects.

  Edge: Bruno

  2. Privacy

  • Bruno: Local only, no cloud creep.
  • Postman: Syncs to their servers—handy for teams, sketchy for secrets.

  Edge: Bruno

  3. Price Tag

  • Bruno: Free forever, open-source vibes.
  • Postman: Free basics, but teams and extras? Pay up.

  Edge: Bruno

 

| Feature     | Bruno            | Postman        |
|-------------|------------------|----------------|
| Open Source | ✅ Yes           | ❌ No          |
| Cloud Sync  | ❌ No            | ✅ Yes         |
| Performance | ✅ Lightweight   | ❌ Heavy       |
| Privacy     | ✅ Local Storage | ❌ Cloud-Based |
| Cost        | ✅ Free          | ❌ Paid Plans  |

Level up With Advanced Tricks

Environmental Variables

Swap envs easy-peasy:

  • Make files for dev/staging/prod.
  • Use {{baseUrl}} in requests.
  • Example:
{
"baseUrl": "https://api.dev.example.com",
"token": "your-dev-token"
}

 

Scripting Smarts

Add pre/post scripts for:

  • Dynamic auth: request.headers["Authorization"] = "Bearer " + env.token;
  • Response checks or automations.

Community & Contribution

Bruno is community-driven: the project is developed in the open on GitHub, and contributions in the form of issues, pull requests, and docs are welcome.

Conclusion

Bruno isn’t just another API testing tool; it’s designed for developers who want simplicity and control. With local-first privacy, fast performance, open-source flexibility, and built-in Git support, Bruno delivers everything you need without unnecessary complexity.
If you’re tired of heavy, cloud-based clients, it’s time to switch. Download Bruno today and experience the difference: Download here.

 

]]>
https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/feed/ 0 389232
GitLab to GitHub Migration https://blogs.perficient.com/2025/12/29/gitlab-to-github-migration/ https://blogs.perficient.com/2025/12/29/gitlab-to-github-migration/#respond Mon, 29 Dec 2025 07:59:05 +0000 https://blogs.perficient.com/?p=389333

1. Why Modern Teams Choose GitHub

Migrating from GitLab to GitHub represents a strategic shift for many engineering teams. Organizations often move to leverage GitHub’s massive open-source community and superior third-party tool integrations. Moreover, GitHub Actions provides a powerful, modern ecosystem for automating complex developer workflows. Ultimately, this transition simplifies standardization across multiple teams while improving overall project visibility.

2. Prepare Your Migration Strategy

A successful transition requires more than just moving code. You must account for users, CI/CD pipelines, secrets, and governance to avoid data loss. Consequently, a comprehensive plan should cover the following key phases:

  • Repository and Metadata Transfer

  • User Access Mapping

  • CI/CD Pipeline Conversion

  • Security and Secret Management

  • Validation and Final Cutover

3. Execute the Repository Transfer

The first step involves migrating your source code, including branches, tags, and full commit history.

  • Choose the Right Migration Tool

For straightforward transfers, the GitHub Importer works well. However, if you manage a large organization, the GitHub Enterprise Importer offers better scale. For maximum control, technical teams often prefer the Git CLI.

Command Line Instructions:

git clone --mirror gitlab_repo_url
cd repo.git
git push --mirror github_repo_url

Manage Large Files and History:

During this phase, audit your repository for large binary files. Specifically, you should use Git LFS (Large File Storage) for any assets that exceed GitHub’s standard limits.

4. Map Users and Recreate Secrets

GitLab and GitHub use distinct identity systems, so you cannot automatically migrate user accounts. Instead, you must map GitLab user emails to GitHub accounts and manually invite them to your new organization.

Secure Your Variables and Secrets:

For security reasons, GitLab prevents the export of secrets. Therefore, you must recreate them in GitHub using the following hierarchy:

  • Repository Secrets: Use these for project-level variables.

  • Organization Secrets: Use these for shared variables across multiple repos.

  • Environment Secrets: Use these to protect variables in specific deployment stages.

5. Migrating Variables and Secrets

Securing your environment requires a clear strategy for moving CI/CD variables and secrets. Specifically, GitLab project variables should move to GitHub Repository Secrets, while group variables should be placed in Organization Secrets. Notably, secrets must be recreated manually or via the GitHub API because they cannot be exported from GitLab for security reasons.

6. Convert GitLab CI to GitHub Actions

Translating your CI/CD pipelines often represents the most challenging part of the migration. While GitLab uses a single .gitlab-ci.yml file, GitHub Actions utilizes separate workflow files in the .github/workflows/ directory.

Syntax and Workflow Changes:

When converting, map your GitLab “stages” into GitHub “jobs”. Moreover, replace custom GitLab scripts with pre-built actions from the GitHub Marketplace to save time. Finally, ensure your new GitHub runners have the same permissions as your old GitLab runners.
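As a rough illustration of that mapping (the script name and secret are hypothetical), a GitLab "build" stage typically becomes a workflow file along these lines:

```yaml
# .github/workflows/build.yml (sketch; adjust to your project)
name: build
on:
  push:
    branches: [main]

jobs:
  build:                                   # was the "build" stage in .gitlab-ci.yml
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # replaces GitLab's implicit checkout
      - name: Build
        run: ./build.sh                    # same script line as before
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}   # secret recreated in GitHub
```

Note that the secret referenced here must exist in GitHub first, since GitLab variables cannot be exported.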

7. Finalize the Metadata and Cutover

Metadata like Issues, Pull Requests (Merge Requests in GitLab), and Wikis require special handling because Git itself does not track them.

The Pre-Cutover Checklist:

Before the official switch, verify the following:

  1. Freeze all GitLab repositories to stop new pushes.

  2. Perform a final sync of code and metadata.

  3. Update webhooks for tools like Slack, Jira, or Jenkins.

  4. Verify that all CI/CD pipelines run successfully.

8. Post-Migration Best Practices

After completing the cutover, archive your old GitLab repositories to prevent accidental updates. Furthermore, enable GitHub’s built-in security features like Dependabot and Secret Scanning to protect your new environment. Finally, provide training sessions to help your team master the new GitHub-centric workflow.


9. Final Cutover and Post-Migration Best Practices

Ultimately, once all repositories are validated and secrets are verified, you can execute the final cutover. Specifically, you should freeze your GitLab repositories and perform a final sync before switching your DNS and webhooks. Finally, once the move is complete, remember to archive your old GitLab repositories and enable advanced security features like Dependabot and secret scanning.

10. Summary and Final Thoughts

In conclusion, a GitLab to GitHub migration is a significant but rewarding effort. By following a structured plan that includes proper validation and team training, organizations can achieve a smooth transition. Therefore, with the right tooling and preparation, you can successfully improve developer productivity and cross-team collaboration.

]]>
https://blogs.perficient.com/2025/12/29/gitlab-to-github-migration/feed/ 0 389333
Unifying Hybrid and Multi-Cloud Environments with Azure Arc https://blogs.perficient.com/2025/12/22/unifying-hybrid-and-multi-cloud-environments-with-azure-arc/ https://blogs.perficient.com/2025/12/22/unifying-hybrid-and-multi-cloud-environments-with-azure-arc/#respond Mon, 22 Dec 2025 08:06:05 +0000 https://blogs.perficient.com/?p=389202

1. Introduction to Modern Cloud Architecture

In today’s world, architects generally prefer to keep their compute resources—such as virtual machines and Kubernetes servers—spread across multiple clouds and on-premises environments. Specifically, they do this to achieve the best possible resilience through high-availability and disaster recovery. Moreover, this approach allows for better cost efficiency and higher security.

2. The Challenge of Management Complexity

However, this distributed strategy brings additional challenges. Specifically, it increases the complexity of maintaining and managing resources from different consoles, such as Azure, AWS, and Google portals. Consequently, even for basic operations like restarts or updates, administrators often struggle with multiple disparate portals. As a result, basic administration tasks become too complex and cumbersome.

3. How Azure Arc Provides a Solution

Azure Arc solves this problem by providing a “single pane of glass” to manage and monitor servers regardless of their location. In addition, it simplifies governance by delivering a consistent management platform for both multi-cloud and on-premises resources. Specifically, it provides a centralized way to project existing non-Azure resources directly into the Azure Resource Manager (ARM).

4. Understanding Key Capabilities

Currently, Azure Arc allows you to manage several resource types outside of Azure. For instance, it supports servers, Kubernetes clusters, and databases. Furthermore, it offers several specific functionalities:

  • Azure Arc-enabled Servers: Connects physical or virtual Windows and Linux servers to Azure for centralized visibility.

  • Azure Arc-enabled Kubernetes: Additionally, you can onboard any CNCF-conformant Kubernetes cluster to enable GitOps-based management.

  • Azure Arc-enabled SQL Server: This brings external SQL Server instances under Azure governance for advanced security.

5. Architectural Implementation Details

The Azure Arc architecture revolves primarily around the Azure Resource Manager. Specifically, when a resource is onboarded, it receives a unique resource ID and becomes part of Azure’s management plane. Consequently, each resource installs a local agent that communicates with Azure to receive policies and upload logs.

6. The Role of the Connected Machine Agent

The agent package contains several logical components bundled together. For instance, the Hybrid Instance Metadata service (HIMDS) manages the connection and the machine’s Azure identity. Moreover, the guest configuration agent assesses whether the machine complies with required policies. In addition, the Extension agent manages VM extensions, including their installation and upgrades.

7. Onboarding and Deployment Methods

Onboarding machines can be accomplished using different methods depending on your scale. For example, you might use interactive scripts for small deployments or service principals for large-scale automation. Specifically, the following options are available:

  • Interactive Deployment: Manually install the agent on a few machines.

  • At-Scale Deployment: Alternatively, connect machines using a service principal.

  • Automated Tooling: Furthermore, you can utilize Group Policy for Windows machines.

8. Strategic Benefits for Governance

Ultimately, Azure Arc provides numerous strategic benefits for modern enterprises. Specifically, organizations can leverage the following:

  • Governance and Compliance: Apply Azure Policy to ensure consistent configurations across all environments.

  • Enhanced Security: Moreover, use Defender for Cloud to detect threats and integrate vulnerability assessments.

  • DevOps Efficiency: Enable GitOps-based deployments for Kubernetes clusters.

9. Important Limitations to Consider

However, there are a few limitations to keep in mind before starting your deployment. First, continuous internet connectivity is required for full functionality. Secondly, some features may not be available for all operating systems. Finally, there are cost implications based on the data services and monitoring tools used.

10. Conclusion and Summary

In conclusion, Azure Arc empowers organizations to standardize and simplify operations across heterogeneous environments. Whether you are managing legacy infrastructure or edge devices, it brings everything under one governance model. Therefore, if you are looking to improve control and agility, Azure Arc is a tool worth exploring.

]]>
https://blogs.perficient.com/2025/12/22/unifying-hybrid-and-multi-cloud-environments-with-azure-arc/feed/ 0 389202
How to Secure Applications During Modernization on AWS https://blogs.perficient.com/2025/12/19/how-to-secure-applications-during-modernization-on-aws/ https://blogs.perficient.com/2025/12/19/how-to-secure-applications-during-modernization-on-aws/#respond Fri, 19 Dec 2025 06:40:17 +0000 https://blogs.perficient.com/?p=389050

Why Do We Need to Secure Our Applications?  

Cloud environments are very dynamic and interconnected. A single misconfiguration or exposed API key can lead to:  

  • Data breaches 
  • Compliance violations 
  • Costly downtime 

Attackers often target application-level weaknesses, not just infrastructure gaps. If an application handles sensitive data, financial transactions, or user credentials, security is critical.

Common Mistakes Made When Building Applications

  • Hardcoding API keys and credentials 
  • Ignoring dependency vulnerabilities 
  • Skipping encryption/decryption for sensitive data 

Essential Security Best Practices

1. Identity and Access Management (IAM)

  • Create dedicated IAM roles for your Lambda functions, EC2 instances, or ECS tasks instead of hardcoding access keys in your application. 
  • We must regularly review who has permissions using the IAM Access Analyzer. 
  • We must avoid using the root account for day-to-day operations, or for any operations as a developer. 
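To make the least-privilege idea concrete, a policy attached to such a role might allow only a single Secrets Manager read. The account ID, region, and secret name below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadAppSecretOnly",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-app/*"
    }
  ]
}
```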

Role Creation

 

Role Creation1

2. Don’t Store/Share Secrets in Your Code

Your appsettings.json is not the right place for secrets such as API keys or database passwords. 

  • We must use AWS Secrets Manager or Parameter Store to keep secrets safe. 
  • Fetch keys at runtime using the AWS SDK for .NET or the AWSSDK.Extensions.NETCore.Setup configuration provider. 
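The article’s examples are .NET, but the pattern is the same in any SDK. Here is a small Python sketch with the Secrets Manager client injected so the parsing logic can run without AWS; the secret name is hypothetical:

```python
import json

def get_database_credentials(client, secret_id: str) -> dict:
    """Fetch a JSON secret via an injected Secrets Manager-style client and parse it."""
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# In production the client would come from boto3:
#   import boto3
#   client = boto3.client("secretsmanager", region_name="us-east-1")

# A stub with the same interface lets us exercise the parsing logic locally:
class StubSecretsClient:
    def get_secret_value(self, SecretId):
        return {"SecretString": json.dumps({"username": "app", "password": "s3cret"})}

creds = get_database_credentials(StubSecretsClient(), "my-app/db")
print(creds["username"])
```

Injecting the client also makes the function easy to unit test without network access.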

Secretmanager Creation2

Secretmanager Reading

3. Always Encrypt Data 

Encryption is one of the most important practices for protecting sensitive data:

  • Enable HTTPS by default for all your endpoints.  
  • Use AWS Certificate Manager (ACM) to issue and manage SSL/TLS certificates. 
  • In your application, make sure that all traffic is redirected to HTTPS by adding app.UseHttpsRedirection(); 
  • AWS KMS to encrypt your S3 buckets, RDS databases, and EBS volumes.
  • If you’re using SQL Server on RDS, enable Transparent Data Encryption (TDE). 

 Encrypt & Decrypt API Key with KMS 

Encryption Steps

Encryption Decrypt Code
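The KMS round trip follows a simple call pattern. The sketch below is Python rather than .NET, with an injected client so it runs locally; the stub is NOT real encryption (byte reversal only mimics the KMS interface for illustration):

```python
import base64

def encrypt_api_key(kms, key_id: str, plaintext: str) -> str:
    """Encrypt with a KMS-style client and base64-encode the ciphertext for storage."""
    result = kms.encrypt(KeyId=key_id, Plaintext=plaintext.encode())
    return base64.b64encode(result["CiphertextBlob"]).decode()

def decrypt_api_key(kms, stored: str) -> str:
    """Base64-decode the stored value and decrypt it with the KMS-style client."""
    result = kms.decrypt(CiphertextBlob=base64.b64decode(stored))
    return result["Plaintext"].decode()

# Real usage: kms = boto3.client("kms"), key_id = your key ARN or alias.
# Stub with the same interface (a placeholder, not cryptography):
class StubKms:
    def encrypt(self, KeyId, Plaintext):
        return {"CiphertextBlob": Plaintext[::-1]}
    def decrypt(self, CiphertextBlob):
        return {"Plaintext": CiphertextBlob[::-1]}

stored = encrypt_api_key(StubKms(), "alias/api-keys", "my-api-key")
print(decrypt_api_key(StubKms(), stored))
```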

4. Build a Secure Network Foundation

  • Must use VPCs with private subnets for backend services. 
  • Control the traffic with Security Groups and Network ACLs. 
  • Use VPC Endpoints to keep traffic within AWS’s private network  
  • Use AWS WAF to protect your APIs, and enable AWS Shield to guard against DDoS attacks. 

Security Group

Vpc Creation

5. Keep Your Code and Dependencies Clean

Even the best infrastructure can’t save a vulnerable codebase. 

  • Update your .NET SDK and NuGet packages regularly. 
  • Use Amazon Inspector for runtime and AWS environment security, and tools like Dependabot for development-time dependency security to find vulnerabilities early. 
  • Add code review analysis tools (like SonarQube) in your CI/CD pipeline. 

AWS Inspector

6. Log Everything and Watch

  • Enable Amazon AWS CloudWatch for all central logging and use AWS X-Ray to trace requests through the application. 
  • Turn on CloudTrail to track every API call across your account. 
  • Enable GuardDuty for continuous threat detection. 

 

]]>
https://blogs.perficient.com/2025/12/19/how-to-secure-applications-during-modernization-on-aws/feed/ 0 389050
Deploy Microservices On AKS using GitHub Actions https://blogs.perficient.com/2025/12/17/deploy-microservices-on-aks-using-github-actions/ https://blogs.perficient.com/2025/12/17/deploy-microservices-on-aks-using-github-actions/#respond Thu, 18 Dec 2025 05:30:05 +0000 https://blogs.perficient.com/?p=389089

Deploying microservices in a cloud-native environment requires an efficient container orchestration platform and an automated CI/CD pipeline. Azure Kubernetes Service (AKS) is a managed Kubernetes offering from Azure, and GitHub Actions makes it easy to automate your CI/CD processes directly from the source code repository.

Image (1)

Why Use GitHub Actions with AKS

Using GitHub Actions for AKS deployments provides:

  • Automated and consistent deployments
  • Faster release cycles
  • Reduced manual intervention
  • Easy Integration with GitHub repositories
  • Better visibility into build and deployment status

Architecture Overview

The deployment workflow follows a CI/CD approach:

  • Microservices packaged as Docker images
  • Images pushed to ACR
  • AKS pulls the images from ACR
  • GitHub Actions automates:
      • Building and pushing Docker images
      • Deploying manifests to AKS

Image

Prerequisites

Before proceeding with the implementation, ensure the following prerequisites are in place:

  • Azure Subscriptions
  • Azure CLI Installed and authenticated (AZ)
  • An existing Azure Kubernetes Service (AKS) cluster
  • Kubectl is installed and configured for your cluster
  • Azure Container Registry (ACR) associated with the AKS cluster
  • GitHub repository with microservices code

Repository Structure

Each microservice is maintained in a separate repository with the following structure:  .github/workflows/name.yml

CI/CD Pipeline Stages Overview

  • Source Code Checkout
  • Build Docker Images
  • Push images to ACR
  • Authenticate to AKS
  • Deploy Microservices using kubectl

Configure GitHub Secrets

Go to GitHub – repository – Settings – Secrets and Variables – Actions  

Add the following secrets:

  • ACR_LOGIN_SERVER
  • ACR_USERNAME
  • ACR_PASSWORD
  • KUBECONFIG

Stage 1: Source Code Checkout

The Pipeline starts by pulling the latest code from the GitHub repository

Stage 2: Build Docker Images

For each microservice:

  • A Docker image is built
  • A unique tag (commit ID and version) is assigned

Images are prepared for deployment

Stage 3: Push Images to Azure Container Registry

Once the images are built:

  • GitHub Actions authenticates to ACR
  • Images are pushed securely to the registry
  • After the initial setup, AKS subsequently pulls the images directly from ACR

Stage 4: Authenticate to AKS

GitHub Actions connects to the AKS cluster using kubeconfig

Stage 5: Deploy Microservices to AKS

In this stage:

  • Kubernetes manifests are applied
  • Services are exposed via the Load Balancer
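Putting the five stages together, a single workflow sketch might look like the following. The image name “orders” and file layout are hypothetical, the secret names match those configured earlier, and the Azure action versions may differ in your setup:

```yaml
# .github/workflows/deploy.yml (sketch)
name: deploy-microservice
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4                    # Stage 1: checkout

      - uses: azure/docker-login@v2                  # ACR authentication
        with:
          login-server: ${{ secrets.ACR_LOGIN_SERVER }}
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWORD }}

      - name: Build and push image                   # Stages 2-3
        run: |
          docker build -t ${{ secrets.ACR_LOGIN_SERVER }}/orders:${{ github.sha }} .
          docker push ${{ secrets.ACR_LOGIN_SERVER }}/orders:${{ github.sha }}

      - uses: azure/k8s-set-context@v4               # Stage 4: AKS auth via kubeconfig
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBECONFIG }}

      - name: Deploy manifests                       # Stage 5
        run: kubectl apply -f k8s/
```

Tagging the image with `github.sha` gives each commit a unique, traceable artifact, as recommended in the best practices below.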

Deployment Validation

After deployment:

  • Pods are verified to be in a running state
  • Check the service for external access

Best Practices

To make the pipeline production Ready:

  • Use commit-based image tagging
  • Separate environments (dev, stage, prod)
  • Use namespace in AKS
  • Store secrets securely using GitHub Secrets

Common Challenges and Solutions

  • Image pull failures: Verify ACR permission
  • Pipeline authentication errors: Validate Azure credentials
  • Pod crashes: Review container logs and resource limits

Benefits of CI/CD with AKS and GitHub Actions

  • Faster deployments
  • Improved reliability
  • Scalable microservices architecture
  • Better developer productivity
  • Reduced operational overhead

Conclusion

Deploying microservices on AKS using GitHub Actions provides a robust, scalable, and automated CI/CD solution. By integrating container builds, registry management, and Kubernetes deployments into a single pipeline, teams can deliver applications faster and more reliably.

CI/CD is not just about automation – it’s about confidence, consistency, and continuous improvement.

 

Why Inter-Plan Collaboration Is the Competitive Edge for Health Insurers https://blogs.perficient.com/2025/12/05/why-inter-plan-collaboration-is-the-competitive-edge-for-health-insurers/ https://blogs.perficient.com/2025/12/05/why-inter-plan-collaboration-is-the-competitive-edge-for-health-insurers/#respond Fri, 05 Dec 2025 13:00:12 +0000 https://blogs.perficient.com/?p=387904

A health insurance model built for yesterday won’t meet the demands of today’s consumers. Expectations for seamless, intuitive experiences are accelerating, while fragmented systems continue to drive up costs, create blind spots, and erode trust.

Addressing these challenges takes more than incremental fixes. The path forward requires breaking down silos and creating synergy across plans, while aligning technology, strategy, and teams to deliver human-centered experiences at scale. This is more than operational; it’s strategic. It’s how health insurers build resilience, move with speed and purpose, and stay ahead of evolving demands.

Reflecting on recent industry conversations, we’re proud to have sponsored LeadersIgnite and the 2025 Inter-Plan Solutions Forum. As Hari Madamalla shared:

“When insurers share insights, build solutions together, and scale what works, they can cut costs, streamline prior authorization and pricing, and deliver the experiences members expect.” – Hari Madamalla, Senior Vice President, Healthcare + Life Sciences

To dig deeper into these challenges, we spoke with healthcare leaders Hari Madamalla, senior vice president, and directors Pavan Madhira and Priyal Patel about how health insurers can create a competitive edge by leveraging digital innovation with inter-plan collaboration.

The Complexity Challenge Health Insurers Can’t Ignore

Health insurance faces strain from every angle: slow authorizations, confusing pricing, fragmented data, and widening care gaps. The reality is, manual fixes won’t solve these challenges. Plans need smarter systems that deliver clarity and speed at scale. AI and automation make it possible to turn data into insight, reduce fragmentation, and meet mandates without adding complexity.

“Healthcare has long struggled with inefficiencies and slow tech adoption—but the AI revolution is changing that. We’re at a pivotal moment, similar to the digital shift of the 1990s, where AI is poised to disrupt outdated processes and drive real transformation.” – Pavan Madhira, Director, Healthcare + Life Sciences

But healthcare organizations face unique constraints, including HIPAA, PHI, and PII regulations that limit the utility of plug-and-play AI solutions. To meet these challenges, we apply our PACE framework—Policies, Advocacy, Controls, and Enablement—to ensure AI is not only innovative but also rooted in trust. This approach ensures AI is deployed with purpose, aligned to business goals, and embedded with safeguards that protect consumers and organizations.

Still, technology alone isn’t enough. Staying relevant means designing human-centered experiences that reduce friction and build trust. Perficient’s award-winning Access to Care research study reveals that friction in the care journey directly impacts consumer loyalty and revenue.

More than 45% of consumers aged 18–64 have used digital-first care instead of their regular provider, and 92% of them believe the quality is equal to—or better.

That’s a signal healthcare leaders can’t afford to ignore. It tells us when experiences fall short, consumers go elsewhere, and they won’t always come back.

For health insurers, that shift creates issues. When members seek care outside your ecosystem, you risk losing visibility into care journeys, creating gaps in data and blind spots in member health management. The result? Higher costs, duplicative services, and missed opportunities for proactive coordination. Fragmented care journeys also undermine efforts to deliver a true 360-degree view of the member. The solution lies in intuitive digital transformation that turns complexity into clarity.

Explore More: Empathy, Resilience, Innovation, and Speed: The Blueprint for Intelligent Healthcare Transformation

Where Inter-Plan Collaboration Creates Real Momentum

When health plans work together, the payoff is significant. Collaboration moves the industry from silos to synergy, enabling human-centered experiences across networks that keep members engaged and revenue intact.

Building resilience is key to that success. Leaders need systems that anticipate member needs and remove barriers before they impact access to care. That means reducing friction in scheduling and follow-up, enabling seamless coordination across networks, and delivering digital experiences that feel as simple and intuitive as consumer platforms like Amazon or Uber. Resilience also means preparing for the unexpected and being able to pivot quickly.

When plans take this approach, the impact is clear:

  • Higher Quality Scores and Star Ratings: Shared strategies for closing gaps and improving provider data can help lift HEDIS scores and Star Ratings, unlocking higher reimbursement and bonus pools.
  • Faster Prior Authorizations: Coordinated rules and automation help reduce delays and meet new regulatory requirements like CMS Interoperability and Prior Authorization Final Rule (CMS-0057-F).
  • True Price Transparency: Consistent, easy-to-understand cost and quality information across plans helps consumers make confident choices and stay in-network.
  • Stronger Member Loyalty: Unified digital experiences across plans help improve satisfaction and engagement.
  • Lower Administrative Overhead: Cleaner member data means fewer errors, less duplication, and lower compliance risk.

“When plans work together, they can better serve their vulnerable populations, reduce disparities, and really drive to value based care. It’s about building trust, sharing responsibility, and innovating with empathy.” – Priyal Patel, Director, Healthcare + Life Sciences

Resilience and speed go hand in hand. Our experts help health insurers deliver both.

This approach supports the Quintuple Aim: better outcomes, lower costs, improved experiences, clinician well-being, and health equity. It also ensures that innovation is not just fast, but focused, ethical, and sustainable.

You May Also Enjoy: Access to Care is Evolving: What Consumer Insights and Behavior Models Reveal

Accelerating Impact With Digital Innovation and Inter-Plan Collaboration

Beyond these outcomes, collaboration paired with digital innovation unlocks even greater opportunities to build a smarter, more connected future of healthcare. It starts with aligning consumer expectations, digital infrastructure, and data governance to strategic business goals.

Here’s how plans can accelerate impact:

  • Real-Time Data Sharing and Interoperability: Shared learning ensures insights aren’t siloed. By pooling knowledge across plans, leaders can identify patterns, anticipate emerging trends, and act faster on what works. Real-time interoperability, like FHIR-enabled solutions, gives plans the visibility needed for accurate risk adjustment and timely quality reporting. AI enhances this by predicting gaps and surfacing actionable insights, helping plans act faster and reduce costs.
  • Managing Coding Intensity in the AI Era: As provider AI tools capture more diagnoses, insurers can see risk scores and costs rise, creating audit risk and financial exposure. This challenge requires proactive oversight. Collaboration helps by establishing shared standards and applying predictive analytics to detect anomalies early, turning a potential cost driver into a managed risk.
  • Prior Authorization Modernization: Prior authorization delays drive up costs and erode member experience. Aligning on streamlined processes and leveraging intelligent automation can help meet mandates like CMS-0057-F, while predicting approval likelihood, flagging exceptions early, and accelerating turnaround times.
  • Joint Innovation Pilots: Co-development of innovation means plans can shape technology together. This approach balances unique needs with shared goals, creating solutions that cut costs, accelerate time to value, and ensure compliance stays front and center.
  • Engaging Member Experience Frameworks: Scaling proven approaches across plans amplifies impact. When plans collaborate on digital experience standards and successful capabilities are replicated, members enjoy seamless interactions across networks. Building these experiences on solid foundations with purpose-driven AI is key to delivering stronger engagement and loyalty at scale.
  • Shared Governance and Policy Alignment: Joint governance establishes accountability, aligns incentives for value-based care, and reduces compliance risk while protecting revenue.

Success in Action: Empowering Healthcare Consumers and Their Care Ecosystems With Interoperable Data

Make Inter-Plan Collaboration Your Strategic Advantage

Ready to move from insight to impact? Our healthcare expertise equips leaders to modernize, personalize, and scale care. We drive resilient, AI-powered transformation to shape the experiences and engagement of healthcare consumers, streamline operations, and improve the cost, quality, and equity of care.

  • Business Transformation: Activate strategy for transformative outcomes and health experiences.
  • Modernization: Maximize technology to drive health innovation, efficiency, and interoperability.
  • Data + Analytics: Power enterprise agility and accelerate healthcare insights.
  • Consumer Experience: Connect, ease, and elevate impactful health journeys.

We have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S., and Modern Healthcare consistently ranks us as one of the largest healthcare consulting firms.

Monitoring and Logging in Sitecore AI https://blogs.perficient.com/2025/11/24/monitoring-and-logging-in-sitecore-ai/ https://blogs.perficient.com/2025/11/24/monitoring-and-logging-in-sitecore-ai/#respond Mon, 24 Nov 2025 21:04:34 +0000 https://blogs.perficient.com/?p=388586

Why Observability Matters More Than Ever

Moving from traditional Sitecore deployments to Sitecore AI means the infrastructure is abstracted away. That’s fantastic for agility, but it also changes how we troubleshoot. You can’t RDP onto a server and tail a file anymore; your lifeline is observability: clear signals from logs, metrics, and governed automation that tell you what’s happening across the platform and the front‑end.

What’s Different in Sitecore AI?

Logs and diagnostics are centralized. You access them via the Sitecore AI portal and the Sitecore CLI, organized by environment and by role. Your front‑end application or rendering host (often a Next.js site deployed on Vercel, responsible for headless rendering and the user experience) has its own telemetry, separate from the CMS.

So, your monitoring picture spans three surfaces: Sitecore AI logs for CMS and deployment activity, rendering host telemetry for front‑end performance, and Experience Edge signals for content delivery. Together, they describe the health of the experience, not just the servers.

 

Understanding the Logging Surfaces

In Sitecore AI, logs are grouped into three primary areas that each play a distinct role in diagnosing issues:

Content Management (CM) logs

  • These are your first stop for diagnosing publishing failures, broken workflows, template errors, and serialization mismatches. When a publish fails, CM logs help you separate permissions or workflow problems from data or serialization issues.

Rendering Host logs

  • Think front‑end behavior and performance. If personalization falls back, pages render slowly, or API responses seem sluggish, the rendering host logs surface cache misses, API latency, and rendering errors that directly impact Core Web Vitals and UX.

Deployment logs

  • The “narrative” of your CI/CD run. When a build fails or a promotion doesn’t complete, deployment logs pinpoint CLI command failures, artifact mismatches, or environment configuration issues. They also provide stage-by-stage visibility (provisioning, build, deploy, post‑actions), which speeds triage and supports audits.

Access these logs quickly in the Deploy app’s environment view or programmatically via the Sitecore CLI for listing, viewing, and downloading logs as part of your pipeline artifacts.

Integration Patterns for Enterprise Monitoring

Centralizing is helpful; correlating is essential. The pragmatic pattern I recommend is:

Sitecore AI → Azure Monitor/Application Insights

  • Forward CMS and deployment logs so you can correlate spikes in errors with deployments, content bursts, or traffic changes. KQL lets you slice by environment, role, and severity for root cause analysis.
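
As a sketch, assuming the forwarded logs land in a custom Log Analytics table (the table and column names below are hypothetical, not a documented Sitecore schema), such a KQL slice might look like:

```kusto
// Hypothetical table and columns: SitecoreLogs_CL, Environment_s, Role_s, SeverityLevel
SitecoreLogs_CL
| where TimeGenerated > ago(24h)
| where SeverityLevel >= 3                // warnings and errors only
| summarize errorCount = count() by bin(TimeGenerated, 15m), Environment_s, Role_s
| order by TimeGenerated desc
```

Overlaying the resulting time series with deployment timestamps makes the "error spike after release" pattern immediately visible.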

Rendering Host → APM (Datadog/New Relic)

  • Use front‑end analytics to track TTFB, cache hit ratio, route errors, and API dependency health. Pair this with Vercel’s own analytics for global edge performance.

Experience Edge → Webhook Monitoring

  • Register webhooks so you can track publish‑to‑Edge latency and trigger alerts or redeploys when content propagation slows or fails.
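
A minimal sketch of the latency calculation such a webhook handler would perform, assuming the payload carries ISO 8601 timestamps for the publish and arrival events (the field values here are made up; the real Edge payload schema may differ):

```python
from datetime import datetime


def publish_to_edge_latency(published_at: str, received_at: str) -> float:
    """Seconds between a publish event and its arrival at Experience Edge.

    The timestamp format (ISO 8601 strings) is an assumption about the
    webhook payload, not the documented Edge schema.
    """
    published = datetime.fromisoformat(published_at)
    received = datetime.fromisoformat(received_at)
    return (received - published).total_seconds()


# Example: alert when propagation exceeds a threshold
latency = publish_to_edge_latency("2025-11-24T10:00:00+00:00",
                                  "2025-11-24T10:00:42+00:00")
if latency > 30:
    print(f"Publish-to-Edge latency high: {latency:.0f}s")  # prints: Publish-to-Edge latency high: 42s
```

The same number, emitted as a metric, is what feeds the publish-to-Edge latency alerts discussed in the metrics section.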

SIEM Integration (today’s reality)

  • For unified audit across Sitecore SaaS, stream supported Common Audit Logs (CAL) via webhooks (Personalize/CDP/Connect) and, for Sitecore AI, pull environment and deployment logs via CLI on a schedule until broader CAL coverage lands.

Metrics That Matter

In a SaaS world, traditional “server up” checks don’t describe user experience. Focus on metrics that map directly to reliability and business impact:

Deployment success & promotion health

  • Failed builds or promotions block content and features. Tracking rates and mean time to recovery reveals pipeline reliability.

Publish‑to‑Edge latency

  • Authors expect content to reach Experience Edge quickly. Latency here affects real‑time campaigns, previews, and editorial confidence.

Rendering host performance

  • P95/P99 TTFB, cache hit ratio, and error rates impact Core Web Vitals, SEO, and conversion. They also help you spot regressions after releases.

Agent activity & governance

  • With Sitecore AI’s agentic capabilities, monitoring agent runs, approvals, and failures protects compliance and prevents unintended bulk changes.

Governance Signals in Sitecore AI

Sitecore AI introduces Agentic Studio: a governed workspace to design, run, and oversee automation. Work is organized around four building blocks: Agents, Flows, Spaces, and Signals. Practically, that means you can automate complex operations while maintaining human review and auditability.

  • Agents: Handle focused tasks (e.g., content migration, metadata updates).
  • Flows: Orchestrate agents into multi‑step workflows with visibility across stages.
  • Spaces: Provide shared context for teams to collaborate on active runs.

  • Signals: Surface trends and triggers that can start or adjust flows.

Together, these building blocks give marketers and developers a safe frame to scale automation without losing control.

How Agent Flows Are Monitored

Monitoring agent flows blends product‑level visibility with enterprise analytics:

Run visibility in Agentic Studio:

  • Each flow run exposes status, participants (human and agent), timestamps, and outcomes. Because flows are orchestrated in a governed workspace, you get “full visibility” into progression from brief to publish/optimization, including approvals where human review is required.

Governance signals and audit trails:

  • Signals can trigger flows and also act as governance inputs (for example, trend alerts requiring approval). Capture audit trails of who initiated a run, which agents executed steps, and what content or configurations changed.

Alerting and dashboards:

  • Mirror key flow events into your monitoring plane: start, paused awaiting approval, failed step, completed. Route these into Azure Monitor or your SIEM so operations sees agentic activity alongside deployments and content events.

Integration approach:

  • Where Common Audit Logs (CAL) are available (Personalize/CDP/Connect), stream events via webhooks. For Sitecore AI and Agentic activity not yet covered by CAL, use scheduled CLI log exports and APIs the platform exposes to assemble a unified view. Normalize event schemas (runId, agentId, flowId, environment, severity) to enable cross‑product correlation.

The outcome: agent automation becomes observable. Teams can answer “what changed, when, by whom, and why” and tie those answers to performance and compliance dashboards.
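
The normalization described above can be sketched as a small mapping layer. The target keys are the ones named in the text (runId, agentId, flowId, environment, severity); the source field names for each feed are hypothetical examples, not documented Sitecore schemas:

```python
# Map heterogeneous event payloads onto one schema so CAL webhook events
# and CLI log exports can be correlated side by side. The per-source field
# names below are invented for illustration.
TARGET_KEYS = ("runId", "agentId", "flowId", "environment", "severity")

FIELD_MAPS = {
    "cal_webhook": {"run_id": "runId", "agent": "agentId",
                    "flow": "flowId", "env": "environment", "level": "severity"},
    "cli_export":  {"RunId": "runId", "AgentId": "agentId",
                    "FlowId": "flowId", "Environment": "environment",
                    "Severity": "severity"},
}


def normalize(source: str, event: dict) -> dict:
    """Return the event re-keyed to the common schema; missing fields are None."""
    mapping = FIELD_MAPS[source]
    out = {key: None for key in TARGET_KEYS}
    for src_key, dst_key in mapping.items():
        if src_key in event:
            out[dst_key] = event[src_key]
    return out


normalize("cal_webhook", {"run_id": "r-42", "env": "prod", "level": "error"})
# {'runId': 'r-42', 'agentId': None, 'flowId': None,
#  'environment': 'prod', 'severity': 'error'}
```

Once every feed emits the same keys, cross-product correlation in a SIEM or in Azure Monitor reduces to a join on runId and environment.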

Final Thoughts

Observability in Sitecore AI isn’t about servers; it’s about experience health and trusted automation. When you combine SaaS‑native logs, front‑end telemetry, Edge events, and agentic governance signals, you gain a single narrative across deployments, content, and automation: the narrative you need to keep teams fast, safe, and accountable.
