Cloud Articles / Blogs / Perficient — Expert Digital Insights
https://blogs.perficient.com/category/services/platforms-and-technology/cloud/

2026 Regulatory Reporting for Asset Managers: Navigating the New Era of Transparency
https://blogs.perficient.com/2026/02/20/2026-regulatory-reporting-for-asset-managers-navigating-the-new-era-of-transparency/
Fri, 20 Feb 2026 20:01:52 +0000

The regulatory landscape for asset managers is shifting beneath our feet. It’s no longer just about filing forms; it’s about data granularity, frequency, and the speed at which you can deliver it. As we move into 2026, the Securities and Exchange Commission (SEC) has made its intentions clear: they want more data, they want it faster, and they want it to be more transparent than ever before.

For financial services executives and compliance professionals, this isn’t just a compliance headache—it’s a data infrastructure challenge. The days of manual spreadsheets and last-minute scrambles are over. The new requirements demand a level of agility and precision that legacy systems simply cannot support. If you’re still relying on manual processes to meet these evolving standards, you’re not just risking non-compliance; you’re risking your firm’s operational resilience.

The Shifting Landscape: More Data, More Often

The theme for 2026 is “more.” More frequent filings, more detailed disclosures, and more scrutiny. The SEC’s push for modernization is driven by a desire to better monitor systemic risk and protect investors, but for asset managers, it translates to a significant operational burden.

Take Form N-PORT, for example. What was once a quarterly obligation with a 60-day lag is transitioning to a monthly filing requirement due within 30 days of month-end. This tripling of filing frequency doesn’t just mean three times the work; it means your data governance and reporting engines must be “always-on,” capable of aggregating and validating portfolio data on a continuous cycle.

The “Big Three” for 2026: Form PF, 13F, and N-PORT

While there are numerous reports to manage, three stand out as critical focus areas for 2026: Form PF, Form 13F, and Form N-PORT. Each has undergone significant changes or is subject to new scrutiny that demands your attention.

Form PF: The Private Fund Data Deep Dive

The amendments to Form PF, adopted in February 2024, represent a sea change for private fund advisers. With a compliance date of October 1, 2026, these changes require more granular reporting on fund structures, exposures, and performance. Large hedge fund advisers must now report within 60 days of quarter-end, and the scope of data required—from detailed asset class breakdowns to counterparty exposures—has expanded significantly. This isn’t just another new report. It’s a comprehensive audit of your fund’s risk profile, delivered quarterly.

Form 13F: The Institutional Standard

For institutional investment managers exercising discretion over $100 million or more in 13(f) securities, Form 13F remains a cornerstone of transparency. Filed quarterly within 45 days of quarter-end, this report now requires the companion filing of Form N-PX to disclose proxy votes on executive compensation. This linkage between holdings and voting records adds a new layer of complexity, requiring firms to seamlessly integrate data from their portfolio management and proxy voting systems.

Form N-PORT: The Monthly Sprint

A shift to monthly N-PORT filings is a game-changer for registered investment companies. The requirement to file within 30 days of month-end means that your month-end close process must be tighter than ever. Any delays in data reconciliation or validation will eat directly into your filing window, leaving little margin for error.

The Operational Burden: Hidden Costs of Manual Processes

It’s easy to underestimate the time and effort required to produce these reports. A “simple” quarterly update can easily consume a week or more of a compliance officer’s time when you factor in data gathering, reconciliation, and review.

For a large hedge fund adviser, we at Perficient have seen a full Form PF filing take two weeks or more of dedicated effort from multiple teams. When you multiply this across all of your reporting obligations, the cost of manual processing becomes staggering. And that's before you consider the opportunity cost: time your team spends wrangling data is time they aren't spending on strategic initiatives or risk management.

The Solution: Automation and Cloud Migration

The only viable path forward is automation. To meet the demands of 2026, asset managers must treat regulatory reporting as a data engineering problem, not just a compliance task. This means moving away from siloed spreadsheets and towards a centralized, cloud-native data platform.

By migrating your data infrastructure to the cloud, you gain the scalability and flexibility needed to handle large datasets and complex calculations. Automated data pipelines can ingest, validate, and format your data in real-time, reducing the “production time” from weeks to hours. This isn’t just about efficiency; it’s about accuracy and peace of mind. When your data is governed and your processes are automated, you can file with confidence, knowing that your numbers are right.

Key Regulatory Reports at a Glance

To help you navigate the 2026 reporting calendar, we’ve compiled a summary of the key reports, their purpose, and what it takes to get them across the finish line.

SEC Forms Asset Managers Must File

Your Next Move

If your firm would like assistance designing or adopting regulatory reporting processes, or migrating your data infrastructure to the cloud, with a consulting partner that has deep industry expertise, reach out to us here.

Building a Marketing Cloud Custom Activity Powered by MuleSoft
https://blogs.perficient.com/2026/02/12/building-a-marketing-cloud-custom-activity-powered-by-mulesoft/
Thu, 12 Feb 2026 17:37:13 +0000

The Why…

Salesforce Marketing Cloud Engagement is incredibly powerful at orchestrating customer journeys, but it was never designed to be a system of record. Too often, teams work around that limitation by copying large volumes of data from source systems into Marketing Cloud data extensions—sometimes nightly, sometimes hourly—just in case the data might be needed in a journey. This approach works, but it comes at a cost: increased data movement, synchronization challenges, latency, and ongoing maintenance that grows over time.

Custom Activities, which are surfaced in Journey Builder, open the door to a different model. Instead of forcing all relevant data into Marketing Cloud ahead of time, a journey can request exactly what it needs at the moment it needs it. When you pair a Custom Activity with MuleSoft, Marketing Cloud can tap into real-time, orchestrated data across your enterprise—without becoming another place where that data has to live.

Example 1: Weather

Consider a simple example like weather-based messaging. Rather than pre-loading weather data for every subscriber into a data extension, a Custom Activity can call an API at decision time, retrieve the current conditions for a customer’s location, and immediately branch the journey or personalize content based on the response. The data is used once, in context, and never stored unnecessarily inside Marketing Cloud.

Example 2: Enterprise Data

The same pattern becomes even more compelling with enterprise data. Imagine a post-purchase journey that needs to know the current status of an order, a shipment, or a service case stored in a system like Data 360. Instead of replicating that operational data into Marketing Cloud—and keeping it in sync—a Custom Activity can call MuleSoft, which in turn retrieves and aggregates the data from the appropriate back-end systems and returns only what the journey needs to proceed.

Example 3: URL Shortener for SMS (Real-Time)

While Marketing Cloud Engagement does provide its own form of a URL shortener, some companies want to use Bitly. Typically, in order to use a Bitly URL we would have to move our logic to Server-Side JavaScript (SSJS) so the API call to Bitly could be made there, and then use the resulting URL in our text message. SSJS forces us into Automation Studio, which cannot run in real time and must be scheduled. This is important to note: being able to make API calls within the flow of a Journey is very powerful and helps meet more real-time use cases. With these Custom Activities we can ask MuleSoft to call the Bitly API, which returns the shortened URL so it can be used in the email or SMS message.
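As a rough sketch of what the MuleSoft flow would do behind the scenes (shown here in plain JavaScript for illustration; in MuleSoft this would be an HTTP Request connector plus a DataWeave transform), the Bitly v4 shorten call boils down to building a small JSON payload and mapping the response link into an outArgument. The `shortUrl` outArgument name is an assumption for this example:

```javascript
// Illustrative only: the request/response shaping a MuleSoft flow would do
// around Bitly's v4 /shorten endpoint. The token placeholder would live in
// MuleSoft secure properties, never in the Custom Activity itself.
function buildShortenRequest(longUrl) {
  return {
    method: 'POST',
    url: 'https://api-ssl.bitly.com/v4/shorten',
    headers: {
      Authorization: 'Bearer <BITLY_TOKEN>', // placeholder
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ long_url: longUrl }),
  };
}

// Map Bitly's response into the outArguments shape Journey Builder consumes.
function toOutArguments(bitlyResponse) {
  return { shortUrl: bitlyResponse.link };
}
```

The journey itself never sees Bitly: it only sees the `shortUrl` outArgument come back from the execute call, ready to drop into an SMS.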

This is where MuleSoft truly shines. It acts as a clean abstraction layer between Marketing Cloud and your enterprise landscape, handling authentication, transformation, orchestration, and governance. Marketing Cloud stays focused on customer engagement, while MuleSoft owns the complexity of integrating with source systems. The result is a more scalable, real-time, and maintainable architecture—one that reduces data duplication, respects system boundaries, and enables richer, more contextual customer experiences.

The How….

So how does this actually work in practice? In the next section, we’ll walk through how a Marketing Cloud Custom Activity can call a MuleSoft API in the middle of a Journey, receive a response in real time, and use that data to drive decisions or personalization. We’ll focus on the key building blocks—what lives in Marketing Cloud, what belongs in MuleSoft, and how the two communicate—so you can see how this pattern comes together without turning Marketing Cloud into yet another integration layer.

Part 1 – Hosted Files

Every Marketing Cloud Custom Activity starts with hosted files. These files provide the user interface and configuration that Journey Builder interacts with, making them the foundation of the entire solution. At a minimum, this includes five main files/folders.

  1. index.html – This is what you see in Journey Builder when you click on the Custom Activity to configure it.
  2. config.json – This holds the MuleSoft endpoints to call and the output arguments that will be used.
  3. customActivity.js – The JavaScript that runs behind the index.html page.
  4. postmonger.js – More JavaScript to support the index.html page.
  5. A folder called images must exist with a single icon.png image in it. This image is shown within Journey Builder.


These files tell Marketing Cloud how the activity behaves, what endpoints it uses, and how it appears to users when they drag it onto a journey. While the business logic ultimately lives elsewhere (within MuleSoft in our example), hosted files are what make the Custom Activity feel native inside Journey Builder.

In this pattern, hosted files are intentionally lightweight. Their primary responsibility is to capture configuration input from the marketer—such as which API operation to call, optional parameters, or behavior flags—and pass that information along when the journey executes. They are not responsible for complex transformations, orchestration, or direct system-to-system integrations. By keeping the hosted files focused on presentation and configuration, you reduce coupling with backend systems and make the Custom Activity easier to maintain, update, and reuse across different journeys.

An easy place to do a simple proof of concept is GitHub, if you want to try this yourself. You can create these four files and one folder in a repo. If you use GitHub, you do have to enable the Pages functionality to make that repo publicly reachable. This public URL will then be used when we configure the Installed Package in Marketing Cloud Engagement later.

In production, Custom Activity config.json and UI assets should be hosted on an enterprise-grade HTTPS platform like Azure App Service, AWS CloudFront/S3, or Heroku, not GitHub.

One thing I had to overcome is that the config.json gets cached at the Marketing Cloud server level, as discussed in this post. So when I had to make changes to my config.json, I would create a new folder (v2, v3) in my repository and then use that path in my Installed Package in the Component added in Journey Builder.

Part 2 – API Server – Mulesoft

This is really the beauty here. Instead of building API calls in SSJS that are hard to debug, difficult to scale, and hard to secure, we get to pass all of that off to an enterprise API platform like MuleSoft. It really is the best of both worlds. There are basically two main pieces on the MuleSoft side: A) five endpoints to develop and B) security.

The Five Endpoints.

Journey Builder uses four lifecycle endpoints to manage the activity and one execute endpoint to process each contact and return outArguments used for decisioning and personalization.

The five endpoints that have to be developed in MuleSoft are:

| Endpoint  | Called When           | Per Contact? | Returns outArguments? |
|-----------|-----------------------|--------------|-----------------------|
| /save     | User saves config     | ❌           | ❌                    |
| /validate | User publishes        | ❌           | ❌                    |
| /publish  | Journey goes live     | ❌           | ❌                    |
| /execute  | Contact hits activity | ✅           | ✅                    |
| /stop     | Journey stops         | ❌           | ❌                    |

For save, validate, publish, and stop, the MuleSoft endpoints need to return a 200 status code and, in the most basic example, can return an empty JSON object of {}.

For the execute method, it should also return a 200 status code and, for any outArguments, simple JSON that looks like this: { "status": "myStatus" }
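For illustration, the response contract of the five endpoints can be modeled as two small handlers (plain JavaScript here; in MuleSoft these would be flows behind an APIkit router). The `selectedField` name matches the inArgument this post's customActivity.js sends; the echo-it-back logic is just an assumption for the sketch:

```javascript
// save, validate, publish, stop: a 200 with an empty JSON object is enough.
function handleLifecycle(_endpoint) {
  return { status: 200, body: {} };
}

// execute: Journey Builder POSTs the configured inArguments for each contact;
// the response's outArguments feed Decision Splits and personalization.
function handleExecute(payload) {
  const inArgs = (payload && payload.inArguments) || [];
  const selectedField = inArgs.length ? inArgs[0].selectedField : undefined;
  // Echo the bound journey field back as "status", falling back to the
  // default declared in config.json.
  return { status: 200, body: { status: selectedField || 'DefaultStatus' } };
}
```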

The Security.

The first piece of security is configured in the config.json file. There is a useJwt key that can be either true or false for each endpoint. If it is true, then MuleSoft will receive an encoded string signed with the JWT Signing Secret that was created from the Installed Package in Marketing Cloud. If useJwt is false, then MuleSoft will just receive the plain JSON. For production-level work we should make sure useJwt is true.
We can also use an OAuth 2.0 Bearer Token. We want to make sure that our MuleSoft endpoints only respond to calls coming from Marketing Cloud Engagement.

Part 3 – Journey Builder – Custom Activities

Once the configuration details are set up in the app described in Part 2, creating the custom activity and adding it to the Journey is pretty quick.
  1. Go to the 'Installed Package' in Setup and create a new app following these steps.
    1. When you add your 'Component' to the Installed Package, selecting 'Customer Updates' in the 'Category' drop-down worked for me.
    2. My 'Endpoint URL' had a format like this: https://myname.github.io/my_repo_name/v3/
  2. Create a new Journey.
  3. Your new Custom Activity will show up in the Components panel on the left-hand side. Since we selected 'Customer Updates' in step 1 above, our 'Send to MuleSoft V3a' Custom Activity shows in that section. The name under the icon comes from the config.json file. The image is the icon.png from the images folder.
  4. Once you drag your Custom Activity onto the Journey Builder page, you will be able to click on it to configure it.
  5. The user interface from index.html will display when you click on it so you can configure your Custom Activity. Note that this user interface could be changed to collect whatever configuration needs to be collected.
  6. When the 'Done' buttons are clicked on the page, the JavaScript runs and saves the configuration details into Journey Builder itself. In my example the gray and blue 'Done' buttons are hooked to the same JavaScript and do the same thing.
Part 4 – How to use the Custom Activity

outArguments

Now that we have our Custom Activity configured and in our journey, the integration with MuleSoft becomes a configuration detail, which is great for admins. In the config.json file there are two places where the outArguments are placed.
The first is in the arguments section towards the top. Here I can provide a default value for my status field, which in this case is the very intuitive "DefaultStatus". 🙂
"arguments": {
   "execute": {
     "inArguments": [],
     "outArguments": [
       {
         "status": "DefaultStatus"
       }
     ],
     "url": "https://mymuleAPI.partofurl.usa-e1.cloudhub.io/api/marketingCloud/execute",
     "useJwt": false,
     "timeout": 60000,
     "retryCount": 3,
     "retryDelay": 3000,
     "concurrentRequests": 5
   }
 },

The second place is lower in the config.json file, in the schema section, and describes the actual data type for my output variable. We can see the status variable is a 'Text' field that has access = visible and direction = out.

"schema":{
      "arguments":{
          "execute":{
              "inArguments": [],
              "outArguments":[
                  {
                      "status":{
                          "dataType":"Text",
                          "isNullable":true,
                          "access":"visible",
                          "direction":"out"
                      }
                  }
              ]
          }
      }
  }

Note in the example below that I did not use a typical status value like 'Not Started', 'In Progress', and 'Done'. That would have made more sense. 🙂 Instead I was running five records through my journey with various versions of my last name: Luschen, Luschen2, Luschen3, Luschen4, and Luschen5. MuleSoft received these different spellings through the JSON being passed over, parsed them out of the incoming JSON, and then injected them into the response JSON in the status field. This is what the incoming data extension looked like.


An important piece of the JavaScript turned out to be setting the isConfigured flag to true in the customActivity.js file. This makes sure Journey Builder understands that the node has been configured when the journey is 'Validated' before it is 'Activated'.

activity.metaData = activity.metaData || {};
activity.metaData.isConfigured = true;

Now that we have our ‘status’ field as an output from Mulesoft via the Custom Activity, I will describe how it can be used in either a Decision Split or some AmpScript.

Decision Split

The outArguments show up under the 'Journey Data' portion of the configuration screen. Once you select the 'status' outArgument, you configure the rest of the decision split like any other one you have built before.

AmpScript

These outArguments are also available as send context attributes so they are easy to use in any manner you want within your AmpScript for either email or SMS personalization.
%%[
SET @status = AttributeValue("status")
]%%
%%=v(@status)=%%

The Wrap-up…

As you let the flexibility of these Custom Activities sink in, you will see how many patterns they enable. The more data we can surface to our marketing team, the more dynamic, personalized, and engaging the content will become. While we all see more campaigns and use cases being developed on the new Agentforce Marketing, we all know that Marketing Cloud Engagement has some legs to it yet. I hope this post has given you some ideas to make your Marketing team look like heroes as they use Journey Builder to its fullest potential!

I want to thank my Mulesoft experts Anusha Danda and Jana Pagadala for all of their help!

Please connect with me on LinkedIn for more conversations!  I am here to help make you a hero with your next Salesforce project.

Example Files…

Config.JSON

{  
  "workflowApiVersion": "1.1",
  "metaData": {
    "icon": "images/icon.png",
    "category": "customer",
    "isConfigured": true,
    "configOnDrop": false
  },
  "type": "REST",
  "lang": {
    "en-US": {
      "name": "Send to MuleSoft V3a",
      "description": "Calls MuleSoft to orchestrate downstream systems V3a."
    }
  },
  "arguments": {
    "execute": {
      "inArguments": [],
      "outArguments": [
        {
          "status": "DefaultStatus"
        }
      ],
      "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/execute",
      "useJwt": true,
      "timeout": 60000,
      "retryCount": 3,
      "retryDelay": 3000,
      "concurrentRequests": 5
    }
  },
  "configurationArguments": {
    "applicationExtensionKey": "MY_KEY_ANYTHING_I_WANT_MULESOFT_TEST",
    "save":    { "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/save",    "useJwt": true },
    "publish": { "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/publish", "useJwt": true },
    "validate":{ "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/validate","useJwt": true },
    "stop":    { "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/stop",    "useJwt": true }
  },
  "userInterfaces": {
    "configModal": { "height": 480, "width": 480 }
  },
  "schema":{
      "arguments":{
          "execute":{
              "inArguments": [],
              "outArguments":[
                  {
                      "status":{
                          "dataType":"Text",
                          "isNullable":true,
                          "access":"visible",
                          "direction":"out"
                      }
                  }
              ]
          }
      }
  }
}

Index.html

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <title>Terry – JB → Mule Custom Activity</title>
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <style>
    body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Arial, sans-serif; margin: 24px; }
    label { display:block; margin-top: 16px; font-weight:600; }
    input, select, button { padding: 8px; font-size: 14px; }
    button { margin-top: 20px; }
    .hint { color:#666; font-size:12px; }
  </style>
</head>
<body>
  <h2>Send to MuleSoft – Custom Activity</h2>
  <p class="hint">Configure the API URL and (optionally) bind a Journey field.</p>

  <label for="apiUrl">MuleSoft API URL</label>
  <input id="apiUrl" type="url" placeholder="https://api.example.com/journey/execute" style="width:100%" />

  <label for="fieldPicker">Bind a field from Entry Source (optional)</label>
  <select id="fieldPicker">
    <option value="">— none —</option>
  </select>

  <button id="done">Done</button>

  <!-- Postmonger must be local in your repo -->
  <script src="./postmonger.js"></script>
  <!-- Your Postmonger client logic -->
  <script src="./customActivity.js?v=2026-02-02v1"></script>
</body>
</html>


CustomActivity.js

/* global Postmonger */
(function () {
  'use strict';

  // Create the Postmonger session (bridge to Journey Builder)
  const connection = new Postmonger.Session();

  // Journey Builder supplies this payload when we call 'ready'
  let activity = {};
  let schema = [];
  let pendingSelectedField = null;  // holds saved token until options exist

  document.addEventListener('DOMContentLoaded', () => {
    // Listen to JB lifecycle events
    connection.on('initActivity', onInitActivity);
    connection.on('requestedTokens', onTokens);
    connection.on('requestedEndpoints', onEndpoints);
    connection.on('requestedSchema', onRequestedSchema); // common pattern in field pickers
    connection.on('clickedNext', onDone);

    // Signal readiness and request useful context
    connection.trigger('ready');
    connection.trigger('requestTokens');
    connection.trigger('requestEndpoints');

    // Optionally, ask for Entry Source schema (undocumented but widely used in the field)
    connection.trigger('requestSchema');

    // Bind UI
    document.getElementById('done').addEventListener('click', onDone);
  });

  function onInitActivity (payload) {
    activity = payload || {};
    // Re-hydrate UI if the activity is being edited
    try {
      const args = (activity.arguments?.execute?.inArguments || [])[0] || {};
      if (args.apiUrl) document.getElementById('apiUrl').value = args.apiUrl;
      if (args.selectedField) document.getElementById('fieldPicker').value = args.selectedField;
      pendingSelectedField = args.selectedField;
    } catch (e) {}
  }

  function onTokens (tokens) {
    // If you ever need REST/SOAP tokens, they arrive here
    // console.log('JB tokens:', tokens);
  }

  function onEndpoints (endpoints) {
    // REST base URL for BU, if you need it
    // console.log('JB endpoints:', endpoints);
  }

  function onRequestedSchema (payload) {
    schema = payload?.schema || [];
    const select = document.getElementById('fieldPicker');

    // Keep current value if re-opening
    const current = select.value;
    // Reset options (leave the first '— none —')
    select.length = 1;

    // Populate with Entry Source keys (e.g., {{Event.APIEvent-UUID.Email}})
    schema.forEach(col => {
      const opt = document.createElement('option');
      opt.value = `{{${col.key}}}`;
      opt.textContent = col.key.split('.').pop();
      select.appendChild(opt);
    });

    if (current) select.value = current;
    if (pendingSelectedField) select.value = pendingSelectedField;
    
  }

  function onDone () {
    const apiUrl = document.getElementById('apiUrl').value?.trim() || '';
    const selectedField = document.getElementById('fieldPicker').value || '';

    // Validate minimal config
    if (!apiUrl) {
      alert('Please provide a MuleSoft API URL.');
      return;
    }
    // alert(selectedField);

    // Build inArguments that JB will POST to /execute at run time
    const inArguments = [{
      apiUrl,            // static value from UI
      selectedField      // optional mustache ref to Journey Data
    }];

    // Mutate the activity payload we received and hand back to JB
    activity.arguments = activity.arguments || {};
    activity.arguments.execute = activity.arguments.execute || {};
    activity.arguments.execute.inArguments = inArguments;

    activity.metaData = activity.metaData || {};
    activity.metaData.isConfigured = true;

    // Tell Journey Builder to save this configuration
    connection.trigger('updateActivity', activity);
  }
})();


The Missing Layer: How On-Device AI Agents Could Revolutionize Enterprise Learning
https://blogs.perficient.com/2026/02/06/the-missing-layer-how-on-device-ai-agents-could-revolutionize-enterprise-learning/
Fri, 06 Feb 2026 13:29:58 +0000

A federated architecture for self-improving skills — from every employee’s laptop to the company brain.


Every enterprise has the same problem hiding in plain sight. Somewhere between the onboarding wiki that nobody reads, the Slack threads that disappear after a week, and the senior engineer who carries half the team’s knowledge in their head — institutional knowledge is dying. Not because companies don’t try to preserve it, but because the systems we’ve built to capture it are fundamentally passive. They wait for someone to write a doc. They wait for someone to search. They never learn on their own.

What if every employee’s computer had an AI agent that watched, learned, and guided — and every night, those agents pooled what they’d learned into something smarter than any of them alone?

The State of Enterprise AI Assistants: Smart But Shallow

Today’s enterprise AI tools — Google Agentspace, Microsoft Copilot, Moveworks, Atomicwork — follow the same pattern. A large language model sits in the cloud, connected to your company’s knowledge base. Employees ask questions, the model retrieves answers. It works. But it has three fundamental limitations.

First, all intelligence is centralized. The model only knows what’s been explicitly fed into the knowledge base. It doesn’t learn from the thousands of micro-interactions employees have daily — the workarounds they discover, the mistakes they make, the shortcuts they invent.

Second, there’s no feedback loop from the edge. When a new hire spends 40 minutes figuring out that the VPN must be connected before accessing the PTO portal, that hard-won knowledge dies in their browser history. The next new hire will spend the same 40 minutes. The system never improves from use.

Third, one model serves everyone the same way. A junior developer and a senior architect get the same answers, in the same depth, with the same assumptions about what they already know.

A Different Architecture: Agents That Learn at the Edge

Imagine a three-tier system where intelligence lives at every level — on the employee’s device, on the department server, and at the company core. Each tier runs a different class of model, owns a different scope of knowledge, and communicates on a defined rhythm.

Tier 1: The On-Device Agent (7B–14B Parameters)

Every employee’s workstation runs a small but capable language model — something in the 7B to 14B parameter range, like Llama 3 8B or Qwen 2.5 14B. This model is paired with two things that make it useful: skills and memory.

Skills are structured instructions — think of them as markdown playbooks that tell the agent how to guide the user through specific tasks. A “setup-dev-environment” skill walks a new developer through installing dependencies, configuring their IDE, and running the test suite. A “code-review-checklist” skill ensures PRs meet team standards. These aren’t hardcoded — they’re living documents that the agent reads and follows, and they can be updated without retraining the model.
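As a hypothetical illustration of what such a playbook might look like (the file name, steps, and gotchas below are invented for this example, not taken from any product):

```markdown
# Skill: setup-dev-environment
When to use: a developer asks for help getting a local build running.

## Steps
1. Confirm the OS and chip (Intel vs Apple Silicon) before suggesting commands.
2. Walk through installing dependencies from the repo's README.
3. Help configure the IDE with the team's shared settings.
4. Run the test suite and triage any failures together.

## Known gotchas
- Docker builds on M-series Macs require Rosetta.
```

Because the agent reads this file at runtime, updating the playbook updates every future interaction with no retraining.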

Memory comes in two layers. Short-term memory captures the day’s interactions: what the user asked, where they got stuck, what worked, what corrections they made. This is append-only, timestamped, and stored locally. Long-term memory is a curated set of facts about the user — their role, expertise level, preferred tools, recurring tasks — that persists across sessions and personalizes every interaction.

The on-device agent is always available, even offline. It responds instantly because there’s no round-trip to a server. And critically, sensitive information — proprietary code, internal discussions, personal struggles — never leaves the machine during the workday.

Tier 2: The Department Server (40B Parameters)

Each department — Engineering, Operations, Sales — runs its own server with a more powerful model in the 40B parameter range. This server has three jobs.

Collecting learnings. On a configurable schedule — real-time, hourly, or nightly depending on the organization’s needs — each device pushes its short-term memory deltas to the department server. Not the raw conversation logs, but distilled learnings: “User discovered that the staging deploy requires flag --skip-cache after the recent infrastructure migration.” A privacy filter strips personally identifiable information before anything leaves the device.
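A minimal sketch of that device-side step might look like the following (field names, patterns, and the redaction tags are all illustrative assumptions; a real filter would be far more thorough):

```javascript
// Hypothetical device-side privacy filter: distill a raw interaction into a
// learning delta and redact obvious PII before it leaves the machine.
const PII_PATTERNS = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, '<email>'],                  // email addresses
  [/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, '<phone>'],        // US-style phone numbers
];

function redact(text) {
  return PII_PATTERNS.reduce((t, [re, tag]) => t.replace(re, tag), text);
}

// Push the distilled insight, never the raw conversation log.
function toLearningDelta(entry) {
  return {
    timestamp: entry.timestamp,
    learning: redact(entry.summary),
  };
}
```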

Semantic merging. This is where the 40B model earns its keep. When Device A reports “Docker builds fail on M-series Macs without Rosetta” and Device B reports “ARM architecture causes container build errors on Apple Silicon,” the server recognizes these as the same insight expressed differently. It merges them into a single, authoritative entry in the department’s golden copy — the canonical knowledge base for that team.

Conflict resolution with authority. Not all learnings are equal. The system uses an authority model inspired by API authentication scopes. Each device agent carries a token encoding the user’s role and trust level. A junior developer’s correction gets queued for review. A senior engineer’s correction is auto-merged. A team lead can approve or reject queued items. This prevents the golden copy from being polluted by well-intentioned but incorrect contributions while ensuring high-confidence knowledge flows freely.
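A sketch of how that authority routing might work, with invented trust levels and a merge threshold of my own choosing:

```python
from dataclasses import dataclass, field

# Illustrative trust levels -- the architecture describes roles, not specific numbers.
TRUST = {"junior": 1, "senior": 2, "lead": 3}
AUTO_MERGE_THRESHOLD = 2  # senior and above merge without review

@dataclass
class GoldenCopy:
    entries: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def submit(self, learning: str, role: str) -> str:
        """Route a contribution based on the submitter's authority token."""
        if TRUST[role] >= AUTO_MERGE_THRESHOLD:
            self.entries.append(learning)   # high-trust: auto-merge
            return "merged"
        self.review_queue.append(learning)  # low-trust: queued for a lead to approve
        return "queued"

    def approve(self, learning: str):
        """A team lead promotes a queued item into the golden copy."""
        self.review_queue.remove(learning)
        self.entries.append(learning)
```

Junior contributions still flow in; they just pass through a human gate first.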

After merging, the department server pushes updated skills back to all devices. Tomorrow morning, when a new hire boots up, their agent already knows about the --skip-cache flag — because someone else discovered it yesterday.

Tier 3: The Company Master Server (70B Parameters)

At the top sits the most powerful model — 70B parameters — responsible for the company-wide knowledge layer. This server doesn’t communicate with individual devices. It only syncs with department servers, exchanging golden copies on a daily or weekly cadence.

The key constraint: departments don’t share raw learnings with each other. Engineering doesn’t see Sales’ objection-handling patterns; Sales doesn’t see Engineering’s debugging workflows. This is both a privacy boundary and a relevance filter — most departmental knowledge is only useful within that department.

But the master server can synthesize cross-cutting insights that no single department would discover alone. If Engineering’s golden copy contains “API response times increased 3x after the v2.4 release” and Sales’ golden copy contains “customer complaints about dashboard loading times spiked this week,” the 70B model connects the dots. It pushes a unified advisory to both departments: Engineering gets “customer-facing impact confirmed — prioritize the performance regression,” and Sales gets “engineering is aware of the dashboard slowdown — expected resolution timeline: 48 hours.”

The Daily Rhythm

The system operates on a natural cycle:

Morning. Department servers push updated skills to all devices. Each agent loads the latest golden copy fragments relevant to its user’s role. A new developer gets the freshly refined “setup-dev-environment” skill. A senior engineer gets the latest “production-incident-response” playbook with patterns learned from last week’s outage.

Workday. Each on-device agent guides its user, answers questions, and logs everything to short-term memory. When a user corrects the agent — “No, that’s wrong, you need to run migrations before starting the server” — the agent captures the correction with the user’s authority level.

Sync interval. Based on organizational preference, devices push their learnings to the department server. This could be real-time streaming for fast-moving teams, hourly batches for a balance of freshness and bandwidth, or nightly bulk uploads for organizations prioritizing minimal disruption.

Server processing. The department’s 40B model performs semantic merging — deduplicating, resolving conflicts, filtering PII, and distilling raw observations into authoritative skill updates. High-trust contributions go straight to the golden copy. Lower-trust contributions are queued for review.

Company sync. On a separate, slower cadence, department servers exchange golden copies with the company master. The 70B model looks for cross-departmental patterns and pushes synthesized insights back down.

The Interface: A Chatbot and Coding Agent on Every Machine

The three-tier architecture is the brain. But what the employee actually interacts with is a local chatbot and coding agent running on their machine — powered by the on-device model and grounded in the golden copy that was pushed down that morning.

This isn’t a generic AI assistant. It’s an agent that knows the company’s way of doing things, because the golden copy is the company’s accumulated, distilled operational knowledge. Every answer, every suggestion, every code change it proposes is informed by the patterns, standards, and hard-won lessons that the entire department has contributed to.

For Developers: A Coding Agent That Knows Your Codebase Standards

A developer opens their IDE and the on-device coding agent is available inline — similar to how tools like GitHub Copilot or Cursor work today, but backed by the department’s golden copy rather than a generic training corpus. When the developer writes a new API endpoint, the agent doesn’t just autocomplete syntax. It suggests the error handling pattern that the team standardized last quarter. It flags that the developer is about to use a deprecated internal library that three other engineers already migrated away from. It proposes the exact test structure that passed code review most consistently, based on patterns the department server distilled from hundreds of merged PRs.

If the developer asks “how do I connect to the staging database?” the agent doesn’t give a generic PostgreSQL tutorial. It gives the team’s specific connection string format, reminds them to use the read-only replica for queries, and mentions the VPN requirement — all because those details were learned by other developers’ agents, merged into the golden copy, and pushed down as part of this morning’s skill update.

For New Hires: A Conversational Onboarding Guide

A new operations hire opens the chatbot on day one and simply asks: “What should I do first?” The agent responds with a structured onboarding path tailored to their role — not from a static wiki, but from a living skill that has been refined by the struggles and discoveries of every previous new hire. It walks them through account setup, tool installation, and first tasks step by step, answering follow-up questions in context.

When the new hire asks a question the agent can’t answer confidently, it says so — and logs the gap. That gap becomes a learning signal: if three new hires in a row ask the same unanswered question, the department server flags it as a missing skill that needs to be authored by a senior team member. The system doesn’t just answer questions. It discovers which questions should have answers but don’t yet.
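That learning signal is easy to picture in code. A toy sketch, using exact string matching where a real system would need the department model to recognize semantically equivalent questions (the threshold of three is taken from the example above):

```python
from collections import Counter

class GapDetector:
    """Flag questions the agent repeatedly fails to answer as missing skills."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.unanswered = Counter()

    def log_unanswered(self, normalized_question: str) -> bool:
        """Record a low-confidence answer; return True once the gap should be flagged."""
        self.unanswered[normalized_question] += 1
        return self.unanswered[normalized_question] >= self.threshold
```

Once flagged, the gap becomes a work item for a senior team member to author the missing skill.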

For Everyone: A Knowledge Q&A Layer

Beyond coding and onboarding, the chatbot serves as a universal knowledge interface. “What’s the process for requesting a new AWS account?” “Who owns the billing microservice?” “What changed in the deployment pipeline last week?” These questions get answered instantly from the golden copy, with the confidence that the answers reflect the department’s current, collectively validated understanding — not a stale Confluence page from 2023.

The agent can also proactively surface relevant knowledge. If it detects that a developer is working on the authentication module (based on file context), it might surface a note from the golden copy: “Reminder: the auth module has a known race condition under high concurrency. See the workaround documented after the January incident.” This isn’t the agent being clever — it’s the golden copy doing its job, putting the right knowledge in front of the right person at the right time.
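The surfacing mechanism can be as plain as a lookup from file context to golden-copy notes. A toy sketch, with paths and notes invented purely for illustration:

```python
# Toy context matcher: map file-path prefixes to golden-copy advisories.
# The paths and note text here are invented for illustration.
GOLDEN_COPY_NOTES = {
    "src/auth/": "Known race condition under high concurrency -- see the January incident workaround.",
    "src/billing/": "Billing service is owned by the payments team; coordinate schema changes with them.",
}

def surface_notes(open_file_path: str) -> list:
    """Return any golden-copy notes relevant to the file the user is editing."""
    return [note for prefix, note in GOLDEN_COPY_NOTES.items()
            if open_file_path.startswith(prefix)]
```

In practice the matching would be richer than prefix lookup, but the shape is the same: context in, curated knowledge out.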

Why On-Device Matters

Running a model on every employee’s machine isn’t just an architectural choice — it unlocks capabilities that cloud-only systems can’t match.

Privacy by design. Code, internal communications, and personal context never leave the device during work hours. Only distilled, anonymized learnings sync to the server. This matters enormously for regulated industries and for employee trust.

Zero-latency guidance. The agent responds in milliseconds, not seconds. For a developer in flow state, the difference between an instant inline suggestion and a 2-second cloud round-trip is the difference between staying focused and being interrupted.

Personalization without centralization. The on-device agent knows this user’s preferences, skill level, and work patterns. It adapts its explanations, adjusts its depth, and remembers past conversations — all locally, without the server needing to maintain per-user state.

Offline resilience. The agent works on airplanes, in server rooms with restricted connectivity, and during cloud outages. The skills it loaded that morning are sufficient for most guidance tasks.

The Federated Learning Parallel

This architecture mirrors a well-established pattern in machine learning: federated learning. Google uses it to improve phone keyboards — each device trains locally on your typing patterns, sends only model weight updates (not your texts) to a central server, and the server aggregates improvements that benefit all users.

The difference is that traditional federated learning operates on model weights — opaque numerical tensors. This system operates on natural-language skills and memories — human-readable markdown that can be version-controlled, audited, and manually edited. An engineering manager can open the golden copy, read every skill in plain English, and decide whether a particular learning should be promoted, revised, or rejected. This transparency is critical for enterprise adoption where auditability and human oversight are non-negotiable.

There’s also a conceptual parallel to knowledge distillation in ML research, where a large “teacher” model’s knowledge is compressed into a smaller “student” model for edge deployment. Here, the 70B company model’s synthesized insights are distilled into skill updates that the 7B device models can act on — not through weight transfer, but through updated natural-language instructions.

Concrete Scenarios

New Developer Onboarding (Week 1)

Monday morning. The developer’s laptop has a 7B model loaded with the Engineering department’s latest skills. The “new-hire-onboarding” skill activates automatically.

The agent walks through environment setup step by step. At step 4, the developer hits an error: node-gyp fails on their specific macOS version. They spend 15 minutes finding the fix on Stack Overflow and tell the agent: “I needed to install Xcode Command Line Tools first — add that as a prerequisite.”

The agent logs this to short-term memory with the user’s authority level (junior). At the next sync cycle, the department server receives this learning. Since three other new hires hit the same issue last month (already in the golden copy as a known friction point), the server’s 40B model upgrades the severity and adds the prerequisite to the onboarding skill.

Tuesday morning, the next new hire’s agent already includes: “Before proceeding, verify Xcode Command Line Tools are installed: xcode-select --install.”

Cross-Department Insight Discovery

The Engineering golden copy contains: “API latency P99 increased from 200ms to 800ms after deploying service mesh v3.2.”

The Sales golden copy contains: “Three enterprise prospects paused contract negotiations citing ‘platform performance concerns’ this quarter.”

Neither department connected these. During the weekly company sync, the master 70B model identifies the correlation and pushes an advisory to both: Engineering receives a business-impact escalation, and Sales receives a technical context update with an estimated resolution timeline sourced from Engineering’s incident tracking.

Open Questions and Honest Limitations

This architecture is a synthesis of existing building blocks — on-device models, skill-based agent systems, federated sync patterns, semantic merging — assembled in a way that doesn’t exist as a product today. Several hard problems remain.

Merge quality at scale. Semantic merging works well with 10 devices. With 500, the volume of daily learnings could overwhelm even a 40B model’s ability to meaningfully synthesize. Hierarchical sub-teams within departments — team leads running intermediate merges — may be necessary.

Skill drift. If the golden copy evolves continuously, skills from six months ago might be unrecognizable. Version control and the ability to diff skill changes over time are essential. Treating the golden copy as a git repository with commit history is one approach.

Model capability at the edge. A 7B model can follow instructions and log observations, but its reasoning is limited. It might misinterpret a user’s correction or log a false insight. The authority system mitigates this — low-trust contributions get reviewed — but it doesn’t eliminate the risk.

Adoption friction. Employees need to trust that their on-device agent isn’t surveillance. The system must be transparently opt-in for the learning cycle, with clear boundaries between what stays local and what syncs. The privacy filter must be verifiable, not just promised.

Hardware cost. Running a 7B model on every employee’s laptop requires machines with sufficient RAM and ideally a capable GPU. For many knowledge workers with modern laptops, this is already feasible. For organizations with aging hardware fleets, it may require phased rollout.

What Exists Today

The building blocks are real and available now:

  • On-device models in the 7B–14B range run comfortably on Apple Silicon Macs and modern workstations using tools like Ollama, llama.cpp, and LM Studio.
  • Skill-based agent frameworks — notably the AgentSkills open standard developed by Anthropic and adopted by multiple platforms — define exactly how to package instructions as markdown files that agents can discover and follow.
  • Memory architectures with short-term daily logs and long-term curated knowledge are production-tested in platforms like OpenClaw, which uses MEMORY.md for persistent facts and memory/YYYY-MM-DD.md for daily context.
  • Self-improving agent patterns exist in the wild — OpenClaw’s community has published skills that capture corrections and learnings automatically, and the Foundry plugin demonstrates a full observe-learn-write-deploy loop on a single device.
  • Federated learning is a mature field in ML research, with frameworks like NVIDIA FLARE and Flower enabling distributed training across devices.
  • Hierarchical multi-agent architectures — supervisor agents coordinating specialist agents across departments — are in production at companies like BASF (via Databricks) and documented extensively by Microsoft and Salesforce.

What nobody has assembled is the specific combination: on-device small models learning from daily use, syncing through department servers with semantic merging and authority-based trust, rolling up to a company-wide master that discovers cross-departmental patterns — all operating on human-readable, version-controllable, natural-language skills rather than opaque model weights.

The Bet

The bet is simple. Today’s enterprise AI is a library — it holds knowledge and waits for you to ask. The architecture described here is a living organism — it learns from every employee, improves overnight, and wakes up smarter each morning.

Every company already has the knowledge it needs to onboard faster, debug quicker, and operate more efficiently. That knowledge just lives in the wrong places: in people’s heads, in forgotten Slack threads, in tribal rituals passed from senior to junior. An on-device AI agent that captures this knowledge as it’s created — and a federated system that distills it into something the whole organization can benefit from — doesn’t require any breakthrough in AI capability. It requires assembling pieces that already exist into a system that nobody has built yet.

The pieces are on the table. Someone just needs to put them together.


This post explores a conceptual architecture for federated, on-device AI agents in enterprise settings. The building blocks referenced — AgentSkills, OpenClaw, federated learning frameworks — are real, production-available technologies. The specific three-tier system described is a proposed design, not an existing product.

]]>
https://blogs.perficient.com/2026/02/06/the-missing-layer-how-on-device-ai-agents-could-revolutionize-enterprise-learning/feed/ 4 390162
Just what exactly is Visual Builder Studio anyway? https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/ https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/#respond Thu, 29 Jan 2026 15:40:45 +0000 https://blogs.perficient.com/?p=389750

If you’re in the world of Oracle Cloud, you are most likely busy planning your big switch to Redwood. While it’s easy to get excited about a new look and a plethora of AI features, I want to take some time to talk about a tool that’s new (at least to me) that comes along with Redwood. Functional users will come to know VB Studio as the new method for delivering page customizations, but I’ve learned it’s much more.

VB Studio has been around since 2020, but I only started learning about it recently. At its core, VB Studio is Oracle’s extension platform. It gives users a safe way to customize by building around their systems instead of inside them. Since changes to the core code are not allowed, upgrades are much less problematic and time consuming. Let’s look at how users of different expertise might use VB Studio.

Oracle Cloud Application Developers

I wouldn’t call myself a developer, but this is the area I fit into. Moving forward, I will not be using Page Composer or HCM Experience Design Studio…and I’m pretty happy about that. Every client I work with wants customization, so having a one-stop shop with Redwood is a game-changer after years of juggling tools.

Sandboxes are gone. VB Studio uses Git repositories with branches to track and log every change. Branches let multiple people work on different features without conflict, and teams review and merge changes into the main branch in a controlled process.

And what about when these changes are ready for production? By setting up a pipeline from your development environment to your production environment, these changes can be pushed straight into production. This is huge for me! It reduces the time needed to implement new Oracle modules, and it helps when updating existing systems as well. I’ve spent countless hours on video calls instructing system administrators on how to perform requested changes in their production environment because their policy did not allow me to have access. Now, I can make these changes in a development instance and push them to production. The sys admin can then view these changes and approve or reject them for production. Simple!

Low-Code Developers

 

Customizations to existing features are great, but what about building entirely new functionality and embedding it right into your system?  VB Studio simplifies building applications, letting low-code developers move quickly without getting bogged down in traditional coding. With VB Studio’s visual designer, developers can drag and drop components, arrange them the way they want, and preview changes instantly. This is exciting for me because I feel like it is accessible for someone who does very little coding. Of course, for those who need more flexibility, you can still add custom logic using familiar web technologies like JavaScript and HTML (also accessible with the help of AI). Once your app is ready, deployment is easy. This approach means quicker turnaround, less complexity, and applications that fit your business needs perfectly.

 

Experienced Programmers

Okay, now we’re getting way out of my league here, so I’ll be brief. If you really want to get your hands dirty by modifying the code of an application created by others, you can do that. If you prefer building a completely custom application using the web programming language of your choice, you can also do that. Oracle offers users a wide range of tools and stays flexible in how they use them. Organizations need tailored systems, and Oracle keeps evolving to make that possible.

 

https://www.oracle.com/application-development/visual-builder-studio/

Building Custom Search Vertical in SharePoint Online for List Items with Adaptive Cards https://blogs.perficient.com/2026/01/14/build-custom-search-vertical-in-sharepoint-for-list-items-with-adaptive-cards/ https://blogs.perficient.com/2026/01/14/build-custom-search-vertical-in-sharepoint-for-list-items-with-adaptive-cards/#respond Wed, 14 Jan 2026 06:25:15 +0000 https://blogs.perficient.com/?p=389614

This blog explains the process of building a custom search vertical in SharePoint Online that targets a specific list using a dedicated content type. It covers indexing important columns and mapping them to managed properties for search. Next, a result type is configured with Adaptive Cards JSON to display metadata such as title, category, author, and published date in a clear, modern format. We then add a new vertical on the hub site, giving users a focused tab for Article results. The end result is a streamlined search experience that highlights curated content with consistent metadata and an engaging presentation.

To keep the walkthrough concrete, we will start with the assumption that a custom content type is already in place. This content type includes the following columns:

  • Article Category – internal name article_category
  • Article Topic – internal name article_topic

We’ll also assume that a SharePoint list has been created which uses this content type, with the ContentTypeID: 0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B

With the content type and list ready, the next steps focus on configuring search so these items can be surfaced effectively in a dedicated vertical.

Index Columns in the List

Indexing columns optimizes frequently queried metadata, such as category or topic, for faster search. This improves performance and makes it easier to filter and refine results in a custom vertical.

  • Go to List Settings → Indexed Columns.
  • Ensure article_category and article_topic are indexed for faster search queries.

Create Managed Properties

First, check which RefinableString managed properties are available in your environment. After you identify them, configure them as shown below:

Refinable string     Field name        Alias name       Crawled property
RefinableString101   article_topic     ArticleTopic     ows_article_topic
RefinableString102   article_category  ArticleCategory  ows_article_category
RefinableString103   article_link      ArticleLink      ows_article_link

Tip: Creating an alias name for a managed property makes it easier to read and reference. This step is optional — you can also use the default RefinableString name directly.

To configure these fields, follow the steps below:

  • Go to the Microsoft Search Admin Center → Search schema.
  • Go to Search Schema → Crawled Properties
  • Look for the field (e.g., article_topic or article_category) and find its crawled property (it starts with ows_)
  • Click on property → Add mapping
  • A popup will open → select an unused RefinableString property (e.g., RefinableString101, RefinableString102) → click the “Ok” button
  • Click “Save”
  • Likewise, create managed properties for all the required columns.

Once mapped, these managed properties become searchable and refinable, which means they can be used in search queries, filters, result types, and verticals.

Creating a Custom Search Vertical

This lets you add a dedicated tab that filters results to specific content, improving findability and user experience. It ensures users quickly access targeted items like lists, libraries, or content types without sifting through all search results. In this example, we will set the filter for a specific articles list.

Follow the steps given below to create and configure a custom search vertical from the admin center:

  • In “Verticals” tab, add a new value as per following configuration:
    • Name = “Articles”
    • Content source = SharePoint and OneDrive
    • KQL query = the actual filter that restricts results to items from the specific list. In our example, we will set it as: ContentTypeId:0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B*
    • Filters: Filters are an optional setting that allows users to narrow search results based on specific criteria. In our example, we can add a filter by category. To add “Categories” filter on search page, follow below steps:
      • Click on add filter
      • Select “RefinableString102” (This is a refinable string managed property for “article_category” column as setup in above steps)
      • Name = “Category” or other desired string to display on search

Creating a Result Type

Creating a new result type in the Microsoft Search Admin Center lets you define how specific content (like items from a list or a content type) is displayed in search results. In this example, we define matching rules and use an Adaptive Card template to make results clearer and more engaging.

Following are the steps to create a new result type in the admin center.

  • Go to admin center, https://admin.cloud.microsoft
  • Settings → Search & intelligence
  • In “Customizations”, go to “Result types”
  • Add new result types with the following configurations:
    • Name = “ArticlesResults” (Note: Specify any name you want to display in the search vertical)
    • Content source = SharePoint and OneDrive
    • Rules
      • Type of content = SharePoint list item
      • ContentTypeId starts with 0x0101009189AB5D4FBA4A9C9BFD5F3F9F6C3B (Note: Content type Id created in above steps)
      • Layout = Put the JSON string for Adaptive card to display search result. Following is the JSON for displaying the result:
        {
           "type": "AdaptiveCard",
          "version": "1.3",
          "body": [
            {
              "type": "ColumnSet",
              "columns": [
                {
                  "type": "Column",
                  "width": "auto",
                  "items": [
                    {
                    "type": "Image",
                    "url": "<url of image/thumbnail to be displayed for each item>",
                    "altText": "Thumbnail image",
                    "horizontalAlignment": "Center",
                    "size": "Small"
                    }
                  ],
                  "horizontalAlignment": "Center"
                },
                {
                  "type": "Column",
                  "width": 10,
                  "items": [
                    {
                      "type": "TextBlock",
                      "text": "[${ArticleTopic}](${first(split(ArticleLink, ','))})",
                      "weight": "Bolder",
                      "color": "Accent",
                      "size": "Medium",
                      "maxLines": 3
                    },
                    {
                      "type": "TextBlock",
                      "text": "**Category:** ${ArticleCategory}",
                      "spacing": "Small",
                      "maxLines": 3
                    }
                  ],
                  "spacing": "Medium"
                }
              ]
            }
          ],
          "$schema": "http://adaptivecards.io/schemas/adaptive-card.json"
        }
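The `${...}` tokens in this layout are resolved by the Adaptive Cards templating engine against each result’s managed properties, which is why the alias names like ArticleTopic appear here. As a rough illustration of the binding idea, here is a toy substitution that handles only simple `${Name}` tokens, not expressions like first(split(...)):

```python
import re

def bind_template(text: str, data: dict) -> str:
    """Naive stand-in for Adaptive Cards template binding:
    replace ${Name} tokens with values from the search result."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: str(data.get(m.group(1), m.group(0))),
                  text)

# A pretend search result, using the managed-property aliases mapped earlier.
row = {"ArticleTopic": "Custom Search Verticals", "ArticleCategory": "SharePoint"}
rendered = bind_template("**Category:** ${ArticleCategory}", row)
```

Unresolved tokens are left as-is here, which is a simplification; the real templating engine supports a full expression language.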

When everything is set up properly, the Articles vertical displays each result as an adaptive card with a thumbnail, a linked article topic, and its category.

Conclusion

We have now built a dedicated search vertical in SharePoint Online for list items, rendered with Adaptive Cards, and it changes how users experience search. Important metadata becomes clearly visible once you index key columns, map them to managed properties, and design a tailored result type. The Adaptive Card adds a modern presentation layer that is easier to scan and more visually appealing. In the end, publishing the vertical gives users a dedicated tab for curated, focused content, which improves both findability and the overall user experience.

Bruno : The Developer-Friendly Alternative to Postman https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/ https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/#respond Fri, 02 Jan 2026 08:25:16 +0000 https://blogs.perficient.com/?p=389232

If you’re knee-deep in building apps, you already know APIs are the backbone of everything. Testing them? That’s where the real magic happens. For years, we’ve relied on tools like Postman and Insomnia to send requests, debug issues, and keep things running smoothly. But lately, there’s a buzz about something new: Bruno. It’s popping up everywhere, and developers are starting to make the switch. Why? Let’s dive in.

What Exactly is Bruno?

Picture this: an open-source, high-performance API client that puts your privacy first. Bruno isn’t some bloated app that shoves your stuff into the cloud. Instead, it keeps everything right on your local machine. Your API collections, requests, all of it? Safe and sound where you control it, no cloud drama required.

Bruno is built for developers who want:

  • Simplicity without compromise
  • High performance without unnecessary extras
  • Complete freedom with open-source flexibility

It’s like the minimalist toolbox you’ve been waiting for.

Why is Bruno Suddenly Everywhere?

Bruno solves the pain points that frustrate us with other API tools:

  • Privacy First: No forced cloud uploads, your collections stay local. No hidden syncing; your data stays completely under your control.
  • Fast and Lightweight: Loads quickly and handles requests without lag. Perfect for quick tests on the go.
  • Open-Source Freedom: No fees, no lock-in. Collections are Git-friendly and saved as plain text for easy version control.
  • No Extra Bloat: Focused on what matters, API testing without unnecessary features.

Bottom line: Bruno fits the way we work today, collaboratively, securely, and efficiently. It’s not trying to do everything; it’s just good at API testing.

Key Features

Bruno keeps it real with features that matter. Here are the highlights:

  1. Totally Open-Source

  • No sneaky costs or paywalls.
  • Peek under the hood anytime—the code’s all there.
  • A community of developers contributes on GitHub, making it better every day. Wanna join? Hit up their repo and contribute.
  2. Privacy from the Ground Up

  • Everything lives locally.
  • No accounts, no cloud pushes—your requests don’t leave your laptop.
  • Ideal if you’re handling sensitive APIs and don’t want Big Tool Company snooping.
  • Bonus: Those plain-text files integrate well with Git, so team handoffs are seamless.
  3. Light as a Feather, Fast as Lightning

  • Clean UI, no extra bells and whistles slowing you down.
  • Starts up quickly and zips through responses.
  • Great for solo endpoint tweaks or managing large workflows without slowing your machine down.

Getting Bruno Up and Running

Installing Bruno is simple. It works on Windows, macOS, and Linux. Just choose your platform, and you’re good to go.

Quick Install Guide

Windows

  1. Head to Bruno’s GitHub Releases page.
  2. Grab the latest .exe file.
  3. Run it and follow the prompts.
  4. Boom—find it in your Start Menu.

macOS

  1. Download the .dmg from Releases.
  2. Drag it to Applications.
  3. Fire it up and get testing.

Linux

  1. Snag the .AppImage or .deb from Releases.
  2. For AppImage: chmod +x Bruno.AppImage then ./Bruno.AppImage.
  3. For .deb: sudo dpkg -i bruno.deb and sudo apt-get install -f.

GUI or CLI? Your Call

  • GUI: Feels like Postman but cleaner. Visual, easy-to-build requests on the fly.
  • CLI: For the terminal lovers. Automate tests, integrate with CI/CD, or run a collection from its folder: bru run --env dev.

Build Your First Collection in Minutes

Bruno makes organizing APIs feel effortless. Here’s a no-sweat walkthrough.

Step 1: Fire It Up

Launch Bruno. You’ll see a simple welcome screen prompting you to create a new collection.

Step 2: New Collection Time

  1. Hit “New Collection.”
  2. Name it (say, “My API Playground”).
  3. Pick a folder—it’s all plain text, so Git loves it.

Step 3: Add a Request

  1. Inside the collection, click “New Request.”
  2. Pick your method (GET, POST, etc.).
  3. Enter the URL: https://jsonplaceholder.typicode.com/posts.

Step 4: Headers and Body Magic

  • Add the header: Content-Type: application/json.
  • For POSTs, add a body like:

JSON

{
  "title": "Bruno Blog",
  "body": "Testing Bruno API Client",
  "userId": 1
}

Step 5: Hit Send

Click it, and watch the response pop: status, timing, pretty JSON—all right there.

Step 6: Save and Sort

Save the request, create folders for environments or APIs, and use variables to switch setups.
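Because collections are plain text, the saved request is just a .bru file in the collection folder, which is why Git diffs stay readable. A saved version of the POST above looks roughly like this (the exact field layout may vary slightly between Bruno versions):

```text
meta {
  name: Create Post
  type: http
  seq: 1
}

post {
  url: {{baseUrl}}/posts
  body: json
}

headers {
  Content-Type: application/json
}

body:json {
  {
    "title": "Bruno Blog",
    "body": "Testing Bruno API Client",
    "userId": 1
  }
}
```

Commit this file alongside your code and teammates get the request, headers, and body in one review-friendly diff.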

Bruno vs. Postman: Head-to-Head

Postman’s the OG, but Bruno’s the scrappy challenger winning hearts. Let’s compare.

  1. Speed

  • Bruno: Lean and mean—quick loads, low resource usage.
  • Postman: Packed with features, but it can feel sluggish on big projects. Edge: Bruno
  2. Privacy

  • Bruno: Local only, no cloud creep.
  • Postman: Syncs to their servers—handy for teams, sketchy for secrets. Edge: Bruno
  3. Price Tag

  • Bruno: Free forever, open-source vibes.
  • Postman: Free basics, but teams and extras? Pay up. Edge: Bruno

 

Feature Bruno Postman
Open Source ✅ Yes ❌ No
Cloud Sync ❌ No ✅ Yes
Performance ✅ Lightweight ❌ Heavy
Privacy ✅ Local Storage ❌ Cloud-Based
Cost ✅ Free ❌ Paid Plans

Level up With Advanced Tricks

Environment Variables

Swap envs easy-peasy:

  • Make files for dev/staging/prod.
  • Use {{baseUrl}} in requests.
  • Example:
{
  "baseUrl": "https://api.dev.example.com",
  "token": "your-dev-token"
}

 

Scripting Smarts

Add pre/post scripts for:

  • Dynamic auth: request.headers["Authorization"] = "Bearer " + env.token;
  • Response checks or automations.

Community & Contribution

Bruno is community-driven: development happens in the open on GitHub, where bug reports, feature requests, and pull requests are all welcome.

Conclusion

Bruno isn’t just another API testing tool; it’s designed for developers who want simplicity and control. With local-first privacy, fast performance, open-source flexibility, and built-in Git support, Bruno delivers everything you need without unnecessary complexity.
If you’re tired of heavy, cloud-based clients, it’s time to switch. Download Bruno from the official site or its GitHub Releases page and experience the difference.

 

GitLab to GitHub Migration https://blogs.perficient.com/2025/12/29/gitlab-to-github-migration/ https://blogs.perficient.com/2025/12/29/gitlab-to-github-migration/#respond Mon, 29 Dec 2025 07:59:05 +0000 https://blogs.perficient.com/?p=389333

1. Why Modern Teams Choose GitHub

Migrating from GitLab to GitHub represents a strategic shift for many engineering teams. Organizations often move to leverage GitHub’s massive open-source community and superior third-party tool integrations. Moreover, GitHub Actions provides a powerful, modern ecosystem for automating complex developer workflows. Ultimately, this transition simplifies standardization across multiple teams while improving overall project visibility.

2. Prepare Your Migration Strategy

A successful transition requires more than just moving code. You must account for users, CI/CD pipelines, secrets, and governance to avoid data loss. Consequently, a comprehensive plan should cover the following ten phases:

  • Repository and Metadata Transfer

  • User Access Mapping

  • CI/CD Pipeline Conversion

  • Security and Secret Management

  • Validation and Final Cutover

3. Execute the Repository Transfer

The first step involves migrating your source code, including branches, tags, and full commit history.

  • Choose the Right Migration Tool

For straightforward transfers, the GitHub Importer works well. However, if you manage a large organization, the GitHub Enterprise Importer offers better scale. For maximum control, technical teams often prefer the Git CLI.

Command Line Instructions:

git clone --mirror gitlab_repo_url
cd repo.git
git push --mirror github_repo_url
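To sanity-check the mirror workflow before touching production repositories, you can rehearse it entirely with local repositories — the paths below are temporary stand-ins for your real GitLab and GitHub URLs:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the GitLab source: one commit, one tag, one extra branch.
git init -q source
git -C source -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "init"
git -C source tag v1.0
git -C source branch feature

# Stand-in for the empty GitHub target.
git init -q --bare target.git

# The actual migration steps: mirror-clone, then mirror-push.
git clone -q --mirror source source-mirror.git
git --git-dir=source-mirror.git push -q --mirror "$tmp/target.git"

# Every ref (branches and tags) should now exist on the target.
git --git-dir=target.git for-each-ref --format='%(refname:short)'
```

If the final listing shows your default branch, the feature branch, and the tag, the same two commands will carry full history to GitHub.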

Manage Large Files and History:

During this phase, audit your repository for large binary files. Specifically, you should use Git LFS (Large File Storage) for any assets that exceed GitHub’s standard limits.
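One way to run that audit is to walk every object in history and flag oversized blobs. The snippet below builds a throwaway repo with a 1 MiB binary just to demonstrate the pipeline; in practice you would run only the rev-list pipeline inside your real repository, and the 500 KiB threshold is illustrative (GitHub warns at 50 MiB and blocks files over 100 MiB):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"

# A 1 MiB binary stands in for a large asset buried in history.
head -c 1048576 /dev/zero > big.bin
git add big.bin
git -c user.email=ci@example.com -c user.name=ci commit -q -m "add binary"

# List every blob in history larger than 500 KiB, biggest first.
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
  awk '$1 == "blob" && $3 > 500 * 1024 {print $3, $4}' |
  sort -rn
```

Anything this prints is a candidate for Git LFS before you push to GitHub.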

4. Map Users and Recreate Secrets

GitLab and GitHub use distinct identity systems, so you cannot automatically migrate user accounts. Instead, you must map GitLab user emails to GitHub accounts and manually invite them to your new organization.

Secure Your Variables and Secrets:

For security reasons, GitLab prevents the export of secrets. Therefore, you must recreate them in GitHub using the following hierarchy:

  • Repository Secrets: Use these for project-level variables.

  • Organization Secrets: Use these for shared variables across multiple repos.

  • Environment Secrets: Use these to protect variables in specific deployment stages.

5. Migrating Variables and Secrets

Securing your environment requires a clear strategy for moving CI/CD variables and secrets. Specifically, GitLab project variables should move to GitHub Repository Secrets, while group variables should be placed in Organization Secrets. Notably, secrets must be recreated manually or via the GitHub API because they cannot be exported from GitLab for security reasons.

6. Convert GitLab CI to GitHub Actions

Translating your CI/CD pipelines often represents the most challenging part of the migration. While GitLab uses a single .gitlab-ci.yml file, GitHub Actions utilizes separate workflow files in the .github/workflows/ directory.

Syntax and Workflow Changes:

When converting, map your GitLab “stages” into GitHub “jobs”. Moreover, replace custom GitLab scripts with pre-built actions from the GitHub Marketplace to save time. Finally, ensure your new GitHub runners have the same permissions as your old GitLab runners.
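As a sketch of that mapping, two GitLab stages become two ordered jobs in a workflow file — the workflow name, secret name, and make commands below are placeholders, not part of any standard:

```yaml
# .github/workflows/build.yml
name: build-and-test

on:
  push:
    branches: [main]

jobs:
  build:                 # was a GitLab "build" stage
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build

  test:                  # was a GitLab "test" stage
    runs-on: ubuntu-latest
    needs: build         # stages imply ordering; jobs need an explicit "needs"
    steps:
      - uses: actions/checkout@v4
      - name: Test
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}  # recreated repository secret
        run: make test
```

Note the explicit `needs:` line — GitLab stages run in order implicitly, but GitHub jobs run in parallel unless you declare the dependency.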

7. Finalize the Metadata and Cutover

Metadata like Issues, Pull Requests (Merge Requests in GitLab), and Wikis require special handling because Git itself does not track them.

The Pre-Cutover Checklist:

Before the official switch, verify the following:

  1. Freeze all GitLab repositories to stop new pushes.

  2. Perform a final sync of code and metadata.

  3. Update webhooks for tools like Slack, Jira, or Jenkins.

  4. Verify that all CI/CD pipelines run successfully.

8. Post-Migration Best Practices

After completing the cutover, archive your old GitLab repositories to prevent accidental updates. Furthermore, enable GitHub’s built-in security features like Dependabot and Secret Scanning to protect your new environment. Finally, provide training sessions to help your team master the new GitHub-centric workflow.


9. Final Cutover and Post-Migration Best Practices

Ultimately, once all repositories are validated and secrets are verified, you can execute the final cutover. Specifically, you should freeze your GitLab repositories and perform a final sync before switching your DNS and webhooks. Finally, once the move is complete, remember to archive your old GitLab repositories and enable advanced security features like Dependabot and secret scanning.

10. Summary and Final Thoughts

In conclusion, a GitLab to GitHub migration is a significant but rewarding effort. By following a structured plan that includes proper validation and team training, organizations can achieve a smooth transition. Therefore, with the right tooling and preparation, you can successfully improve developer productivity and cross-team collaboration.

Unifying Hybrid and Multi-Cloud Environments with Azure Arc https://blogs.perficient.com/2025/12/22/unifying-hybrid-and-multi-cloud-environments-with-azure-arc/ https://blogs.perficient.com/2025/12/22/unifying-hybrid-and-multi-cloud-environments-with-azure-arc/#respond Mon, 22 Dec 2025 08:06:05 +0000 https://blogs.perficient.com/?p=389202

1. Introduction to Modern Cloud Architecture

In today’s world, architects generally prefer to keep their compute resources—such as virtual machines and Kubernetes servers—spread across multiple clouds and on-premises environments. Specifically, they do this to achieve the best possible resilience through high-availability and disaster recovery. Moreover, this approach allows for better cost efficiency and higher security.

2. The Challenge of Management Complexity

However, this distributed strategy brings additional challenges. Specifically, it increases the complexity of maintaining and managing resources from different consoles, such as Azure, AWS, and Google portals. Consequently, even for basic operations like restarts or updates, administrators often struggle with multiple disparate portals. As a result, basic administration tasks become too complex and cumbersome.

3. How Azure Arc Provides a Solution

Azure Arc solves this problem by providing a single “pane of glass” to manage and monitor servers regardless of their location. In addition, it simplifies governance by delivering a consistent management platform for both multi-cloud and on-premises resources. Specifically, it provides a centralized way to project existing non-Azure resources directly into the Azure Resource Manager (ARM).

4. Understanding Key Capabilities

Currently, Azure Arc allows you to manage several resource types outside of Azure. For instance, it supports servers, Kubernetes clusters, and databases. Furthermore, it offers several specific functionalities:

  • Azure Arc-enabled Servers: Connects physical or virtual Windows and Linux servers to Azure for centralized visibility.

  • Azure Arc-enabled Kubernetes: Additionally, you can onboard any CNCF-conformant Kubernetes cluster to enable GitOps-based management.

  • Azure Arc-enabled SQL Server: This brings external SQL Server instances under Azure governance for advanced security.

5. Architectural Implementation Details

The Azure Arc architecture revolves primarily around the Azure Resource Manager. Specifically, when a resource is onboarded, it receives a unique resource ID and becomes part of Azure’s management plane. To make this possible, each onboarded machine runs a local agent that communicates with Azure to receive policies and upload logs.

6. The Role of the Connected Machine Agent

The agent package contains several logical components bundled together. For instance, the Hybrid Instance Metadata service (HIMDS) manages the connection and the machine’s Azure identity. Moreover, the guest configuration agent assesses whether the machine complies with required policies. In addition, the Extension agent manages VM extensions, including their installation and upgrades.

7. Onboarding and Deployment Methods

Onboarding machines can be accomplished using different methods depending on your scale. For example, you might use interactive scripts for small deployments or service principals for large-scale automation. Specifically, the following options are available:

  • Interactive Deployment: Manually install the agent on a few machines.

  • At-Scale Deployment: Alternatively, connect machines using a service principal.

  • Automated Tooling: Furthermore, you can utilize Group Policy for Windows machines.

8. Strategic Benefits for Governance

Ultimately, Azure Arc provides numerous strategic benefits for modern enterprises. Specifically, organizations can leverage the following:

  • Governance and Compliance: Apply Azure Policy to ensure consistent configurations across all environments.

  • Enhanced Security: Moreover, use Defender for Cloud to detect threats and integrate vulnerability assessments.

  • DevOps Efficiency: Enable GitOps-based deployments for Kubernetes clusters.
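Because Arc-enabled machines are ordinary ARM resources, Azure Policy rules apply to them the same way they apply to native VMs. As an illustration (the tag name is hypothetical), a policy rule like the following audits any connected machine that is missing an environment tag, whether it lives in AWS, GCP, or your own data center:

```json
{
  "mode": "Indexed",
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.HybridCompute/machines" },
        { "field": "tags['environment']", "exists": "false" }
      ]
    },
    "then": { "effect": "audit" }
  }
}
```

The `Microsoft.HybridCompute/machines` resource type is how Arc-enabled servers appear in ARM, so one rule covers every onboarded machine.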

9. Important Limitations to Consider

However, there are a few limitations to keep in mind before starting your deployment. First, continuous internet connectivity is required for full functionality. Secondly, some features may not be available for all operating systems. Finally, there are cost implications based on the data services and monitoring tools used.

10. Conclusion and Summary

In conclusion, Azure Arc empowers organizations to standardize and simplify operations across heterogeneous environments. Whether you are managing legacy infrastructure or edge devices, it brings everything under one governance model. Therefore, if you are looking to improve control and agility, Azure Arc is a tool worth exploring.

How to Secure Applications During Modernization on AWS https://blogs.perficient.com/2025/12/19/how-to-secure-applications-during-modernization-on-aws/ https://blogs.perficient.com/2025/12/19/how-to-secure-applications-during-modernization-on-aws/#respond Fri, 19 Dec 2025 06:40:17 +0000 https://blogs.perficient.com/?p=389050

Why Do We Need to Secure Our Applications?  

Cloud environments are very dynamic and interconnected. A single misconfiguration or exposed API key can lead to:  

  • Data breaches 
  • Compliance violations 
  • Costly downtime 

Attackers often target application-level weaknesses, not just infrastructure gaps. If any application handles sensitive data, financial transactions, or user credentials, security is critical. 

Common Mistakes Made When Building Applications

  • Hardcoding API keys and credentials 
  • Ignoring dependency vulnerabilities 
  • Skipping encryption/decryption for sensitive data 

Essential Security Best Practices

1. Identity and Access Management (IAM)

  • Create dedicated IAM roles for your Lambda functions, EC2 instances, or ECS tasks instead of hardcoding access keys in your application. 
  • We must regularly review who has permissions using the IAM Access Analyzer. 
  • We must avoid using the root account for day-to-day or any developer operations. 
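Every dedicated role starts with a trust policy naming the service that may assume it. The fragment below is the standard trust policy for a Lambda execution role (the permissions themselves are attached separately, scoped to just what the function needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

With this in place, the function receives temporary credentials automatically — no access keys ever appear in application code or configuration.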

[Images: IAM role creation]

2. Don’t Store/Share Secrets in Your Code

Your appsettings.json is not the right place for secrets such as API keys or database passwords. 

  • We must use AWS Secrets Manager or Parameter Store to keep secrets safe. 
  • Fetch keys at runtime by using the AWS SDK for .NET or the AWSSDK.Extensions.NETCore.Setup configuration provider. 

[Images: Creating and reading a secret in AWS Secrets Manager]

3. Always Encrypt Data 

Encrypting sensitive data, both in transit and at rest, is a core best practice: 

  • Enable HTTPS by default for all your endpoints.  
  • Use AWS Certificate Manager (ACM) to issue and manage SSL/TLS certificates. 
  • In your application, make sure that all traffic is redirected to HTTPS by adding app.UseHttpsRedirection(); 
  • Use AWS KMS to encrypt your S3 buckets, RDS databases, and EBS volumes.
  • If you’re using SQL Server on RDS, enable Transparent Data Encryption (TDE). 

 Encrypt & Decrypt API Key with KMS 

[Images: KMS encryption and decryption steps with sample code]

4. Build a Secure Network Foundation

  • Must use VPCs with private subnets for backend services. 
  • Control the traffic with Security Groups and Network ACLs. 
  • Use VPC Endpoints to keep traffic within AWS’s private network  
  • Use AWS WAF to protect your APIs, and enable AWS Shield to guard against DDoS attacks. 

[Images: Security group and VPC creation]

5. Keep Your Code and Dependencies Clean

Even the best infrastructure can’t save a vulnerable codebase. 

  • Update your .NET SDK and NuGet packages regularly. 
  • Use Amazon Inspector for runtime and AWS environment security, and tools like Dependabot for Development-time dependency security to find vulnerabilities early. 
  • Add code review analysis tools (like SonarQube) in your CI/CD pipeline. 

[Image: Amazon Inspector findings]

6. Log Everything and Watch

  • Enable Amazon CloudWatch for centralized logging and use AWS X-Ray to trace requests through the application. 
  • Turn on CloudTrail to track every API call across your account. 
  • Enable GuardDuty for continuous threat detection. 

 

Deploy Microservices On AKS using GitHub Actions https://blogs.perficient.com/2025/12/17/deploy-microservices-on-aks-using-github-actions/ https://blogs.perficient.com/2025/12/17/deploy-microservices-on-aks-using-github-actions/#respond Thu, 18 Dec 2025 05:30:05 +0000 https://blogs.perficient.com/?p=389089

Deploying microservices in a cloud-native environment requires an efficient container orchestration platform and an automated CI/CD pipeline. Azure Kubernetes Service (AKS) is Azure’s managed Kubernetes offering, and GitHub Actions makes it easy to automate your CI/CD processes directly from the source code repository.


Why Use GitHub Actions with AKS

Using GitHub Actions for AKS deployments provides:

  • Automated and consistent deployments
  • Faster release cycles
  • Reduced manual intervention
  • Easy Integration with GitHub repositories
  • Better visibility into build and deployment status

Architecture Overview

The deployment workflow follows a CI/CD approach:

  • Microservices packaged as Docker images
  • Images pushed to ACR
  • AKS pulls the image from ACR
  • GitHub Actions automates:
      • Building and pushing Docker images
      • Deploying manifests to AKS


Prerequisites

Before proceeding with the implementation, ensure the following   prerequisites are in place:

  • Azure Subscriptions
  • Azure CLI (az) installed and authenticated
  • An existing Azure Kubernetes Service (AKS) cluster
  • Kubectl is installed and configured for your cluster
  • Azure Container Registry (ACR) associated with the AKS cluster
  • GitHub repository with microservices code

Repository Structure

Each microservice is maintained in a separate repository with the following structure:  .github/workflows/name.yml

CI/CD Pipeline Stages Overview

  • Source Code Checkout
  • Build Docker Images
  • Push images to ACR
  • Authenticate to AKS
  • Deploy Microservices using kubectl

Configure GitHub Secrets

Go to your GitHub repository > Settings > Secrets and variables > Actions.

Add the following secrets:

  • ACR_LOGIN_SERVER
  • ACR_USERNAME
  • ACR_PASSWORD
  • KUBECONFIG

Stage 1: Source Code Checkout

The Pipeline starts by pulling the latest code from the GitHub repository

Stage 2: Build Docker Images

For each microservice:

  • A Docker image is built
  • A unique tag (commit ID and version) is assigned

Images are prepared for deployment

Stage 3: Push Images to Azure Container Registry

Once the images are built:

  • GitHub Actions authenticates to ACR
  • Images are pushed securely to the registry
  • After the initial setup, AKS pulls the images directly from ACR

Stage 4: Authenticate to AKS

GitHub Actions connects to the AKS cluster using kubeconfig

Stage 5: Deploy Microservices to AKS

In this stage:

  • Kubernetes manifests are applied
  • Services are exposed via the Load Balancer

Deployment Validation

After deployment:

  • Pods are verified to be in a running state
  • Check the service for external access

Best Practices

To make the pipeline production-ready:

  • Use commit-based image tagging
  • Separate environments (dev, stage, prod)
  • Use namespace in AKS
  • Store secrets securely using GitHub Secrets

Common Challenges and Solutions

  • Image pull failures: Verify ACR permission
  • Pipeline authentication errors: Validate Azure credentials
  • Pod crashes: Review container logs and resource limits

Benefits of CI/CD with AKS and GitHub Actions

  • Faster deployments
  • Improved reliability
  • Scalable microservices architecture
  • Better developer productivity
  • Reduced operational overhead

Conclusion

Deploying microservices on AKS using GitHub Actions provides a robust, scalable, and automated CI/CD solution. By integrating container builds, registry management, and Kubernetes deployments into a single pipeline, teams can deliver applications faster and more reliably.

CI/CD is not just about automation – it’s about confidence, consistency, and continuous improvement.

 

Monitoring and Logging in Sitecore AI https://blogs.perficient.com/2025/11/24/monitoring-and-logging-in-sitecore-ai/ https://blogs.perficient.com/2025/11/24/monitoring-and-logging-in-sitecore-ai/#respond Mon, 24 Nov 2025 21:04:34 +0000 https://blogs.perficient.com/?p=388586

Why Observability Matters More Than Ever

Moving from traditional Sitecore deployments to Sitecore AI means the infrastructure is abstracted away. That’s fantastic for agility, but it also changes how we troubleshoot. You can’t RDP onto a server and tail a file anymore; your lifeline is observability: clear signals from logs, metrics, and governed automation that tell you what’s happening across the platform and the front‑end.

What’s Different in Sitecore AI?

Logs and diagnostics are centralized. You access them via the Sitecore AI portal and the Sitecore CLI. They’re organized by environment and by role. Your front‑end application or rendering host (often a Next.js site deployed on Vercel, responsible for headless rendering and user experience) has its own telemetry separate from the CMS.

So, your monitoring picture spans three surfaces: Sitecore AI logs for CMS and deployment activity, rendering host telemetry for front‑end performance, and Experience Edge signals for content delivery. Together, they describe the health of the experience, not just the servers.

 

Understanding the Logging Surfaces

In Sitecore AI, logs are grouped into three primary areas that each play a distinct role in diagnosing issues:

Content Management (CM) logs

  • These are your first stop for diagnosing publishing failures, broken workflows, template errors, and serialization mismatches. When a publish fails, CM logs help you separate permissions or workflow problems from data or serialization issues.

Rendering Host logs

  • Think front‑end behavior and performance. If personalization falls back, pages render slowly, or API responses seem sluggish, the rendering host logs surface cache misses, API latency, and rendering errors that directly impact Core Web Vitals and UX.

Deployment logs

  • The “narrative” of your CI/CD run. When a build fails or a promotion doesn’t complete, deployment logs pinpoint CLI command failures, artifact mismatches, or environment configuration issues. They also provide stage-by-stage visibility (provisioning, build, deploy, post‑actions), which speeds triage and supports audits.

Access these logs quickly in the Deploy app’s environment view or programmatically via the Sitecore CLI for listing, viewing, and downloading logs as part of your pipeline artifacts.

Integration Patterns for Enterprise Monitoring

Centralizing is helpful; correlating is essential. The pragmatic pattern I recommend is:

Sitecore AI → Azure Monitor/Application Insights

  • Forward CMS and deployment logs so you can correlate spikes in errors with deployments, content bursts, or traffic changes. KQL lets you slice by environment, role, and severity for root cause analysis.
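As a sketch of that slicing, a query along these lines buckets errors by role so spikes can be lined up against deployment times (the traces table and columns follow the standard Application Insights schema; the time windows are arbitrary):

```kusto
traces
| where timestamp > ago(24h)
| where severityLevel >= 3          // warnings and errors only
| summarize errors = count() by bin(timestamp, 15m), cloud_RoleName
| order by timestamp asc
```

Joining the spike times against your deployment log timestamps is usually enough to confirm or rule out a release as the root cause.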

Rendering Host → APM (Datadog/New Relic)

  • Use front‑end analytics to track TTFB, cache hit ratio, route errors, and API dependency health. Pair this with Vercel’s own analytics for global edge performance.

Experience Edge → Webhook Monitoring

  • Register webhooks so you can track publish‑to‑Edge latency and trigger alerts or redeploys when content propagation slows or fails.

SIEM Integration (today’s reality)

  • For unified audit across Sitecore SaaS, stream supported Common Audit Logs (CAL) via webhooks (Personalize/CDP/Connect) and, for Sitecore AI, pull environment and deployment logs via CLI on a schedule until broader CAL coverage lands.

Metrics That Matter

In a SaaS world, traditional “server up” checks don’t describe user experience. Focus on metrics that map directly to reliability and business impact:

Deployment success & promotion health

  • Failed builds or promotions block content and features. Tracking rates and mean time to recovery reveals pipeline reliability.

Publish‑to‑Edge latency

  • Authors expect content to reach Experience Edge quickly. Latency here affects real‑time campaigns, previews, and editorial confidence.

Rendering host performance

  • P95/P99 TTFB, cache hit ratio, and error rates impact Core Web Vitals, SEO, and conversion. They also help you spot regressions after releases.

Agent activity & governance

  • With Sitecore AI’s agentic capabilities, monitoring agent runs, approvals, and failures protects compliance and prevents unintended bulk changes.

Governance Signals in Sitecore AI

Sitecore AI introduces Agentic Studio: a governed workspace to design, run, and oversee automation. Work is organized around four building blocks: Agents, Flows, Spaces, and Signals. Practically, that means you can automate complex operations while maintaining human review and auditability.

  • Agents: Handle focused tasks (e.g., content migration, metadata updates).
  • Flows: Orchestrate agents into multi‑step workflows with visibility across stages.
  • Spaces: Provide shared context for teams to collaborate on active runs.

Signals surface trends and triggers that can start or adjust flows. Together, these give marketers and developers a safe frame to scale automation without losing control.

How Agent Flows Are Monitored

Monitoring agent flows blends product‑level visibility with enterprise analytics:

Run visibility in Agentic Studio:

  • Each flow run exposes status, participants (human and agent), timestamps, and outcomes. Because flows are orchestrated in a governed workspace, you get “full visibility” into progression from brief to publish/optimization, including approvals where human review is required.

Governance signals and audit trails:

  • Signals can trigger flows and also act as governance inputs (for example, trend alerts requiring approval). Capture audit trails of who initiated a run, which agents executed steps, and what content or configurations changed.

Alerting and dashboards:

  • Mirror key flow events into your monitoring plane: start, paused awaiting approval, failed step, completed. Route these into Azure Monitor or your SIEM so operations sees agentic activity alongside deployments and content events.

Integration approach:

  • Where Common Audit Logs (CAL) are available (Personalize/CDP/Connect), stream events via webhooks. For Sitecore AI and Agentic activity not yet covered by CAL, use scheduled CLI log exports and APIs the platform exposes to assemble a unified view. Normalize event schemas (runId, agentId, flowId, environment, severity) to enable cross‑product correlation.
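A normalized event envelope might look like the following — every field name and value here is a hypothetical example of the schema your integration layer would impose, not a Sitecore-defined format:

```json
{
  "runId": "9f2c4e10-77ab-4d2a-b6c1-0e8d5a3f9b21",
  "flowId": "seasonal-content-refresh",
  "agentId": "metadata-updater",
  "environment": "prod",
  "severity": "info",
  "event": "flow.step.completed",
  "initiatedBy": "jane.doe@example.com",
  "timestamp": "2026-02-20T12:00:00Z"
}
```

With a shared envelope like this, one dashboard can correlate an agent run, a deployment, and a content event by timestamp and environment.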

The outcome: agent automation becomes observable. Teams can answer “what changed, when, by whom, and why” and tie those answers to performance and compliance dashboards.

Final Thoughts

Observability in Sitecore AI isn’t about servers; it’s about experience health and trusted automation. When you combine SaaS‑native logs, front‑end telemetry, Edge events, and agentic governance signals, you gain a single narrative across deployments, content, and automation, the narrative you need to keep teams fast, safe, and accountable.

A Tool For CDOs to Keep Their Cloud Secure: AWS GuardDuty Is the Saw and Perficient Is the Craftsman https://blogs.perficient.com/2025/11/18/a-tool-for-cdos-to-keep-their-cloud-secure-aws-guardduty-is-the-saw-and-perficient-is-the-craftsman/ https://blogs.perficient.com/2025/11/18/a-tool-for-cdos-to-keep-their-cloud-secure-aws-guardduty-is-the-saw-and-perficient-is-the-craftsman/#respond Tue, 18 Nov 2025 13:20:08 +0000 https://blogs.perficient.com/?p=388374

In the rapidly expanding realm of cloud computing, Amazon Web Services (AWS) provides the infrastructure for countless businesses to operate and innovate. But with an ever-increasing amount of data, applications, and workloads on the cloud, protecting them poses significant security challenges. As a firm’s assets migrate to the cloud, defending them from both sophisticated threats and brute-force digital attacks is of paramount importance. This is where Amazon GuardDuty enters as a powerful, vigilant sentinel.

What is Amazon GuardDuty?

At its core, Amazon GuardDuty is a continuous security monitoring service designed to protect your AWS accounts and workloads. The software serves as a 24/7 security guard for your entire AWS environment, not just individual applications, and is constantly scanning for malicious activity and unauthorized behavior.

The software works by analyzing a wide variety of data sources within your firm’s AWS account—including AWS CloudTrail event logs, VPC flow logs, and DNS query logs—using machine learning, threat intelligence feeds, and anomaly detection techniques.

If an external party tries a brute-force login, a compromised instance communicates with a known malicious IP address, or an unusual API call is made, GuardDuty is there to spot it. When a threat is found, it can be configured to trigger automated actions through services like Amazon CloudWatch Events and AWS Lambda, as well as alert human administrators to take action.

When a threat is detected, GuardDuty generates a finding with a severity level (high, medium, or low) and a score. The severity and score both help minimize time spent on more routine exceptions while highlighting significant events to your data security team.
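One common wiring is an Amazon EventBridge rule that matches only higher-severity findings and routes them to a Lambda function or ticketing queue. The event pattern below uses GuardDuty’s real event source and detail-type; the severity threshold of 7 (the start of GuardDuty’s “high” band) is an assumption you should tune to your own triage process:

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "severity": [{ "numeric": [">=", 7] }]
  }
}
```

Lower-severity findings still land in the console and Security Hub for review, but only the high-severity ones page your responders.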

Why is GuardDuty So Important?

In today’s digital landscape, relying solely on traditional, static security measures is not sufficient. Cybercriminals are constantly evolving their tactics, which is why GuardDuty is an essential component of your AWS security strategy:

  1. Proactive, Intelligent Threat Detection

GuardDuty moves beyond simple rule-based systems. Its use of machine learning allows it to detect anomalies that human security administrators might miss, identifying zero-day threats and subtle changes in behavior that indicate a compromise. It continuously learns and adapts to new threats without requiring manual updates from human security administrators.

  2. Near Real-Time Monitoring and Alerting

Speed is critical in incident response. GuardDuty provides findings in near real-time, delivering detailed security alerts directly to the AWS Management Console, Amazon EventBridge, and Amazon Security Hub. This immediate notification allows your firm’s security teams to investigate and remediate potential issues quickly, minimizing potential damage and alerting your firm’s management.

  3. Broad Protection Across AWS Services

GuardDuty doesn’t just watch over your firm’s Elastic Compute Cloud (“EC2”) instances. GuardDuty also protects a wide array of AWS services, including:

  • Simple Storage Service (“S3”) Buckets: Detecting potential data exfiltration or policy changes that expose sensitive data.
  • EKS/Kubernetes: Monitoring for threats to your container workloads.  No more running malware or mining bitcoin in your firm’s containers.
  • Databases (Aurora; RDS – MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server; and Redshift): Identifying potential compromise or unauthorized access to data.

Conclusion:

In the cloud, security is a shared responsibility. While AWS manages the security of the cloud infrastructure itself, you are responsible for security in the cloud—protecting your data, accounts, and workloads. Amazon GuardDuty is an indispensable tool in fulfilling that responsibility. It provides an automated, intelligent, and scalable layer of defense that empowers you to stay ahead of malicious actors.

To get started with Amazon GuardDuty, consider contacting Perficient to help enable and configure the service and train your staff. Perficient is an AWS partner and has achieved Premier Tier Services Partner status, the highest tier in the Amazon Web Services (AWS) Partner Network. This elevated status reflects Perficient’s expertise, long-term investment, and commitment to delivering customer solutions on AWS.

Besides the firm’s Partner Status, Perficient has demonstrated significant expertise in areas like cloud migration, modernization, and AI-driven solutions, with a large team of AWS-certified professionals.

In addition to these competencies, Perficient has been designated for specific service deliveries, such as AWS Glue Service Delivery, and also has available Amazon-approved software in the AWS Marketplace.

Our financial services experts continuously monitor the financial services landscape and deliver pragmatic, scalable solutions that meet the required mandate and more. Reach out to Perficient’s Director and Head of Payments Practice Amanda Estiverne-Colas to discover why Perficient has been trusted by 18 of the top 20 banks, 16 of the 20 largest wealth and asset management firms, and 25+ leading payment + card processing companies.

 
