What is the Model Context Protocol?
By Ravind Budhiraja | Perficient Blogs | 18 Jul 2025

You can think of the Model Context Protocol (MCP) as USB for large language models (LLMs), allowing an LLM to interact with a variety of external systems in a uniform manner. It was developed by Anthropic to solve the fundamental problem of getting a text generator (LLM) to perform real-world actions on a user’s behalf.

Solving the M x N Problem

With a growing number of powerful LLMs (M) and a vast universe of applications and APIs (N) for them to interact with, one would need to develop M x N custom connections to link every LLM with every application.

By offering a standardized solution, MCP converts the M x N headache into a much more manageable M + N problem. As long as each LLM can use an MCP client, it can communicate with every application for which an MCP server has been created.

The Architecture

[Diagram: the relationship between the host application, MCP Client, MCP Server, and external APIs.]

At its heart, MCP operates on a client-server architectural model. The main components connecting an LLM to an external application are:

  • MCP Host – The application that communicates with the LLM, such as an AI chat interface (LM Studio) or a coding assistant (GitHub Copilot).
  • MCP Client – A component within the host application that communicates with an MCP server. There's a one-to-one relationship between a client and a server, so a host can run multiple clients to access multiple services (e.g. file system, email, and database).
  • MCP Servers – The bridge that translates client requests into actions on the external application. An MCP server exposes a set of “primitives” that categorize the different types of interactions an LLM can have with an external resource.

The Primitives

Exposing server functionality through the standard primitives allows for dynamic interaction. The MCP client is able to query a server to discover the capabilities offered, and then pass this information on to the LLM so it can make requests accordingly.

  • Resources – Represent sources of data or content that the LLM can access. Examples include files, database entries, or the content of a webpage. Accessing a resource is a read-only operation.
  • Tools – Executable functions that allow an LLM to take action in the external environment. Examples include sending an email, creating a calendar event, or updating a database record.
  • Prompts – Pre-defined reusable templates or instructions that can guide the LLM in its interaction with the tools and resources.
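To make the three primitives concrete, here is a minimal sketch of the descriptors a server might expose for each kind. Only the resource/tool/prompt split comes from MCP itself; the specific entries (`send_email`, `status_update`, the contacts URI) are invented for illustration.

```python
# Hypothetical capability descriptors for one primitive of each type.
# The resource is read-only data; the tool is an executable action
# described by a JSON Schema; the prompt is a reusable template.
SERVER_CAPABILITIES = {
    "resources": [
        {
            "uri": "contacts://jane-doe",       # invented URI scheme
            "name": "Jane Doe contact card",
            "mimeType": "application/json",
        }
    ],
    "tools": [
        {
            "name": "send_email",
            "description": "Send an email to a single recipient.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "recipient": {"type": "string"},
                    "subject": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["recipient", "subject", "body"],
            },
        }
    ],
    "prompts": [
        {
            "name": "status_update",
            "description": "Draft a short, friendly status-update email.",
        }
    ],
}

def list_primitives(kind: str) -> list[str]:
    """Return the names of the entries registered under one primitive type."""
    return [entry.get("name", entry.get("uri", "")) for entry in SERVER_CAPABILITIES[kind]]
```

A client can iterate over these descriptors to tell the LLM exactly what the server offers, which is what makes the discovery step below possible.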

The Interactions

[Diagram: message flow allowing a user to control external systems via an LLM, using MCP.]

The above diagram outlines the high-level flow that allows a user to ask an LLM to perform an action, in this case sending an email. Let's walk through the steps:

Step 0: Tool Discovery (Occurs in the Background)

Before any user request, the MCP Client communicates with the MCP Server to learn what tools it offers. The server might respond, “I have a tool named send_email that requires recipient_email, subject, and body”. The MCP Client passes this information to the LLM’s context, making it aware of its email-sending capability.
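MCP messages use JSON-RPC 2.0 framing, and tool discovery is done with a `tools/list` request (the method name here follows the MCP spec as I understand it; treat the exact payloads as a sketch rather than wire-accurate messages):

```python
import json

def make_list_tools_request(request_id: int) -> str:
    """Client -> server: ask which tools the server offers."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id, "method": "tools/list"})

def make_list_tools_response(request_id: int) -> str:
    """Server -> client: advertise send_email and the arguments it needs."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {
            "tools": [{
                "name": "send_email",
                "description": "Send an email on the user's behalf.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "recipient": {"type": "string"},
                        "subject": {"type": "string"},
                        "body": {"type": "string"},
                    },
                    "required": ["recipient", "subject", "body"],
                },
            }]
        },
    })
```

The host then serializes the advertised tool list into the LLM's context, which is how the model "knows" it can send email at all.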

Step 1: User Initiates a Request

The user tells the LLM, "Hey, can you send an email to Jane Doe and let her know the project proposal has been uploaded to the shared drive?" The LLM reasons about the user's intent, decides which tool to use, and, seeing that the tool requires a "subject", might ask the user for more information.

Step 2: LLM Proposes Tool Call

With all necessary arguments, the LLM generates the JSON required for a structured tool call:

{
  "tool_calls": [
    {
      "name": "send_email",
      "arguments": {
        "recipient": "Jane.Doe@email.com",
        "subject": "Project Proposal Uploaded",
        "body": "Hi Jane, Just letting you know that..."
      }
    }
  ]
}

Step 3: MCP Client Makes JSON-RPC Request

Having detected the structured tool call in the LLM’s output, the host application passes this to the MCP Client, which then:

  • Parses the LLM’s output to extract the structured content.
  • Validates the extracted data against the tool’s schema.
  • Attaches any necessary security/context information (e.g. auth tokens).
  • Generates the final JSON-RPC payload and handles network communication with the MCP Server.
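Those client responsibilities can be sketched in a few lines. This assumes MCP's `tools/call` method name for tool invocation; the validation here is a simplified required-fields check rather than full JSON Schema validation.

```python
import json

# The advertised schema for send_email, reduced to what this sketch checks.
SEND_EMAIL_SCHEMA = {"required": ["recipient", "subject", "body"]}

def validate_arguments(arguments: dict, schema: dict) -> None:
    """Reject the call if any required field is missing."""
    missing = [field for field in schema["required"] if field not in arguments]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")

def build_tools_call(request_id: int, name: str, arguments: dict) -> str:
    """Validate the LLM's arguments, then produce the JSON-RPC payload
    the client sends over the wire to the MCP server."""
    validate_arguments(arguments, SEND_EMAIL_SCHEMA)
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })
```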

Step 4: MCP Server Executes Action

The MCP server validates the received request, translates it into the specific API call required by a real email service, and executes the action. The email service’s response is then translated into a standardized MCP response and sent back to the MCP Client.
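On the server side, this step amounts to dispatching the named tool to a handler that wraps the real service. In this sketch, `fake_email_service` stands in for an actual SMTP or REST client (which is outside the protocol), and the result shape follows MCP's convention of a `content` list of typed items:

```python
def fake_email_service(recipient: str, subject: str, body: str) -> str:
    """Placeholder for a real email API call; returns a message id."""
    return f"msg-0001:{recipient}"

# Route each advertised tool name to the code that performs the action.
TOOL_HANDLERS = {
    "send_email": lambda args: fake_email_service(
        args["recipient"], args["subject"], args["body"]
    ),
}

def execute_tool_call(params: dict) -> dict:
    """Run the named tool and wrap the outcome as a standardized MCP-style result."""
    handler = TOOL_HANDLERS.get(params["name"])
    if handler is None:
        return {"isError": True, "content": [{"type": "text", "text": "unknown tool"}]}
    outcome = handler(params["arguments"])
    return {"content": [{"type": "text", "text": f"sent: {outcome}"}]}
```

Because the result is translated back into this uniform shape, the client never needs to know anything about the underlying email service's API.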

Step 5: LLM Conveys Status to the User

The MCP client receives the result and passes it to the host application. The host injects this result into the LLM’s context, typically as a new message with a special “tool” role. Finally, the LLM generates a natural language response to inform the user, “The email has been sent to Jane Doe.”

Summary

The Model Context Protocol provides a simple yet elegant means of connecting an LLM to an unlimited number of external systems, converting LLMs from useful but limited chatbots into truly powerful agents that can act on our behalf. With adoption by major players in the AI field, such as Anthropic, OpenAI, Google, and Microsoft, it is fast becoming the de facto standard that all agents will converge upon.

What are some of the most useful MCP servers that you have used?

Three Options for Automated Lighthouse Testing
30 Jun 2022

Benefits of Automation

If you’re reading this, you’re probably already aware of the importance of front-end page performance to the end-user experience. You might even be working on improving the performance of your pages and using Lighthouse to track your progress.

While it is quite easy to run tests via the Lighthouse tab under Chrome’s developer tools, there are a few important points to keep in mind:

  • Installed browser extensions can affect the score, so we need to remember to always run Lighthouse in an Incognito tab.
  • Anything rendered on the page, such as cookie notices, user selections, or notifications, can alter the final page score, so it is important to ensure identical page state across different runs to make accurate comparisons.
  • Finally, even after controlling for page state, you may find the score fluctuates significantly between runs, in which case it can be useful to average the score across several runs rather than rely on a single sample.

Automation can help address all the above concerns by ensuring repeatability and allowing you to quickly run multiple tests. We can also use scripts to calculate the average score from multiple reports.
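As an example of such a script, the following averages the performance score across a directory of Lighthouse JSON reports. It assumes the standard report layout, where each category's score is a 0-1 value under `categories.<name>.score`:

```python
import json
from pathlib import Path

def average_score(report_dir: str, category: str = "performance") -> float:
    """Average one Lighthouse category score across all *.json reports
    in a directory, returned on the familiar 0-100 scale."""
    scores = []
    for report_path in Path(report_dir).glob("*.json"):
        report = json.loads(report_path.read_text())
        scores.append(report["categories"][category]["score"])
    if not scores:
        raise ValueError(f"no reports found in {report_dir}")
    return 100 * sum(scores) / len(scores)
```

Point it at the output directory used by any of the approaches below (e.g. `average_score("./reports")`) after running the test several times.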

Finally, you can use automation to create a dashboard that tracks the score of specific pages over time, to ensure your website is moving in the right direction. This will also allow you to catch any changes that inadvertently affect page performance.

Automation Options

I experimented with three different approaches for automating Lighthouse testing. Outlined below is a high-level overview of each, along with some of the main pros and cons.

Sitespeed.io

Sitespeed.io is an open-source tool for both testing and monitoring your website. You can install it either as a Docker container or as an npm module. It is very powerful, with the ability to test page performance in both Chrome and Firefox. The generated reports not only detail page metrics but also include a video recording of the browser session.

Going over all the features of Sitespeed.io could fill an entire series of blog posts, but for our purposes I will just mention the Lighthouse plugin, which adds Lighthouse to the suite of tests that are run.

Pros

  • Powerful tool for everything web performance related
  • Easy to set up a performance dashboard via the provided Docker Compose file
  • Excellent documentation
  • Custom page interactions / browser automation possible via Selenium

Cons

  • May be overkill for a developer looking for quick Lighthouse test results
  • An open bug with the Lighthouse plugin at the time of my testing prevented me from using it to generate Lighthouse scores

Using a Docker Container

For this option, we use an existing Docker container, which can be fetched via: docker pull femtopixel/google-lighthouse

Tests are then run via the command line. The following example will test Google’s homepage and write the test results to a json file:

docker run --rm --name lighthouse -it -v "/path/to/your/report:/home/chrome/reports" femtopixel/google-lighthouse http://www.google.com --output=json --chrome-flags="--ignore-certificate-errors"

Additional documentation is available on the GitHub page.

Pros

  • Nothing to install (other than Docker)
  • Fastest to set up
  • Can script with PowerShell

Cons

  • No support for custom page interactions / browser automation
  • Docker networking errors reported by some users

Using Node.js

As per Google’s own documentation, the “Node CLI provides the most flexibility in how Lighthouse runs can be configured and reported”. Lighthouse can be installed as a global npm module via: npm install -g lighthouse

Tests can then be run via the command line. The syntax is very similar to that for the Docker container above. The following command will run an equivalent test:

lighthouse https://www.google.com --output=json --output-path=./test.json --chrome-flags="--ignore-certificate-errors"

The chief advantage of this approach, over the Docker Container, is the ability to use Puppeteer for page interactions like authentication.

Pros

  • Approach recommended by Google
  • Custom page interactions / browser automation possible via Puppeteer
  • Can script with JavaScript

Cons

  • Only compatible with the latest versions of Node.js, which may require you to switch Node versions between running Lighthouse and doing site development

In Conclusion

No matter which approach you choose, it is important to incorporate Lighthouse testing into your development process and to track your site's performance trends over time.
Hopefully one of the above approaches meets your needs. We should also be able to combine the best of the Docker and Node.js approaches by creating a container that includes both Lighthouse and Puppeteer. Please let me know if I have overlooked an existing container that does this.
