Quality Assurance Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/platforms-and-technology/quality-assurance/

Bruno: The Developer-Friendly Alternative to Postman
https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/
Fri, 02 Jan 2026

If you’re knee-deep in building apps, you already know APIs are the backbone of everything. Testing them? That’s where the real magic happens. For years, we’ve relied on tools like Postman and Insomnia to send requests, debug issues, and keep things running smoothly. But lately, there’s a buzz about something new: Bruno. It’s popping up everywhere, and developers are starting to make the switch. Why? Let’s dive in.

What Exactly is Bruno?

Picture this: an open-source, high-performance API client that puts your privacy first. Bruno isn’t some bloated app that shoves your stuff into the cloud. Nope: it keeps everything right on your local machine. Your API collections, requests, all of it? Safe and sound where you control it, no cloud drama required.

Bruno is built for developers who want:

  • Simplicity without compromise
  • High performance without unnecessary extras
  • Complete freedom with open-source flexibility

It’s like the minimalist toolbox you’ve been waiting for.

Why is Bruno Suddenly Everywhere?

Bruno solves the pain points that frustrate us with other API tools:

  • Privacy First: No forced cloud uploads, your collections stay local. No hidden syncing; your data stays completely under your control.
  • Fast and Lightweight: Loads quickly and handles requests without lag. Perfect for quick tests on the go.
  • Open-Source Freedom: No fees, no lock-in. Collections are Git-friendly and saved as plain text for easy version control.
  • No Extra Bloat: Focused on what matters: API testing without unnecessary features.

Bottom line: Bruno fits the way we work today, collaboratively, securely, and efficiently. It’s not trying to do everything; it’s just good at API testing.

Key Features

Bruno keeps it real with features that matter. Here are the highlights:

  1. Totally Open-Source

  • No sneaky costs or paywalls.
  • Peek under the hood anytime—the code’s all there.
  • A growing community of developers contributes on GitHub, making it better every day. Wanna join? Hit up their repo and contribute.

  2. Privacy from the Ground Up

  • Everything lives locally.
  • No accounts, no cloud pushes—your requests don’t leave your laptop.
  • Ideal if you’re handling sensitive APIs and don’t want Big Tool Company snooping.
  • Bonus: Those plain-text files integrate well with Git, so team handoffs are seamless.

  3. Light as a Feather, Fast as Lightning

  • Clean UI, no extra bells and whistles slowing you down.
  • Starts up quickly and zips through responses.
  • Great for solo endpoint tweaks or managing large workflows without your machine slowing down.

Getting Bruno Up and Running

Installing Bruno is simple. It works on Windows, macOS, and Linux. Just choose your platform, and you’re good to go.

Quick Install Guide

Windows

  1. Head to Bruno’s GitHub Releases page.
  2. Grab the latest .exe file.
  3. Run it and follow the prompts.
  4. Boom—find it in your Start Menu.

macOS

  1. Download the .dmg from Releases.
  2. Drag it to Applications.
  3. Fire it up and get testing.

Linux

  1. Snag the .AppImage or .deb from Releases.
  2. For AppImage: chmod +x Bruno.AppImage then ./Bruno.AppImage.
  3. For .deb: sudo dpkg -i bruno.deb and sudo apt-get install -f.

GUI or CLI? Your Call

  • GUI: Feels like Postman but cleaner. Visual, easy-to-build requests on the fly.
  • CLI: For the terminal lovers. Automate tests, integrate with CI/CD, or run collections: bru run collection.bru --env dev (see the example below).
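
For example, a CI job could install the CLI and run a whole collection against the dev environment. A quick sketch (package name and flags per Bruno’s CLI docs; verify them for your version):

npm install -g @usebruno/cli
bru run my-collection --env dev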

Build Your First Collection in Minutes

Bruno makes organizing APIs feel effortless. Here’s a no-sweat walkthrough.

Step 1: Fire It Up

Launch Bruno. You’ll see a simple welcome screen prompting you to create a new collection.

Step 2: New Collection Time

  1. Hit “New Collection.”
  2. Name it (say, “My API Playground”).
  3. Pick a folder—it’s all plain text, so Git loves it.

Step 3: Add a Request

  1. Inside the collection, click “New Request.”
  2. Pick your method (GET, POST, etc.).
  3. Enter the URL: https://jsonplaceholder.typicode.com/posts.

Step 4: Headers and Body Magic

  • Add the header: Content-Type: application/json.
  • For POSTs, add a body like:

{
  "title": "Bruno Blog",
  "body": "Testing Bruno API Client",
  "userId": 1
}

Step 5: Hit Send

Click it, and watch the response pop: status, timing, pretty JSON—all right there.

Step 6: Save and Sort

Save the request, create folders for environments or APIs, and use variables to switch setups.

Bruno vs. Postman: Head-to-Head

Postman’s the OG, but Bruno’s the scrappy challenger winning hearts. Let’s compare.

  1. Speed

  • Bruno: Lean and mean—quick loads, light on resources.
  • Postman: Packed with features, but it can feel sluggish on big projects. Edge: Bruno

  2. Privacy

  • Bruno: Local only, no cloud creep.
  • Postman: Syncs to their servers—handy for teams, sketchy for secrets. Edge: Bruno

  3. Price Tag

  • Bruno: Free forever, open-source vibes.
  • Postman: Free basics, but teams and extras? Pay up. Edge: Bruno

 

Feature        Bruno             Postman
Open Source    ✅ Yes            ❌ No
Cloud Sync     ❌ No             ✅ Yes
Performance    ✅ Lightweight    ❌ Heavy
Privacy        ✅ Local Storage  ❌ Cloud-Based
Cost           ✅ Free           ❌ Paid Plans

Level Up with Advanced Tricks

Environment Variables

Swap envs easy-peasy:

  • Make files for dev/staging/prod.
  • Use {{baseUrl}} in requests.
  • Example:

{
  "baseUrl": "https://api.dev.example.com",
  "token": "your-dev-token"
}

 

Scripting Smarts

Add pre/post scripts for:

  • Dynamic auth: request.headers["Authorization"] = "Bearer " + env.token;
  • Response checks or automations (see the sketch below).
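
As a rough sketch, assuming Bruno exposes request, response, and env objects to scripts as the snippet above implies (check Bruno’s scripting docs for the exact names), a pre/post script pair might look like:

// Pre-request: attach a dynamic bearer token ('request' and 'env' are assumed
// to be injected by Bruno's script runtime)
request.headers["Authorization"] = "Bearer " + env.token;

// Post-response: fail fast on unexpected status codes ('response' assumed likewise)
if (response.status >= 400) {
  throw new Error("Request failed with status " + response.status);
}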

Community & Contribution

Bruno is community-driven: development happens in the open on GitHub, where you can report issues, propose features, and submit pull requests.

Conclusion

Bruno isn’t just another API testing tool; it’s designed for developers who want simplicity and control. With local-first privacy, fast performance, open-source flexibility, and built-in Git support, Bruno delivers everything you need without unnecessary complexity.
If you’re tired of heavy, cloud-based clients, it’s time to switch. Download Bruno from the official releases page and experience the difference.

 

Datadog Synthetic Monitoring Integration with Azure DevOps Pipeline for Sitecore
https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/
Thu, 23 Oct 2025

Datadog Synthetic Monitoring provides automated, simulated user journeys to proactively confirm the health and performance of websites and APIs, helping detect issues before users experience them. Integrating this into our Azure DevOps pipeline ensures that only builds where core site functionality is verified get promoted, reducing the risk of regressions in production. This approach is especially valuable in Sitecore projects, where critical web journeys and API endpoints are essential to user experience.

Why Use This Approach?

  • Immediate feedback: Failing releases are blocked before merging, saving post-release firefighting.
  • Coverage: Synthetic tests simulate real browser actions and API calls over real user flows.
  • Reliability: Automated testing delivers consistent, repeatable validation without manual steps.
  • Visibility: Results are unified within both Datadog and Azure DevOps for full traceability.
  • Scalability: As Sitecore projects grow, synthetic tests can be expanded to cover new endpoints and user scenarios without significant pipeline changes.
  • Environment parity: Tests can be run against staging, UAT, or pre-production environments before live rollout for safer releases.

Prerequisites

  • Active Datadog account with Synthetic Monitoring enabled.
  • Datadog API and Application keys created with the appropriate access scope.
  • Azure DevOps project with a working YAML-based CI/CD pipeline.
  • Secure variable storage in Azure DevOps (e.g., Variable Groups, Secret Variables) for credentials.
  • Stable and accessible endpoint URLs for Sitecore environment(s) under test.

High-Level Integration Process

1. Datadog Synthetic Test Creation

  • Create Browser and/or HTTP Synthetic Tests in Datadog tailored for key Sitecore application flows, such as:
    • Homepage load and rendering
    • Login flow and user dashboard navigation
    • Core API calls (search, content retrieval)
    • Critical commerce or form submissions
  • Use relevant tags (e.g., premerge) for search/query filtering by the CI pipeline.
  • Configure assertions to confirm critical elements:
    • Content correctness
    • HTTP status codes
    • Redirect targets
    • Response time SLAs
  • Validate tests in Datadog’s UI with multiple runs before pipeline integration.

[Screenshot: synthetic tests configured in the Datadog dashboard]

2. Azure DevOps Pipeline Configuration

The Azure DevOps YAML pipeline is set up to invoke Datadog CI, run all tests matching our tag criteria, and fail the pipeline if any test fails.

Key Pipeline Steps

  • Install Datadog CI binary: Downloads and installs the CLI in the build agent.
  • Run Synthetic Tests: Uses the environment variables and search tags to pick synthetic tests (e.g., all with type: browser and tag: premerge) and runs them directly (see the example below).
  • JUnit Reporting & Artifacts: The CLI output is saved, and a JUnit-formatted result file is generated for Azure DevOps’ Tests UI. All test outputs are attached as build artifacts.
  • Conditional Fast-forward Merge: The pipeline proceeds to a gated merge to release/production only if all synthetics pass.
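
As a rough illustration, the core test-run step boils down to a couple of CLI calls (command and flag names per the datadog-ci CLI; verify against the current Datadog docs), with both keys mapped in from Azure DevOps secret variables:

npm install -g @datadog/datadog-ci
export DATADOG_API_KEY=***   # injected from secret variables
export DATADOG_APP_KEY=***
datadog-ci synthetics run-tests --search 'type:browser tag:premerge' --jUnitReport results.xml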

How Results and Flow Work

When All Tests Pass

  • The pipeline completes the Premerge_Datadog_Synthetics stage successfully.
  • Test summaries (JUnit) and CLI outputs are attached to the pipeline run.
  • Approval-gated merge to the Release branch is unblocked; approvers can verify test results before promotion.

Build artifacts include full logs for further review.

[Screenshot: pipeline run with all synthetic tests passing]

When Any Test Fails

  • If any synthetic (browser/API) test fails, the CLI exits with a non-zero exit code.
  • The JUnit summary will contain failure info and a link to the log details.
  • The pipeline stage fails (Premerge_Datadog_Synthetics), halting the fast-forward merge.
  • Approvers can review the failure in test results and attached artifacts within Azure DevOps.

Only successful resolution and green reruns allow code promotion.

[Screenshot: pipeline run halted by a failed synthetic test]

Best Practices for Datadog Synthetic Monitoring

  • Run tests in parallel to reduce wait times.
  • Use separate synthetic tests per microservice or major Sitecore area to isolate failures.
  • Monitor test trends in Datadog to detect gradual performance regression over time.
  • Limit sensitive data in synthetic flows by avoiding the storage of actual credentials.
  • Schedule periodic synthetic runs outside CI/CD to catch environment fluctuations unrelated to code changes.

Security Considerations

  • Store Datadog keys as secret variables in Azure DevOps.
  • Restrict permission for synthetic management to trusted CI/CD admins.
  • Avoid embedding credentials or sensitive payloads in test scripts.

Conclusion

By integrating Datadog Synthetic Monitoring directly into the Azure DevOps CI/CD pipeline, Sitecore teams gain a safety net that blocks faulty builds before they hit production, while keeping a detailed audit trail. Combined with careful test design, secure key management, and continuous expansion of coverage, this approach becomes a cornerstone of proactive web application quality assurance.

 

Vibium – The Next Evolution of Test Automation: AI, Intent, and a Global Device Network
https://blogs.perficient.com/2025/10/14/vibium-the-next-evolution-of-test-automation-ai-intent-and-a-global-device-network/
Tue, 14 Oct 2025

“What Selenium did for browsers, Vibium aims to do for the age of AI.”
— Jason Huggins (Creator of Selenium and Appium)

A New Chapter in Test Automation

Selenium, Appium, and Playwright have revolutionized QA testing automation over the last two decades. But even today, automation engineers spend hours maintaining brittle scripts, chasing broken locators, and rewriting tests with every UI change. Enter Vibium — an ambitious, AI-powered testing platform designed to eliminate these pain points and redefine how software testing works.

What Exactly Is Vibium?

Vibium (also called Vibium AI) is a next-generation test automation system developed by Jason Huggins, the creator of Selenium and Appium.
Instead of relying on locator-based scripts, Vibium lets you describe your test cases in plain English — and the AI interprets, executes, and adapts them intelligently.

In simple words:
You tell Vibium what to test, not how to do it. This intent-driven approach ensures your test scripts remain intact even when the code changes — the system learns and adapts dynamically.

[Screenshot: vibium.ai]

vibium.ai is “a demo site with demo data.”

Key Features and Innovations

1. Intent-Based Test Automation

No need for XPath, CSS selectors, or brittle locators.
Vibium understands natural language commands like:

“Click the Login button and verify the Dashboard loads.”

It uses AI models to interpret your intent and execute corresponding UI actions.

2. Self-Healing Tests

Every automation engineer has faced the nightmare of a minor UI change breaking hundreds of tests. Vibium’s self-healing engine automatically reroutes test flows when elements move, rename, or restyle — keeping your suite resilient and adaptive.

3. Decentralized Testing Network

Here’s where Vibium breaks all conventions. It envisions a global network of devices — phones, browsers, and desktops — that act as distributed test nodes. Test jobs can be broadcast to this network, leveraging real-world devices, networks, and locations for realistic end-to-end testing. This approach could render “device farms” obsolete by replacing them with a community-powered grid.

4. AI-Powered Stability

Instead of simple pass/fail results, Vibium’s AI analyzes context — screenshots, logs, error patterns — to understand why a test failed, not just that it did. This could lead to test systems that learn from failures and self-optimize over time.

The Architecture (Vision Stage)

While the platform is still in development, the envisioned architecture includes:

  • AI Core: Parses natural language into executable steps.
  • WebDriver BiDi Layer: Enables modern, bidirectional browser communication.
  • Distributed Node System: Executes test commands across devices globally.
  • Reputation & Security Layer: Ensures sandboxing, trust, and data isolation.

It’s a hybrid of AI reasoning and peer-to-peer testing infrastructure — something never done before in QA tooling.

Current Status (as of 2025)

  • Stage: Early proof-of-concept / conceptual development.
  • Website: vibium.ai (demo only).
  • Creator: Jason Huggins (Selenium/Appium creator).
  • Public Release: Not yet available (no GitHub repo or SDKs).
  • PyPI Stub: A placeholder vibium==0.0.1 may exist, but is non-functional.

That means while Vibium isn’t ready for production, its vision has already ignited massive excitement in the testing community — much like Selenium did before its first stable release.

Why Vibium Matters

Vibium represents more than a new testing tool — it’s a paradigm shift.
If it succeeds, we could see:

  • Tests written by product managers and analysts, not just automation engineers.
  • Test suites maintained automatically by AI.
  • Crowdsourced, global device grids replacing centralized labs.
  • Testing that mirrors how users actually behave, not just how developers expect them to.

For enterprises struggling with flaky, expensive automation, Vibium’s promise could be transformative.

The Caveats

Every bold vision carries challenges. Vibium’s biggest ones include:

  • Security & Privacy – Safely executing tests on third-party devices.
  • Result Trustworthiness – Ensuring authenticity and isolation of test runs.
  • Infrastructure Complexity – Orchestrating global tests in real time.
  • Transparency of AI Decisions – Understanding why the AI took specific steps.
  • Adoption Curve – Convincing large enterprises to shift from proven Selenium/Playwright stacks.

The Future: “Selenium for the AI Age”

Vibium is still emerging — but the concept alone has already started a new conversation in the testing world. We may soon see a world where automation frameworks don’t just run tests — they understand them.

For now, staying close to Vibium’s development (through vibium.ai and Jason Huggins’s updates) could give early adopters a serious advantage when it officially launches.

Final Thoughts

The evolution of test automation mirrors the evolution of software itself — from manual scripts to declarative frameworks to AI-driven intent systems. Vibium might bridge today’s rigid test automation and tomorrow’s autonomous testing intelligence. If it fulfills even half of its vision, it could be as revolutionary as Selenium was 15 years ago.

 

Cypress Automation: Tag-Based Parallel Execution with Custom Configuration
https://blogs.perficient.com/2025/07/30/cypress-automation-tag-based-parallel-execution-with-custom-configuration/
Wed, 30 Jul 2025

Custom Parallel Execution Using Tags:

To enhance the performance of Cypress tests, running them in parallel is a proven approach. While Cypress offers a built-in parallel execution feature, a more flexible and powerful method is tag-based parallel execution using a custom configuration. This method lets you fine-tune which tests execute concurrently, based on tags in your .feature files.

 


What Is Tag-Based Parallel Execution?

Tag-based execution filters test scenarios using custom tags (e.g., @login, @checkout) defined in your .feature files. Instead of running all tests or manually selecting files, this method dynamically identifies and runs only the tagged scenarios. It’s particularly useful for CI/CD pipelines and large test suites.

Key Components:

This approach uses several core Node.js modules:

  • child_process – to execute terminal commands.
  • glob – to search for .feature files based on patterns.
  • fs – to read file content for tag matching.
  • crypto – to generate unique hashes for port management.

Execution Strategy:

1. Set Tags and Config via Environment Variables:

You define which tests to run by setting environment variables:

  • TAGS='@db' → runs only tests with the @db tag
  • THREADS=2 → number of parallel threads
  • SPEC='cypress/support/feature/*.feature' → file location pattern
    These variables help dynamically control test selection and concurrency.

2. Collect All Matching Feature Files:

Using the glob package, the script searches for all .feature files that match the provided pattern (e.g., *.feature). This gives a complete list of potential test files before filtering by tag.

3. Filter Feature Files by Tag:

Each .feature file is opened and scanned using fs.readFileSync(). If it contains the specified tag (like @db or @smoke), it gets added to the list for execution. This ensures only relevant tests run.

4. Assign Unique Ports for Each File:

To avoid port conflicts during parallel execution, the script uses crypto.createHash('md5') on the file path + tag combination. A slice of the hash becomes the unique port number. This is crucial when running UI-based tests in parallel.

5. Run Cypress Tests in Parallel:

The script spawns multiple Cypress instances using child_process.exec or spawn, one per tagged test file. Each command is built with its own spec file and unique port, and all are run simultaneously using Promises.

6. Error Handling and Logging:

If no files match the tag, the script logs a warning and exits cleanly. If any Cypress test fails, the corresponding error is caught, logged, and the overall process exits early to prevent false positives in CI pipelines.

7. Trigger the Execution from Terminal:

The full command is triggered from the terminal via a script in package.json:
"cy:parallel-tag-exec": "cross-env TAGS='@db' THREADS=2 SPEC='cypress/support/feature/*.feature' ts-node parallel-tag-config.ts"

8. Run the below command:

npm run cy:parallel-tag-exec

This executes the full workflow with just one command.


Complete TypeScript Code

The script handles the entire logic: matching tags, assigning ports, and running Cypress commands in parallel.
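
Below is a condensed sketch of that logic. It assumes the glob package is installed; helper names such as portFor are illustrative, and the THREADS-based worker pooling is omitted for brevity:

// parallel-tag-config.ts — condensed sketch of the tag-based parallel runner
import { exec } from 'child_process';
import { createHash } from 'crypto';
import * as fs from 'fs';
import { sync as globSync } from 'glob'; // glob v7/v8 sync API
import { promisify } from 'util';

const execAsync = promisify(exec);

const TAG = process.env.TAGS ?? '@smoke';
const SPEC = process.env.SPEC ?? 'cypress/support/feature/*.feature';

// Steps 2-3: collect matching feature files, keep only those containing the tag.
const taggedFiles = globSync(SPEC).filter((file) =>
  fs.readFileSync(file, 'utf-8').includes(TAG)
);

if (taggedFiles.length === 0) {
  console.warn(`No feature files found for tag ${TAG}, nothing to run.`);
  process.exit(0);
}

// Step 4: derive a unique port per file from an md5 hash of path + tag,
// so parallel Cypress instances never collide.
function portFor(file: string): number {
  const hash = createHash('md5').update(file + TAG).digest('hex');
  return 4000 + (parseInt(hash.slice(0, 4), 16) % 1000);
}

// Steps 5-6: spawn one Cypress run per tagged file, wait for all, fail fast.
async function runAll(): Promise<void> {
  const runs = taggedFiles.map((file) => {
    const cmd = `npx cypress run --spec "${file}" --port ${portFor(file)}`;
    console.log(`Starting: ${cmd}`);
    return execAsync(cmd);
  });
  try {
    await Promise.all(runs);
    console.log('All tagged runs passed.');
  } catch (err) {
    console.error('A Cypress run failed:', err);
    process.exit(1); // prevent false positives in CI
  }
}

runAll();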



Benefits of This Approach:

  • Greatly reduces overall test runtime.
  • Offers flexibility with test selection using tags.
  • Avoids port conflict issues through dynamic assignment.
  • Works well with CI pipelines and large-scale projects.

 

Final Thoughts:

This custom configuration allows you to harness the full power of parallel testing with Cypress in a tag-specific, efficient manner. It’s scalable, highly customizable, and especially suitable for complex projects where targeted test runs are required.

For more information, you can refer to this website: https://testgrid.io/blog/cypress-parallel-testing/

 

Similar Approach for Cypress Testing:

  1. Cypress Grep Plugin – https://github.com/cypress-io/cypress-grep

  2. Nx Dev Tools (Monorepo) – https://nx.dev/technologies/test-tools/cypress/api


 

Unlocking the Power of Data Enrichment in Collibra for Effective Data Governance
https://blogs.perficient.com/2025/07/28/unlocking-the-power-of-data-enrichment-in-collibra-a-key-element-in-effective-data-governance/
Mon, 28 Jul 2025

In today’s data-driven landscape, organizations are not just collecting data; they are striving to understand, trust, and maximize its value. One of the critical capabilities that helps achieve this goal is data enrichment, especially when implemented through enterprise-grade governance tools like Collibra.

In this blog, we will explore how Collibra enables data enrichment, why it is essential for effective data governance, and how organizations can leverage it to drive better decision-making.

What is Data Enrichment in Collibra?

Data enrichment enhances datasets within the Collibra data governance tool by adding business context, metadata, and governance attributes, and by correcting inaccuracies, helping users understand the data’s meaning, usage, quality, and lineage.

Rather than simply documenting tables and columns, data enrichment enables organizations to transform technical metadata into meaningful, actionable insights. This enriched context empowers business and technical users alike to trust the data they are working with and use it confidently for analysis, reporting, and compliance.

How does Data Enrichment work in Collibra?

[Diagram: the data enrichment workflow in Collibra]

How We Use Data Enrichment in the Banking Domain

In today’s digital landscape, banks manage various data formats (such as CSV, JSON, XML, and tables) with vast volumes of data originating from internal and external sources like file systems, cloud platforms, and databases. Collibra automatically catalogs these data assets and generates metadata.

But simply cataloging data isn’t enough. The next step is data enrichment, where we link technical metadata with business-glossary terms to give metadata meaningful business context and ensure consistent description and understanding across the organization. Business terms clarify what each data element represents from a business perspective, making it accessible not just to IT teams but also to business users.

In addition, each data asset is tagged with data classification labels such as PCI (Payment Card Information), PII (Personally Identifiable Information), and confidential. This classification plays a critical role in data security, compliance, and risk management, especially in a regulated industry like banking.

To further enhance the trustworthiness of data, Collibra integrates data profiling capabilities. This process analyzes the actual content of datasets to assess their structure and quality. Based on profiling results, we link data to data‑quality rules that monitor completeness, accuracy, and conformity. These rules help enforce high-quality standards and ensure that the data aligns with both internal expectations and external regulatory requirements.

An essential feature in Collibra is data lineage, which provides a visual representation of the data flow from its source to its destination. This visibility helps stakeholders understand how data transforms and moves through various systems, which is essential for impact analysis, audits, and regulatory reporting.

Finally, the enriched metadata undergoes a structured workflow-driven review process. This involves all relevant stakeholders, including data owners, application owners, and technical managers. The workflow ensures that we not only produce accurate and complete metadata but also review and approve it before publishing or using it for decision-making.

Example: Enriching the customer data table

  • Database: Vertica Datalake
  • Table: Customer_Details
  • Column: Customer_MailID
  • Business Term: Customer Mail Identification
  • Classification: PII (Personally Identifiable Information)
  • Quality rule: Customer_MailID must contain no null values (completeness).
  • Linked Policy: GDPR policy for the EU region
  • Lineage: Salesforce → ETL pipeline → Vertica

Data enrichment in Collibra is a cornerstone of a mature Data Governance Framework; it helps transform raw technical metadata into a living knowledge asset, fueling trust, compliance, and business value. By investing time in enriching your data assets, you are not just cataloging them; you are empowering your organization to make smarter, faster, and more compliant data-driven decisions.

Technical Deep Dive: File Structure and Best Practices in Karate DSL
https://blogs.perficient.com/2025/07/16/technical-deep-dive-file-structure-and-best-practices-in-karate-dsl/
Wed, 16 Jul 2025

In the world of API test automation, Karate DSL stands out for its simplicity, power, and ability to handle both HTTP requests and validations in a readable format. At the heart of every Karate test lies the feature file — a neatly structured script that combines Gherkin syntax with Karate’s DSL capabilities.

Whether you’re a beginner writing your first test or an experienced tester looking to optimize your scripts, understanding the structure of a Karate feature file is key to writing clean, maintainable, and scalable tests.

In this post, we’ll walk through the core building blocks of a Karate test script, explore best practices, and share tips to help you avoid common pitfalls.

The Karate framework adopts Cucumber-style Gherkin syntax to support a BDD approach, organizing tests into three core sections: Feature, Background, and Scenario.

Let’s understand the structure of the KARATE Framework

Step 1: Create a new MAVEN Project

  • Choose a Workspace location.
  • Select the Archetype
  • Provide the Group ID and the Artifact ID
  • Click on Finish to complete the setup.

Step 2: The following will be the structure

A Karate test script typically includes:

  1. Feature: to describe a test suite
  2. Background: for shared setup (optional)
  3. Scenario(s): with Given‑When‑Then steps
  4. DSL steps like def, request, match
  5. Use of Scenario Outline + Examples or JS loops for data tests
  6. Runner in Java (JUnit 4/5) to execute feature files

[Image: Karate project structure]

A Karate test script uses the .feature extension, a convention inherited from Cucumber. You can organize your files according to the Java package structure if you prefer.

While Maven guidelines suggest keeping non-Java files separately in the src/test/resources directory and Java files in src/main/java, the creators of the Karate Framework recommend a different approach. They advocate for storing both Java and .feature files together, as it makes it easier to locate related files, rather than following the typical Maven file structure.

1. Feature File (*.feature)

These are your .feature files where you write Karate tests using Gherkin syntax. They serve as the entry point for your API testing, allowing you to describe test behavior in plain English. Feature files contain:

  • Feature: – a high-level description of what’s being tested.

  • Background: (optional) – shared setup that runs before each scenario (e.g., setting base URL, headers).

  • Multiple Scenarios (or Scenario Outline) with steps to test specific API endpoints.

Karate uses Gherkin-style .feature files. These include:

a. Feature:

A high-level description that groups related scenarios.

Feature: User API tests

         Validate creation, retrieval, and deletion of users

b. Background: (optional)

Steps common to all scenarios in the file, like setting the base URL or the auth token.

Background:

       Given url baseUrl
       And header Authorization = 'Bearer xyz'

c. Scenario: / Scenario Outline:

Each Scenario: describes a test case.

Scenario Outline: allows data-driven tests combined with an Examples: table.

Scenario: Create a new user 
          When method post 
          And request { name: 'Alice', email: 'alice@example.com' } 
          Then status 201 
          And match response.id != null


Scenario Outline: Test multiple user creation 
  Given request { name: '<name>', age: <age> }
  When method post
  Then status 201
  Examples:
    | name  | age |
    | Alice | 25  |
    | Bob   | 30  |

2. Gherkin Steps with Karate DSL

Karate supports the standard Given‑When‑Then keywords, but the logic is built into its DSL (no Java step definitions needed). Karate DSL is a powerful, open-source, domain-specific language designed to streamline API testing. Unlike frameworks that require extensive coding, Karate employs a human-readable syntax—built on Gherkin-style keywords like Given, When, Then— so you can express tests in plain English.

Common step types:

  • Given – setup (e.g., url, header, param)
  • When – action (e.g., method get/post)
  • Then / And – assertions (e.g., status, match)
  • * def – define variables or functions
  • * call read(...) – reuse other feature files
  • * print, * eval, JSON manipulation

Example:

Scenario: Search for user 
      * def searchQuery = {term: 'example'}
      Given url baseUrl + '/users' 
      And param q = searchQuery.term 
      When method get 
      Then status 200 
      And match response.users[0].name contains 'example'

 

3. Reusability & Data-Driven

  • Call features/functions: * call read('helpers/common.feature')

  • Inline JS: * def calc = function(x,y){ return x+y }

  • Data-driven: For more complex looping/data nesting, Karate supports * def data = [...] and multiple scenario calls (see the sketch below).
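
For example, passing a JSON array to call runs the target feature once per row; a quick sketch, assuming a create-user.feature exists alongside this file:

* def users = [{ name: 'Alice' }, { name: 'Bob' }]
* def results = call read('create-user.feature') users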


4. Java Runner

Since the Karate Framework uses a Runner file (similar to Cucumber) to execute feature files, much of the structure follows Cucumber standards.

However, unlike Cucumber, Karate doesn’t require step definitions, which provides greater flexibility and simplifies the process. You don’t need to add the additional “glue” code that’s typically required in Cucumber-based setups.

The Runner class is often named TestRunner.java.

import com.intuit.karate.junit4.Karate;
import com.intuit.karate.KarateOptions;
import org.junit.runner.RunWith;

@RunWith(Karate.class)
@KarateOptions(
  features = "classpath:features",
  tags = "~@ignore"
)
public class TestRunner { }
  • @RunWith(Karate.class): tells JUnit to use Karate as the test runner.

  • features = "classpath:features": directs Karate to where the .feature files live.

  • tags = "~@ignore": the ~ operator excludes all scenarios/features annotated with @ignore

By default, Karate auto-skips any scenario tagged with the special @ignore, so you don’t even need ~@ignore unless you’re explicitly overriding tags.

When You Run Karate Runner File

  • Karate scans the features directory in the classpath.

  • It includes all scenarios/features except those tagged @ignore.

  • The test suite runs them using JUnit 4 via @RunWith(Karate.class).

Common Mistakes to Avoid

  • Overusing print instead of assertions

  • Hardcoding values that should be parameterized

  • Mixing multiple test flows in a single scenario

Watchouts & Best Practices

  • Keep scenarios small and focused

  • Use tags like @Smoke, @Regression for grouping

  • Don’t mix @CucumberOptions (deprecated in Karate) with @KarateOptions – stick to Karate’s own annotation only

  • Limit tags and runner configs to the top-level feature files. Tags on “called” features don’t apply to entry-point filtering

  • If you upgrade to JUnit 5 (recommended from Karate 0.9.5 onward), move to @Karate.Test instead of @RunWith, often eliminating @KarateOptions (see the sketch below).
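
A minimal JUnit 5 runner might look like this (class and method names are illustrative):

import com.intuit.karate.junit5.Karate;

class TestRunner {

    @Karate.Test
    Karate testAll() {
        // Runs every .feature found in this class's package and below
        return Karate.run().relativeTo(getClass());
    }
}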

Conclusion

A well-structured Karate feature file not only makes your tests easier to read and debug but also ensures they remain maintainable as your project scales. By following best practices like modularizing test data, using tags effectively, and keeping scenarios focused, you lay the foundation for a robust and reusable API test suite.

As you continue exploring Karate DSL, adopting these practices will help you write cleaner, more efficient scripts that are easy for your team to understand and extend.

Implementing End-to-End Testing Using Playwright within Jenkins CI/CD Pipelines
https://blogs.perficient.com/2025/07/15/implementing-end-to-end-testing-using-playwright-within-jenkins-ci-cd-pipelines/
Tue, 15 Jul 2025

In today’s fast-paced software development world, delivering high-quality web applications quickly and reliably is more critical than ever. Continuous Integration and Continuous Deployment (CI/CD) pipelines streamline the processes of building, testing, and deploying software, enabling teams to deliver updates more quickly and with fewer errors. One crucial piece of this puzzle is automated end-to-end (E2E) testing, which simulates real user interactions to ensure your application works correctly across all supported browsers and devices.

Among the many testing frameworks available, Playwright has rapidly become a favorite for its speed, reliability, and cross-browser capabilities. In this blog, we’ll explore how to seamlessly integrate Playwright E2E tests into a Jenkins CI/CD pipeline, enabling your team to catch bugs early and maintain confidence in every release.

Why Use Playwright for End-to-End Testing?

Playwright is an open-source testing library developed by Microsoft that supports automation across the three major browser engines: Chromium (Google Chrome, Edge), Firefox, and WebKit (Safari). Its unified API lets you write tests once and run them across all browsers, ensuring your app behaves consistently everywhere.

Key advantages include:

  • Cross-browser support without changing your test code.
  • Ability to run tests in headless mode (without a visible UI) for speed or headed mode for debugging.
  • Support for parallel test execution, speeding up large test suites.
  • Advanced features like network request interception, mocking, and screenshot comparisons.
  • Built-in generation of HTML test reports that are easy to share and analyze.

These features make Playwright an excellent choice for modern E2E testing workflows integrated into CI/CD.

Setting Up Playwright in Your Project

To get started, install Playwright and its dependencies using npm:

npm install -D @playwright/test

npx playwright install

Create a simple test file, e.g., example.spec.ts:

import { test, expect } from '@playwright/test';

test('verify homepage title is correct', async ({ page }) => {
  await page.goto('https://example.com');
  await expect(page).toHaveTitle(/Example Domain/);
});

Run the tests locally to ensure everything is working:

npx playwright test

Integrating Playwright Tests into Jenkins Pipelines

To automate testing in Jenkins, you’ll add Playwright test execution to your pipeline configuration. A typical Jenkins pipeline (using a Jenkinsfile) for running Playwright tests looks like this:

pipeline {
    agent any

    stages {
        // Stage 1: Checkout the source code from the SCM repository configured for this job
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        // Stage 2: Install all project dependencies and Playwright browsers
        stage('Install Dependencies') {
            steps {
                // Install npm packages using 'ci' for a clean and reliable install
                sh 'npm ci'
                // Install Playwright browsers and necessary dependencies for running tests
                sh 'npx playwright install --with-deps'
            }
        }

        // Stage 3: Run Playwright tests and generate reports
        stage('Run Playwright Tests') {
            steps {
                // Execute Playwright tests with two reporters: list (console) and html (report generation)
                sh 'npx playwright test --reporter=list,html'
            }
            post {
                always {
                    // Archive all files in the 'playwright-report' folder for later access or download
                    archiveArtifacts artifacts: 'playwright-report/**', allowEmptyArchive: true
                    // Publish the HTML test report to the Jenkins UI for easy viewing
                    publishHTML(target: [
                        reportName: 'Playwright Test Report',
                        reportDir: 'playwright-report',
                        reportFiles: 'index.html',
                        keepAll: true,
                        allowMissing: true
                    ])
                }
            }
        }
    }
}

What does this pipeline do?

  • Checkout: Pulls the latest code from your repository.
  • Install Dependencies: Installs Node.js packages and Playwright browser binaries.
  • Run Tests: Executes your Playwright test suite, generating both console and HTML reports.
  • Publish Reports: Archives the HTML report as a Jenkins artifact and displays it within Jenkins for easy access.

This setup helps your team catch failures early and provides clear, actionable feedback right in your CI dashboard.

Best Practices for Maintaining Speed and Reliability in CI

CI environments can sometimes be less forgiving than local machines, so keep these tips in mind:

  • Avoid fixed delays, such as waitForTimeout(). Instead, wait for specific elements with await page.waitForSelector().
  • Add retry logic or test retries in your Playwright config to reduce flaky test failures (see the config sketch after this list).
  • Disable animations or transitions during tests to improve stability.
  • Execute tests in headless mode to improve speed and resource efficiency. Use headed mode selectively when you need to debug a failing test.
  • Utilize parallel test execution to shorten the overall testing duration.
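
A minimal playwright.config.ts applying these tips might look like this (the values shown are illustrative starting points):

import { defineConfig } from '@playwright/test';

export default defineConfig({
  retries: 2,   // re-run flaky tests up to twice before reporting failure
  workers: 4,   // run tests in parallel to shorten total duration
  use: {
    headless: true, // headless mode for speed in CI
  },
});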

Conclusion

Integrating Playwright end-to-end tests into your Jenkins CI/CD pipeline enables your team to deliver reliable, high-quality web applications quickly and efficiently. Automated cross-browser testing detects bugs before they reach production, enhancing user experience and minimizing costly hotfixes.

With Playwright’s robust features, simple API, and built-in support for CI reporting, setting up effective E2E testing is straightforward. As you grow, explore adding visual regression testing tools like Percy or containerizing your tests with Docker for even more reproducibility.

Keyboard Testing in Accessibility Testing
https://blogs.perficient.com/2025/07/08/keyboard-testing-in-accessibility-testing/
Tue, 08 Jul 2025

Accessibility testing guarantees that software applications can be used effectively by all individuals, including those with disabilities. Many users rely on keyboards instead of a mouse due to mobility challenges or because they use assistive technologies, such as screen readers. Keyboard accessibility testing confirms that all controls and elements on a page are usable without the use of a mouse. It ensures that the tab order is logical, focus indicators are visible, and users can move through the interface smoothly. This type of testing is crucial for developing inclusive applications that meet accessibility standards and offer a seamless experience for all users.

Why Keyboard Testing Matters

Keyboard testing is crucial for verifying that users who cannot use a mouse can still navigate and interact with the application effectively. People with motor disabilities, visual impairments, or even temporary injuries often rely on keyboards or assistive devices to navigate websites and applications. By ensuring that all features can be accessed using only a keyboard, developers enhance usability for everyone. It also helps ensure the product meets accessibility standards and legal requirements. Ultimately, keyboard-friendly design supports a more inclusive digital experience.

Steps for Performing Keyboard Accessibility Testing

  1. Open the application or webpage you want to test.
  2. Start navigating using the Tab key.
  3. Ensure all interactive elements (links, buttons, forms, etc.) are accessible using the Tab key.
  4. Use Shift + Tab to navigate backward.
  5. Press the Enter or Spacebar key to activate buttons and links.
  6. Use the Arrow keys to navigate between options in radio buttons, dropdowns, menus, sliders, and tab panels.
  7. Always maintain a visible keyboard focus and ensure it moves through elements in a clear and logical sequence.
  8. Use screen readers (like NVDA or VoiceOver) to test combined keyboard and screen reader accessibility.
  9. The Home key takes you to the top of the page, while the End key brings you to the bottom instantly.
  10. In checkbox groups, use the Spacebar to check or uncheck options.
  11. Use Page Up and Page Down to scroll or adjust slider values in larger steps (if supported).

Tips for Effective Keyboard Testing

  1. Start at the top and use Tab to navigate — Begin testing from the top of the page and press Tab to move through all interactive elements without skipping any.
  2. Ensure all interactive elements are focusable — Buttons, links, form fields, and checkboxes must be reachable using the keyboard.
  3. Make focus visible and distinct — Every focused item should have a clear and noticeable outline or indicator.
  4. Avoid focus on hidden or inactive content — Focus should never land on closed dialogs, hidden menus, or off-screen items.
  5. Maintain a logical tab order — Focus should move in a natural, visual order (usually left to right, top to bottom).
  6. Avoid unexpected focus jumps — Interacting with elements should not cause focus to jump unpredictably to other page areas.
  7. Test keyboard control of dropdowns, modals, and menus — Ensure these elements can be opened, navigated, and closed using keys such as Enter, Esc, and Arrow keys.
  8. Use the Arrow keys and Spacebar for grouped controls — Navigate radio buttons, sliders, and checkboxes easily with keyboard keys.

Benefits of Keyboard Testing

  1. Keyboard testing ensures that people who cannot use a mouse can still navigate and interact with your application smoothly.
  2. It helps ensure that assistive technologies, such as screen readers, work effectively alongside keyboard navigation for a better user experience.
  3. Conducting keyboard testing supports compliance with important guidelines such as the Web Content Accessibility Guidelines (WCAG).
  4. Even users without disabilities benefit from efficient keyboard navigation, improving overall ease of use and speed.
  5. Makes the application easier and more user-friendly for everyone.

Conclusion

Ensuring that all features can be accessed and operated using a keyboard is essential for creating inclusive digital experiences.

By carefully testing and following best practices, developers and testers can create experiences that are more inclusive, user-friendly, and compliant with accessibility standards. Prioritizing keyboard accessibility helps build a better website for all users.

Elevating API Automation: Exploring Karate as an Alternative to Rest-Assured
https://blogs.perficient.com/2025/06/25/elevating-api-automation-exploring-karate-as-an-alternative-to-rest-assured/
Wed, 25 Jun 2025

Karate, according to Karate Labs, is the only open-source tool that unifies API test automation, mocks, performance testing, and UI automation into a single framework. Using Behavior Driven Development (BDD) syntax enables easy scenario writing, even for non-programmers. With built-in assertions, a reporting mechanism, and parallel test execution, Karate streamlines project development and maintenance by offering compile-free, readable code.

The Karate Framework was created by Peter Thomas in 2017 with the goal of making testing functionalities accessible to everyone. Although it was written in Java, the framework’s files are not restricted to Java, making it more versatile and user-friendly.

Key Features of Karate

  • Utilizes the easy-to-understand Gherkin language.
  • Requires no advanced programming knowledge like Java.
  • Offers built-in parallel testing capabilities, eliminating the need for external tools like Maven or Gradle.
  • Includes a UI for debugging tests.
  • Built on popular Cucumber standards.
  • Simple to create and set up a testing framework.
  • Allows calling one feature file from another.
  • Provides built-in support for Data-Driven Testing, eliminating the need for external frameworks.
  • Features native REST reporting, with optional integration with Cucumber for enhanced UI reports and clarity.
  • Offers in-house support for switching configurations across different testing environments (QA, Stage, Prod, Pre-Prod).
  • Seamlessly integrates with CI/CD pipelines.
  • Supports various types of HTTP calls, including:
    • WebSocket support
    • SOAP requests
    • HTTP
    • Browser cookie handling
    • HTTPS
    • HTML-form data
    • XML requests

Karate vs. Rest-Assured: A Comparison

  • Rest-Assured: A Java-based library designed for testing REST services, Rest-Assured allows you to write test scripts using Java. It excels at handling various request types, enabling the verification of different business logic combinations.
  • Karate Framework: A Cucumber/Gherkin-based tool, Karate is used for testing both SOAP and REST services. It offers an easy-to-understand syntax, making it accessible to both technical and non-technical users.
    Feature                Rest-Assured   Karate
    Plain Text             No             Yes
    Parallel Execution     Partial        Yes
    Data-Driven Testing    Not built-in   Built-in

    Compared with Cucumber:

    Feature                     Cucumber   Karate
    Built-in Step Definitions   No         Yes
    Parallel Execution          No         Yes
    Re-use of feature files     No         Yes

For a more detailed comparison, visit Karate VS RestAssured 

Why Karate?

Karate is worth adopting because it unifies API, UI, mock‑service and performance testing in a single, low‑code framework while remaining fast, readable, and easy for both testers and developers to maintain. Its domain-specific language (DSL) enables even non-Java teams to write plain-text scenarios, while still integrating smoothly with Java and CI/CD pipelines.

1. Unified Feature Set

Karate is the only open-source tool that combines API automation, UI automation (via a Selenium-free engine), service virtualization mocks, and Gatling-powered performance testing in one framework, eliminating the need for multiple tools.

  • 1.1 API + Web in the Same Script

Within a single feature file, you can switch from calling a REST endpoint to driving a browser, enabling true end‑to‑end scenarios without context‑switching or extra libraries.

  • 1.2 Re‑usable Performance Tests

Karate lets you reuse functional API tests as Gatling load tests, saving the effort of rewriting user flows in a separate performance tool.

2. Productivity & Ease of Use

  • 2.1 Low‑Code DSL

Tests are written in a Gherkin‑like syntax that hides Java boilerplate; glue code is unnecessary, lowering the barrier for non‑programmers.

  • 2.2 Less Code, Faster Feedback

Because feature files are plain text and do not need compilation, developers iterate faster than with code‑heavy libraries like Rest Assured.

  • 2.3 Built‑In Assertions & Reports

Karate ships with powerful JSON/XML matchers and generates rich HTML reports out of the box, so teams spend zero time wiring external assertion or reporting frameworks. A quick match example follows.
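
For instance, Karate’s match keyword can validate an entire JSON payload in one line; a small sketch using its fuzzy-matching markers (the response value is hard-coded here for illustration):

* def response = { id: 101, name: 'Alice', tags: ['qa', 'api'] }
* match response == { id: '#number', name: 'Alice', tags: '#[2]' }
* match response.id == '#notnull'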

3. Performance & Scalability

Parallel execution is built‑in; benchmarks show Karate tests often run faster than equivalent Rest Assured suites, which matters when suites grow large.

4. Team Collaboration & Maintainability

  • No Java prerequisite: Business testers can contribute directly, improving coverage and shared understanding.

  • Single truth of test logic: API specs, functional checks, mocks, and load profiles live in one place, reducing duplication and drift.

  • CI/CD ready: Karate runs via JUnit/TestNG and generates standard reports that integrate seamlessly with Jenkins, GitHub Actions, Azure DevOps, and other platforms, eliminating the need for plugins.

5. When Karate Shines

Scenario                               Why Karate Helps
Green‑field API project                Rapid authoring & mocks speed up backend‑frontend co‑development
Microservices with contract testing    DSL assertions keep contracts readable; mocks isolate services
Teams with mixed skill levels          Non‑coders write tests; engineers extend with Java only when needed
Need one tool for API + UI             Avoids juggling Selenium/WebDriver + Rest Assured

6. Potential Limitations

Karate’s power comes from its opinionated DSL—teams needing highly customised Java code or advanced XML handling may prefer lower‑level libraries.

Challenges in the Karate Framework


Karate is great for quick, readable API tests, but it has limitations in IDE support, type safety, UI complexity, and community resources. For more advanced scenarios, you may need to combine it with other tools or use more code-centric frameworks.

Tools Needed for Working with the Karate Framework

Eclipse

Eclipse is an Integrated Development Environment (IDE) widely used for Java programming. It serves as a robust platform for developing and managing Karate projects.

Maven

Maven is a build automation tool primarily used for Java projects. It facilitates setting up a Karate environment and managing project dependencies. To configure Eclipse with Maven, you can follow the instructions for Maven installation here.

To use Karate with Maven, you’ll need to include the following dependencies in your pom.xml.

<dependencies>
    <dependency>
        <groupId>com.intuit.karate</groupId>
        <artifactId>karate-apache</artifactId>
        <version>0.9.6</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.intuit.karate</groupId>
        <artifactId>karate-junit4</artifactId>
        <version>0.9.6</version>
        <scope>test</scope>
    </dependency>
</dependencies>

Note: The latest versions of these dependencies may be available in the Maven repository.

To enable Cucumber reporting, add the following dependency as well:

<dependency>
   <groupId>net.masterthought</groupId>
   <artifactId>cucumber-reporting</artifactId>
   <version>5.3.0</version>
</dependency>

Java Environment Setup on Your System

You’ll need to set up the JDK (Java Development Kit) and JRE (Java Runtime Environment) on your system to start working with Karate Framework scripts.

With this, we are all set to start creating the Karate framework.

Conclusion

This overview highlights the advantages of the Karate Framework for API testing, offering a simpler and more accessible alternative to other tools, such as Rest-Assured, by reducing the need for advanced programming knowledge and offering powerful built-in features.

Adopting Karate can reduce your test tool stack, speed up automation, and make quality a shared responsibility across technical and non‑technical roles. By covering functional, load, and even UI tests with the same syntax, teams gain faster feedback, simpler maintenance, and a smoother path to continuous delivery.

Running Multiple Test Cases from a CSV File Using Playwright and TypeScript
https://blogs.perficient.com/2025/06/11/running-multiple-test-cases-from-a-csv-file-using-playwright-and-typescript/
Wed, 11 Jun 2025

In the world of automated testing, maintaining flexibility and scalability is crucial—especially when it comes to validating functionality across multiple data inputs. Data-driven testing enables QA professionals to decouple test scripts from the input data, allowing the same test flow to run with multiple sets of inputs.

This tutorial explains how to set up data-driven tests in Playwright using TypeScript, where external CSV files provide varying input data for each scenario.

This approach is highly effective for validating login scenarios, form submissions, and any functionality that depends on multiple sets of data.

Why Use Data-Driven Testing?

Data-driven testing provides several benefits:

  • Reduced Code Duplication: Instead of writing multiple similar tests, a single test reads various inputs from an external file.
  • Improved Maintainability: Test data can be modified independently of the test logic.
  • Scalability: Enables easier scaling of testing across a wide range of input combinations.

When working with TypeScript and Playwright, using CSV files for test input is a natural fit for structured test cases, such as form validation, login testing, and e-commerce transactions.

Setting Up the Project

To get started, make sure you have a Playwright and TypeScript project set up. If not, here’s how to initialize it:

npm init -y
npm install -D @playwright/test
npx playwright install

Enable TypeScript support:

npm install -D typescript ts-node

Create a basic tsconfig.json:

{
  "compilerOptions": {
    "target": "ES6",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "dist"
  },
  "include": ["*.ts"]
}

Now, install a CSV parsing library:

npm install csv-parse

Creating the CSV File

We’ll begin by setting up a basic loginData.csv file containing sample login credentials.

username,password
user1,password1
user2,password2
invalidUser,wrongPass

Save it in your project root directory.

Reading CSV Data in TypeScript

Create a helper function, readCSV.ts, to parse CSV files:

import fs from 'fs';
import { parse } from 'csv-parse/sync';

// Parses a CSV file into one record per row, keyed by the header columns.
// The synchronous API is used on purpose: Playwright discovers tests while
// a spec file is loading, so the data must be available at that moment.
export function readCSV(fileLocation: string): Record<string, string>[] {
  return parse(fs.readFileSync(fileLocation), {
    columns: true,          // use the first row as object keys
    skip_empty_lines: true, // ignore blank lines
    trim: true,             // strip whitespace around values
  });
}

Writing the Data-Driven Test in Playwright

Now, let’s write a test that uses this CSV data. Create a file named login.spec.ts:

import { test, expect } from '@playwright/test';
import { readCSV } from './readCSV';

// Read the CSV at module load time, before the loop below runs. Playwright
// collects tests synchronously, so data loaded in a beforeAll hook would
// arrive too late to generate one test per row.
const testData = readCSV('./loginData.csv');

test.describe('Data-Driven Login Tests', () => {
  for (const data of testData) {
    test(`Login attempt with ${data.username}`, async ({ page }) => {
      await page.goto('https://example.com/login');
      await page.fill('#username', data.username);
      await page.fill('#password', data.password);
      await page.click('button[type="submit"]');

      // Adjust this check based on expected outcomes
      if (data.username.startsWith('user')) {
        await expect(page).toHaveURL(/dashboard/);
      } else {
        await expect(page.locator('.error-text')).toBeVisible();
      }
    });
  }
});

Because the CSV is parsed as soon as the spec file loads, the loop generates an individual test case for each row, using that row’s values as input parameters.

Best Practices

  • Separate Test Data from Logic: Always keep your data files separate from test scripts to simplify maintenance.
  • Validate Test Inputs: Ensure CSV files are clean and correctly formatted.
  • Parameterize Conditions: Adjust validation logic based on the nature of the test data (e.g., valid vs. invalid credentials), as sketched below.
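
To make that last practice concrete, one option is to let the CSV declare the expected outcome instead of inferring it from the username. This is a sketch under the assumption that loginData.csv gains a third, hypothetical expected column (values dashboard or error):

import { test, expect } from '@playwright/test';
import { readCSV } from './readCSV';

// Hypothetical loginData.csv with an extra column:
// username,password,expected
// user1,password1,dashboard
// invalidUser,wrongPass,error
for (const data of readCSV('./loginData.csv')) {
  test(`Login attempt with ${data.username}`, async ({ page }) => {
    await page.goto('https://example.com/login');
    await page.fill('#username', data.username);
    await page.fill('#password', data.password);
    await page.click('button[type="submit"]');

    // The CSV row itself declares which assertion to run.
    if (data.expected === 'dashboard') {
      await expect(page).toHaveURL(/dashboard/);
    } else {
      await expect(page.locator('.error-text')).toBeVisible();
    }
  });
}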

Conclusion

Using CSV-based data-driven testing with Playwright and TypeScript offers a powerful way to scale test coverage without bloating your codebase. It’s ideal for login scenarios, input validation, and other repetitive test cases where only the data varies.

By externalizing your data and looping through test scenarios programmatically, you can reduce redundancy, improve maintainability, and support continuous delivery pipelines more effectively.

As your application grows, this strategy will help ensure that your test suite remains efficient, readable, and scalable.

]]>
https://blogs.perficient.com/2025/06/11/running-multiple-test-cases-from-a-csv-file-using-playwright-and-typescript/feed/ 0 382284
Beginner’s Guide to Playwright Testing in Next.js https://blogs.perficient.com/2025/06/09/beginners-guide-to-playwright-testing-in-next-js/ https://blogs.perficient.com/2025/06/09/beginners-guide-to-playwright-testing-in-next-js/#comments Mon, 09 Jun 2025 09:17:04 +0000 https://blogs.perficient.com/?p=382450

Building modern web applications comes with the responsibility of ensuring they perform correctly across different devices, browsers, and user interactions. If you’re developing with Next.js, a powerful React framework, incorporating automated testing from the start can save you from bugs, regressions, and unexpected failures in production.

This guide introduces Playwright, a modern end-to-end testing framework from Microsoft, and demonstrates how to integrate it into a Next.js project. By the end, you’ll have a basic app with route navigation and Playwright tests that verify pages render and behave correctly.

Why Use Playwright with Next.js

Next.js enables fast, scalable React applications with features like server-side rendering (SSR), static site generation (SSG), dynamic routing, and API routes.

Playwright helps you simulate real user actions like clicking, navigating, and filling out forms in a browser environment. It’s:

  • Fast and reliable
  • Headless (runs without a UI) or headed (for debugging)
  • Multi-browser (Chromium, Firefox, WebKit)
  • Great for full end-to-end testing

Together, they create a perfect testing stack.

Prerequisites

Before we start, make sure you have the following:

  • Node.js v16 or above
  • npm or yarn
  • Basic familiarity with JavaScript, Next.js and React

Step 1: Create a New Next.js App

Let’s start with a fresh project. Open your terminal and run:

npx create-next-app@latest nextjs-playwright-demo
cd nextjs-playwright-demo

Once the setup is complete, start your development server:

npm run dev

You should see the default Next.js homepage at http://localhost:3000.

Step 2: Add Pages and Navigation

Let’s add two simple routes: Home and About

Create the About Page

// src/app/about/page.tsx
export default function About() {
    return (
        <h2>About Page</h2>
    )
}


Update the Home Page with a Link

Edit src/app/page.tsx:

import Link from "next/link";

export default function App() {
    return (
        <div>
            <h2>Home Page</h2>
            <Link href="/about">Go to about</Link>
        </div>
    )
}

You now have two routes ready to be tested.

Step 3: Install Playwright

Install the Playwright test library as a dev dependency, then download the browser binaries:

npm install -D @playwright/test
npx playwright install

This gives you the test runner plus Chromium, Firefox, and WebKit.

Step 4: Initialize Playwright

Run:

npm init playwright

This sets up:

  • playwright.config.ts for Playwright configuration
  • a tests/ directory for your test files
  • @playwright/test as a dev dependency in the project

Step 5: Write Playwright Tests for Your App

Create a test file: tests/routes.spec.ts

import { test, expect } from "@playwright/test";

test("Home page render correctly", async ({ page }) => {
    await page.goto("http://localhost:3000/");
    await expect(page.locator("h2")).toHaveText(/Home Page/);
});

test("About page renders correctly", async ({ page }) => {
    await page.goto("http://localhost:3000/about");
    await expect(page.locator("h2")).toHaveText(/About Page/);
});

test("User can navigate from Home to About Page", async ({ page }) => {
    await page.goto("http://localhost:3000/");
    await page.click("text=Go to About");
    await page.waitForURL("/about");
    await expect(page).toHaveURL("/about");
    await expect(page.locator("h2")).toHaveText(/About Page/);
});

What’s Happening?

  • The first test visits the home page and checks heading text
  • The second test goes directly to the About page
  • The third simulates clicking a link to navigate between routes

Step 6: Run Your Tests

To run all tests:

npx playwright test

You should see each test listed in the terminal with its pass/fail status.

Run in headed mode (visible browser) for debugging:

npx playwright test --headed

Launch the interactive test runner:

npx playwright test --ui

Step 7: Trace and Debug Failures

Playwright provides a powerful trace viewer to debug flaky or failed tests.

Enable tracing in playwright.config.ts:

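A minimal sketch of that setting (on-first-retry is one of the documented trace modes; it records a trace the first time a failing test is retried):

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Capture a trace on the first retry of a failed test.
    trace: 'on-first-retry',
  },
});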

Then open the HTML report with:

npx playwright show-report

This opens a UI where you can replay each step of your test.

What You’ve Learned

In this tutorial, you’ve:

  • Created a basic Next.js application
  • Set up routing between pages
  • Installed and configured Playwright
  • Written end-to-end tests to validate route rendering and navigation
  • Learned how to run, debug, and report on your tests

Next Steps

This is just the beginning. Playwright can also test:

  • API endpoints (see the sketch below)
  • Form submissions
  • Dynamic content loading
  • Authentication flows
  • Responsive behavior
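
As a minimal sketch of the first item, Playwright’s built-in request fixture can call an endpoint directly, without opening a page (the /api/hello route is an assumed example; this app does not define it):

import { test, expect } from "@playwright/test";

test("API route responds", async ({ request }) => {
    // Hypothetical route: add src/app/api/hello/route.ts to try this locally.
    const response = await request.get("http://localhost:3000/api/hello");
    expect(response.ok()).toBeTruthy();
});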

Conclusion

Combining Next.js with Playwright gives you confidence in your app’s behavior. It empowers you to automate UI testing in a way that simulates real user interactions. Even for small apps, this testing workflow can save you from major bugs and regressions.

]]>
https://blogs.perficient.com/2025/06/09/beginners-guide-to-playwright-testing-in-next-js/feed/ 1 382450
Capturing API Requests from Postman Using JMeter https://blogs.perficient.com/2025/06/09/capturing-api-requests-from-postman-using-jmeter/ https://blogs.perficient.com/2025/06/09/capturing-api-requests-from-postman-using-jmeter/#respond Mon, 09 Jun 2025 06:21:51 +0000 https://blogs.perficient.com/?p=382378

Performance testing is a crucial phase in the API development lifecycle. If you’re using Postman for API testing and want to transition to load testing using Apache JMeter, you’ll be glad to know that JMeter can record your Postman API calls. This blog will guide you through a step-by-step process of capturing those requests seamlessly.

Why Record Postman Requests in JMeter?

Postman is excellent for testing individual API calls manually, while JMeter excels at simulating concurrent users and measuring performance.

Prerequisites:

  • Apache JMeter
  • Postman
  • JDK 8 or later
  • Internet access

Step-by-Step Guide

  1. Launch JMeter and Create a Test Plan: Open JMeter, create a Thread Group under the Test Plan, and add the HTTP(S) Test Script Recorder under Non-Test Elements.
  2. Add a Recording Controller: Inside your Thread Group, add a Recording Controller; it will collect all the requests captured during the session.
  3. Import the JMeter Certificate into Postman: Go to Postman > Settings > Certificates, toggle on CA Certificates, then locate and add ApacheJMeterTemporaryRootCA.crt (JMeter generates it in its bin directory when the recorder starts).
  4. Configure the Postman Proxy: In Postman, go to Settings > Proxy, enable the custom proxy configuration, and set the proxy server to localhost with port 8888.
  5. Start the JMeter Proxy Recorder: Set the port to 8888 in the recorder and hit Start.
  6. Execute API Requests from Postman: Send any API request from Postman, and it will appear under the Recording Controller in JMeter. Free public REST APIs work well for practice; https://reqres.in/ is used here as an example.
  7. Stop the Recording: Click Stop in JMeter’s recorder once you’ve captured all the desired traffic.
  8. Review the Results: Add a Listener such as View Results Tree under your Thread Group to inspect the captured request and response data.
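
Once you save the recorded Test Plan (recorded-plan.jmx is just an assumed file name), you can replay it without the GUI, which is how load runs are usually executed; -n selects non-GUI mode, -t points at the plan, and -l writes a results log:

jmeter -n -t recorded-plan.jmx -l results.jtl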

Wrapping Up

By recording Postman traffic into JMeter, you’re not only saving time but also setting up your foundation for powerful performance testing. Whether you’re preparing for stress testing or simulating concurrent user traffic, this integration is a valuable step forward.

Happy Testing!!!

]]>
https://blogs.perficient.com/2025/06/09/capturing-api-requests-from-postman-using-jmeter/feed/ 0 382378