Platforms and Technology Articles / Blogs / Perficient | Expert Digital Insights
https://blogs.perficient.com/category/services/platforms-and-technology/

[Webinar] Oracle Project-Driven Supply Chain at Roeslein & Associates
https://blogs.perficient.com/2024/12/20/webinar-oracle-project-driven-supply-chain-at-roeslein-associates/ (published Fri, 20 Dec 2024)

Roeslein & Associates, a global leader in construction and engineering, had complex business processes that could not scale to meet its needs. It wanted to standardize its manufacturing processes to fulfill highly customized demand from its customers.

Roeslein chose Oracle Fusion Cloud SCM, which included Project-Driven Supply Chain for Inventory, Manufacturing, Order Management, Procurement, and Cost Management, and partnered with Perficient to deliver the implementation.

Join us as Project Manager Ben Mitchler discusses the migration to Oracle Cloud. Jeff Davis, Director of Oracle ERP at Perficient, will join Ben to share this PDSC story.

Discussion will include:

  • Challenges with the legacy environment
  • On-premises to cloud migration approach
  • Benefits realized with the global SCM implementation

Save the date for this insightful webinar taking place January 22, 2025! Register now!

An Oracle Partner with 20+ years of experience, we are committed to partnering with our clients to tackle complex business challenges and accelerate transformative growth. We help the world’s largest enterprises and biggest brands succeed. Connect with us to learn more about how we partner with our customers to forge the future.

Elevating Selenium Testing: Comprehensive Reporting with Pytest
https://blogs.perficient.com/2024/12/20/elevating-selenium-testing-comprehensive-reporting-with-pytest/ (published Fri, 20 Dec 2024)

When you’re running Selenium tests in Python, particularly in large projects, the ability to generate detailed and readable reports is essential for understanding test results, tracking failures, and improving overall test management. In this blog post, we’ll explore how to integrate reporting into your Selenium tests using Pytest, one of the most popular testing frameworks for Python.


Why Do We Need Reporting?

Test reports provide more than just a summary of whether tests have passed or failed. They help teams:

  • Track which test cases are failing and why.
  • View test logs and tracebacks for deeper insights.
  • Generate aesthetically pleasing, understandable reports for stakeholders.
  • Integrate with continuous integration (CI) systems to display results in an easily consumable format.

By leveraging tools like pytest-html, allure-pytest, or pytest-splunk, you can generate rich and actionable test reports that greatly enhance the debugging process and provide clear feedback.

Setting Up Pytest for Selenium

Before we dive into reporting, let’s briefly set up Selenium with Pytest for a basic test case.

  1. Install Necessary Packages:

    bash
    pip install selenium pytest pytest-html
    

     

  2. Basic Selenium Test Example:

    from selenium import webdriver
    import pytest
    
    @pytest.fixture
    def driver():
        driver = webdriver.Chrome()  # Selenium 4+ locates ChromeDriver automatically; the executable_path argument was removed
        yield driver
        driver.quit()
    
    def test_open_google(driver):
        driver.get("https://www.google.com")
        assert "Google" in driver.title

This basic test launches Chrome, navigates to Google, and asserts that the word “Google” appears in the page title.

Generating Reports with Pytest-HTML

One of the easiest ways to generate reports in Pytest is by using the pytest-html plugin. It produces clean, easy-to-read HTML reports after test execution. Here’s how to integrate it into your project.

  1. Install pytest-html:

    bash
    pip install pytest-html
    

     

  2. Running Tests with HTML Report: You can generate an HTML report by running Pytest with the --html option:
    
    bash
    pytest --html=report.html
    
  3. Report Output: After running the tests, the report.html file will be created in the current directory. Open the file in your browser to view a well-structured report that shows the status of each test (pass/fail), test duration, and logs. The report includes:
    • Test Summary: A section that lists the total number of tests, passed, failed, or skipped.
    • Test Case Details: Clickable details for each test case including the test name, outcome, duration, and logs for failed tests.
    • Screenshots: If you capture screenshots during your tests, they will be automatically added to the report (more on this later).
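
If you need to share the report as a single, portable file (for example, as a CI artifact), pytest-html also provides a --self-contained-html option that inlines the report's assets into one HTML file:

bash
pytest --html=report.html --self-contained-html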


Adding Screenshots for Test Failures

One of the best practices in Selenium test automation is to capture screenshots when a test fails. This helps you quickly diagnose issues by giving you a visual reference.

Here’s how to capture screenshots in your Selenium tests and add them to your HTML report:

  1. Modify Test Code to Capture Screenshots on Failure:

    import pytest
    from selenium import webdriver
    import os
    from datetime import datetime
    
    @pytest.fixture
    def driver():
        driver = webdriver.Chrome()  # Selenium 4+ locates ChromeDriver automatically; the executable_path argument was removed
        yield driver
        driver.quit()
    
    def take_screenshot(driver):
        timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        screenshot_dir = "screenshots"
        os.makedirs(screenshot_dir, exist_ok=True)
        screenshot_path = f"{screenshot_dir}/screenshot_{timestamp}.png"
        driver.save_screenshot(screenshot_path)
        return screenshot_path
    
    def test_open_google(driver):
        driver.get("https://www.google.com")
        assert "NonexistentPage" in driver.title  # This will fail
    

     

  2. Update the Report to Include Screenshots: To capture a screenshot whenever a test fails, add a pytest hook (it belongs in conftest.py so it applies to every test):
    def pytest_runtest_makereport(item, call):
        # Capture a screenshot only when the test body itself failed (not setup or teardown)
        if call.when == "call" and call.excinfo is not None:
            driver = item.funcargs.get("driver")
            if driver is not None:
                screenshot_path = take_screenshot(driver)
                item.user_properties.append(("screenshot", screenshot_path))

    This records a screenshot path for every failing test. To embed the image directly in the pytest-html report, you can attach it through the plugin's extras mechanism in the same hook, as sketched below, making it much easier to troubleshoot issues.
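
    A sketch of that variant (pytest-html 4.x exposes report.extras, while older 3.x releases used report.extra, so adjust for your version):

    import pytest

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        pytest_html = item.config.pluginmanager.getplugin("html")
        outcome = yield
        report = outcome.get_result()
        if report.when == "call" and report.failed and pytest_html is not None:
            driver = item.funcargs.get("driver")
            if driver is not None:
                # Attach the screenshot as a base64-encoded image so it renders inline in the report
                extras = getattr(report, "extras", [])
                extras.append(pytest_html.extras.image(driver.get_screenshot_as_base64()))
                report.extras = extras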


Using Allure for Advanced Reporting

If you need more advanced reporting features, like interactive reports or integration with CI/CD pipelines, you can use Allure. Allure is a popular framework that provides rich, interactive test reports with a lot of customization options.

  1. Install Allure and Pytest Plugin:

    bash
    
    pip install allure-pytest

     

  2. Running Tests with Allure: To generate an Allure report, run the following commands (the allure serve step requires the Allure command-line tool to be installed):
    bash
    
    pytest --alluredir=allure-results
    
    allure serve allure-results

    This will launch a local server with a beautiful, interactive report.
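
Allure reports become much richer when tests are annotated with the metadata the framework understands. The allure-pytest package provides decorators and helpers for features, severities, steps, and attachments, all of which appear in the interactive report. A small sketch reusing the driver fixture from earlier (the feature, step, and attachment names are illustrative):

python
import allure

@allure.feature("Search")
@allure.severity(allure.severity_level.CRITICAL)
def test_open_google_with_steps(driver):
    with allure.step("Open the Google home page"):
        driver.get("https://www.google.com")
    with allure.step("Verify the page title and attach a screenshot"):
        # The attachment shows up under this step in the Allure report
        allure.attach(
            driver.get_screenshot_as_png(),
            name="homepage",
            attachment_type=allure.attachment_type.PNG,
        )
        assert "Google" in driver.title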


Conclusion

Integrating a robust reporting structure into your Selenium tests with Python (and Pytest) is a great way to gain deeper insights into your test results, improve test reliability, and speed up the debugging process. Whether you use pytest-html for quick and simple reports or opt for Allure for advanced, interactive reports, the ability to visualize your test outcomes is essential for maintaining high-quality automated test suites.

With reporting in place, you can more effectively track test progress, communicate results with your team, and ensure that issues are identified and resolved quickly.

Happy testing!

Building GitLab CI/CD Pipelines with AWS Integration
https://blogs.perficient.com/2024/12/18/building-gitlab-ci-cd-pipelines-with-aws-integration/ (published Wed, 18 Dec 2024)

Building GitLab CI/CD Pipelines with AWS Integration

GitLab CI/CD (Continuous Integration/Continuous Deployment) is a powerful, integrated toolset within GitLab that automates the software development lifecycle (SDLC). It simplifies the process of building, testing, and deploying code, enabling teams to deliver high-quality software faster and more efficiently.

Understanding GitLab CI/CD

Getting started with GitLab CI/CD is simple. Start by creating a GitLab account and setting up a project for your application if you don’t already have one. Then install and configure a GitLab Runner, the tool responsible for executing the tasks defined in your .gitlab-ci.yml file. The runner handles building, testing, and deploying your code, ensuring the pipeline works as intended. This setup streamlines your development process and helps automate workflows efficiently.

What is a GitLab Pipeline?

A pipeline automates the process of building, testing, and deploying applications. CI (Continuous Integration) means regularly merging code changes into a shared repository. CD (Continuous Deployment/Delivery) automates releasing the application to its target environment.


Code: In this step, you commit your local code changes and push them to the remote repository.

CI Pipeline: Once your code changes are committed and merged, you can run the build and test jobs defined in your pipeline. After completing these jobs, the code is ready to be deployed to staging and production environments.

Important Terms in GitLab CI/CD

1. The .gitlab-ci.yml file

A .gitlab-ci.yml file in a GitLab repository is used to define the Continuous Integration/Continuous Deployment (CI/CD) pipeline configuration. This file contains instructions on building, testing, and deploying your project.

2. Gitlab-Runner

In GitLab CI/CD, a “runner” refers to the agent that executes the jobs defined in the .gitlab-ci.yml pipeline configuration. Runners can be either shared or specific to the project.

Here’s how runners work:

  1. Shared Runners: GitLab provides shared runners available to all projects within a GitLab instance. These runners are managed by GitLab administrators and can be used by any project. Shared runners are convenient if we don’t want to set up and manage our own runners.
  2. Specific Runners: We can also set up our own runners that are dedicated to our project. These runners can be deployed on our infrastructure (e.g., on-premises servers, cloud instances) or using a variety of methods like Docker, Kubernetes, shell, or Docker Machine. Specific runners offer more control over the execution environment and can be customized to meet the specific needs of our project.

3. Pipeline:

Pipelines are made up of jobs and stages:

  • Jobs define what you want to do. For example, test code changes, or deploy to a dev environment.
  • Jobs are grouped into stages. Each stage contains at least one job. Common stages include build, test, and deploy.
  • You can run the pipeline either manually or from the pipeline schedule Job.

The first option is a commit-triggered run: whenever you commit or merge changes, the pipeline is triggered directly.

The second option uses rules; for that, you need to create a scheduled job.

 


 

 4. Schedule Job:

We use scheduled jobs to automate pipeline execution. To create a scheduled job, follow these steps:

  1. Navigate to Schedule Settings: Go to Build, select Pipeline Schedules, and click Create New Schedule.
  2. Configure Schedule Details:
    1. Description: Enter a name for the scheduled job.
    2. Cron Timezone: Set the timezone according to your requirements.
    3. Interval Pattern: Define the cron schedule to determine when the pipeline should run. If you   prefer to run it manually by clicking the play button when needed, uncheck the Activate button at the end.
    4. Target Branch: Specify the branch where the cron job will run.
  3. Add Variables: Include any variables mentioned in the rules section of your .gitlab-ci.yml file to ensure the pipeline runs correctly.
    1. Input variable key = SCHEDULE_TASK_NAME
    2. Input variable value = prft-deployment


 


Demo

Prerequisites for GitLab CI/CD 

  • GitLab Account and Project: You need an active GitLab account and a project repository to store your source code and set up CI/CD workflows.
  • Server Environment: You need access to a server environment, such as an AWS EC2 instance, where you will install the GitLab Runner.
  • Version Control: Using a version control system like Git is essential for managing your source code effectively. With Git and a GitLab repository, you can easily track changes, collaborate with your team, and revert to previous versions whenever necessary.

Configure Gitlab-Runner

  • Launch an AWS EC2 instance with any operating system of your choice. Here, I used Ubuntu. Configure the instance with basic settings according to your requirements.
  • SSH into the EC2 instance and follow the steps below to install GitLab Runner on Ubuntu.
  1. sudo apt install -y curl
  2. curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
  3. sudo apt install gitlab-runner

After installing GitLab Runner, proceed to register it. Navigate to GitLab, go to Settings, then CI/CD, and under Runners, click on the three dots to access the registration options.

Copy the registration command shown there; you will run it on the EC2 instance in the next step.

Run the following command on your EC2 instance and provide the necessary details for configuring the runner based on your requirements:

  1. URL: Press enter to keep it as the default.
  2. Token: Use the default token and press enter.
  3. Description: Add a brief description for the runner.
  4. Tags: This is critical; the tag names define your GitLab Runner and are referenced in your .gitlab-ci.yml file.
  5. Notes: Add any additional notes if required.
  6. Executor: Choose shell as the executor.


Check the GitLab Runner's status and confirm it is active using the commands below:

  • gitlab-runner verify
  • gitlab-runner list


Also confirm that the runner appears as active in GitLab:

Navigate to your project in GitLab, go to Settings, then CI/CD, and check under Runners.

 


Configure the .gitlab-ci.yml file (a full example follows the outline below)

  • Stages: Stages that define the sequence in which jobs are executed.
    • build
    • deploy
  • Build-job: This job is executed in the build stage, the first run stage.
    • Stage: build
    • Script:
      • echo "Compiling the code..."
      • echo "Compile complete."
    • Rules:
      • if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
    • Tags:
      • prft-test-runner
  • Deploy-job: This job is executed in the deploy stage.
    • Stage: deploy   #It will only execute when both jobs in the build job & test job (if added) have been successfully completed.
    • script:
      • echo "Deploying application..."
      • echo "Application successfully deployed."
    • Rules:
      • if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
    • Tags:
      • prft-test-runner

Note: If needed, you can add a test job similar to the BUILD and DEPLOY jobs.
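
Putting the outline above together, a .gitlab-ci.yml for this demo could look like the following sketch. The job names, the prft-test-runner tag, and the SCHEDULE_TASK_NAME value mirror the walkthrough above; adjust them to match your own runner and schedule:

yaml
stages:
  - build
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the code..."
    - echo "Compile complete."
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
  tags:
    - prft-test-runner

deploy-job:
  stage: deploy
  script:
    - echo "Deploying application..."
    - echo "Application successfully deployed."
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
  tags:
    - prft-test-runner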

Run Pipeline

Since the Cron job is already configured in the schedule, simply click the Play button to automatically trigger your pipeline.


To check the pipeline status, go to Build and then Pipelines. Once the build job completes successfully, the deploy job starts (if you added a test job, it runs between the two).


Output

We successfully completed BUILD & DEPLOY Jobs.


Conclusion

As we can see, the BUILD & DEPLOY jobs pipeline has successfully passed.

We’ve provided a brief overview of GitLab CI/CD pipelines and a practical demonstration of how its components work together. Hopefully, everything is running smoothly on your end!

 

A New Normal: Developer Productivity with Amazon Q Developer
https://blogs.perficient.com/2024/12/13/a-new-normal-developer-productivity-with-amazon-q-developer/ (published Fri, 13 Dec 2024)

Amazon Q was front and center at AWS re:Invent last week.  Q Developer is emerging as required tooling for development teams focused on custom development, cloud-native services, and the wide range of legacy modernizations, stack conversions and migrations required of engineers.  Q Developer is evolving beyond “just” code generation and is timing its maturity well alongside the rise of agentic workflows with dedicated agents playing specific roles within a process… a familiar metaphor for enterprise developers.

The Promise of Productivity

Amazon Q Developer makes coders more effective by tackling repetitive and time-consuming tasks. Whether it’s writing new code, refactoring legacy systems, or updating dependencies, Q brings automation and intelligence to the daily work experience:

  • Code generation including creation of full classes based off natural language comments
  • Transformation of legacy code into other programming languages
  • AI-fueled analysis of existing codebases
  • Discovery and remediation of dependencies and outdated libraries
  • Automation of unit tests and system documentation
  • Consistency of development standards across teams

Real Impacts Ahead

As these tools quickly evolve, the way in which enterprises, product teams and their delivery partners approach development must now transform along with them.  This reminds me of a favorite analogy, focused on the invention of the spreadsheet:

The story goes that it would take weeks of manual analysis to calculate even minor changes to manufacturing formulas, and providers would compute those projections on paper, and return days or weeks later with the results.  With the rise of the spreadsheet, those calculations were completed nearly instantly – and transformed business in two interesting ways:  First, the immediate availability of new information made curiosity and innovation much more attainable.  And second, those spreadsheet-fueled service providers (and their customers) had to rethink how they were planning, estimating and delivering services considering this revolutionary technology.  (Planet Money Discussion)

This certainly rings a bell with the emergence of GenAI and agentic frameworks and their impacts on software engineering.  The days ahead will see a pivot in how deliverables are estimated, teams are formed, and the roles humans play across coding, testing, code reviews, documentation and project management.  What remains consistent will be the importance of trusted and transparent relationships and a common understanding of expectations around outcomes and value provided by investment in software development.

The Q Experience

Q Developer integrates with multiple IDEs to provide both interactive and asynchronous actions. It works with leading identity providers for authentication and provides an administrative console to manage user access and assess developer usage, productivity metrics and per-user subscription costs.

The sessions and speakers did an excellent job addressing the most common concerns: safety, security, and ownership. Customer code is not used to train models on the Pro Tier; on the Free tier, you must explicitly opt out. Foundation models are updated on a regular basis. And most importantly: you own the generated code, although with that ownership comes the same responsibility for testing and validation as with traditional development outputs.

The Amazon Q Dashboard provides visibility to user activity, metrics on lines of code generated, and even the percentage of Q-generated code accepted by developers, which provides administrators a clear, real-world view of ROI on these intelligent tooling investments.

Lessons Learned

Experts and early adopters at re:Invent shared invaluable lessons for making the most of Amazon Q:

  • Set guardrails and develop an acceptable use policy to clarify expectations for all team members
  • Plan a thorough developer onboarding process to maximize adoption and minimize the unnecessary costs of underutilization
  • Start small and evangelize the benefits unique to your organization
  • Expect developers to become more effective Prompt Engineers over time
  • Expect hidden productivity gains like less context-switching, code research, etc.

The Path Forward

Amazon Q is more than just another developer tool—it’s a gateway to accelerating workflows, reducing repetitive tasks, and focusing talent on higher-value work. By leveraging AI to enhance coding, automate infrastructure, and modernize apps, Q enables product teams to be faster, smarter, and more productive.

As this space continues to evolve, the opportunities to optimize development processes are real – and will have a huge impact from here on out.  The way we plan, execute and measure software engineering is about to change significantly.

Essential Flutter Optimization Techniques for a Smooth UX
https://blogs.perficient.com/2024/12/11/essential-flutter-optimization-for-ux/ (published Wed, 11 Dec 2024)

Flutter’s versatility and powerful UI capabilities have made it a leading choice for building cross-platform mobile apps. However, its resource-intensive nature requires developers to fine-tune performance to create smooth, responsive, and memory-efficient apps. This guide compiles essential optimization techniques to help you craft a stellar Flutter experience.


1. Widget and Layout Optimization

Flutter’s widget-based design is its biggest strength, but it can lead to inefficiencies if not managed properly. Optimize your widgets to reduce unnecessary rebuilding:

  • Minimize Widget Rebuilds: Use const constructors whenever possible to signal Flutter that a widget doesn’t need to be rebuilt.
  • Strategic Stateful Widgets: Wrap only dynamic parts of your UI in StatefulWidget. Avoid wrapping large widget trees to reduce the overhead of setState().
  • Use Keys Wisely: Leverage ValueKey or UniqueKey for widgets in lists or complex layouts. This ensures efficient state management and prevents unnecessary reordering.

2. Simplify Layout Complexity

Keep your widget tree as shallow and efficient as possible:

  • Avoid Deep Widget Trees: Flatten widget hierarchies to prevent excessive layout calculations.
  • Leverage const Widgets: Const widgets don’t require rebuilding, improving memory and rendering efficiency.

3. Image and Asset Optimization

Heavy images can slow down your app. Optimize them to improve performance:

  • Cache Images: Use the cached_network_image package to store network-loaded images locally, minimizing repeated calls.
  • Compress and Resize: Compress images before adding them to your project. Use JPEG for photos and PNG for graphics, or switch to SVG for scalable vector images.
  • Use Image.asset for Local Images: It is faster and more efficient compared to network or file-based alternatives.

4. Asynchronous Programming and Network Optimization

Flutter’s asynchronous nature allows for smooth UI updates, but heavy tasks can still bog down performance:

  • Use FutureBuilder and StreamBuilder: These widgets enable responsive UI by handling asynchronous data without blocking the main thread.
  • Offload Heavy Tasks to Isolates: Use Isolates for CPU-intensive operations like data parsing, keeping the UI thread free.
  • Cache API Responses: Minimize redundant network requests and implement caching for faster data loading.

5. Optimize Rendering with Repaint Boundaries

Rendering optimizations can drastically reduce jank:

  • Utilize RepaintBoundary: Wrap frequently changing parts of the UI to isolate their rendering layers. This prevents the entire screen from being redrawn.
  • Monitor GPU and CPU Usage: Use Flutter’s Performance Overlay and DevTools to detect high frame rendering times and optimize problem areas.

6. Master Memory Management

Efficient memory usage ensures your app runs smoothly, even on low-end devices:

  • Avoid Large Objects in State: Keep memory-heavy objects like large lists or images out of the widget state.
  • Dispose of Controllers: Always dispose of unused controllers (e.g., AnimationController, TextEditingController) to free up resources.
  • Reuse Objects: Reduce memory churn by reusing objects instead of creating new ones in frequently-called methods like build.

7. Streamline Navigation and Route Management

Keep memory consumption in check while navigating between screens:

  • Optimize Routes: Use Navigator.pushReplacement to remove unnecessary routes from the stack.
  • Lazy Loading Screens: Load screen resources dynamically to reduce initial load time and memory usage.

8. Enhance Startup Performance

First impressions matter! Reduce your app’s startup time with these tips:

  • Defer Initialization: Delay non-essential tasks (e.g., analytics, feature checks) until after the home screen is loaded.
  • Streamline Splash Screens: Keep splash screens lightweight and avoid heavy operations in the main() function.
  • Build for Release Mode: Always use the --release flag for production builds to enable full optimizations.

9. Use Flutter DevTools for Profiling

DevTools is your best friend for diagnosing performance bottlenecks:

  • Track Widget Rebuilds: Identify excessive rebuilds and optimize them.
  • Monitor Memory Usage: Spot memory leaks and manage object disposal.
  • Analyze Network Calls: Detect redundant API calls and improve network efficiency.

10. Optimize Third-Party Plugins

Plugins can simplify development but may introduce inefficiencies:

  • Audit Plugins: Use well-maintained plugins with good benchmarks and avoid bloated ones.
  • Prune Dependencies: Regularly review your pubspec.yaml to remove unused dependencies.

11. Target Low-End Devices

Expand your app’s reach by optimizing for devices with limited resources:

  • Reduce Animations: Provide options to disable or simplify animations.
  • Lower Image Quality: Dynamically adjust asset resolutions based on the device’s capabilities.
  • Test on Low-End Devices: Regularly test performance on older hardware to identify potential issues.

12. Optimize State Management

Efficient state management reduces unnecessary widget rebuilds:

  • Choose the Right Solution: Evaluate state management tools like Provider, Bloc, Riverpod, or GetX based on your app’s complexity.
  • Localize State Updates: Keep state changes localized to the smallest possible widget subtree.

13. Shrink App Size

Minimize your app’s footprint for faster downloads and better performance:

  • Enable Code Shrinking: Use Dart’s tree-shaking feature and ProGuard (for Android) to remove unused code.
  • Split APKs or App Bundles: Deliver architecture-specific binaries to reduce download size.
  • Compress Assets: Use tools like TinyPNG or WebP for image compression.

14. Plan for Error Handling

Unoptimized error handling can harm user experience:

  • Provide Fallback UIs: Display lightweight placeholders for loading or error states.
  • Retry Mechanisms: Implement robust retry logic for failed actions, reducing frustration.

15. Code Splitting for Large Apps

Modularize your app to load features dynamically:

  • Lazy Load Modules: Load screens or modules on demand to reduce the initial app load.
  • Dynamic Localization: Load translations dynamically instead of bundling all languages.

Final Thoughts on Flutter Optimization

Flutter’s powerful capabilities make it possible to create stunning, high-performance apps. However, achieving smooth Flutter performance requires careful planning, constant profiling, and iterative optimizations. Following these optimization techniques, you’ll improve your Flutter app’s responsiveness and delight your users with a seamless experience.

Start implementing these strategies today and take your Flutter app to the next level! 🚀

Perficient Named as a Major Player for Worldwide Adobe Experience Cloud Professional Services
https://blogs.perficient.com/2024/12/10/perficient-named-as-a-major-player-for-worldwide-adobe-experience-cloud-professional-services/ (published Tue, 10 Dec 2024)

We’re pleased to announce that Perficient has been named a Major Player in the IDC MarketScape: Worldwide Adobe Experience Cloud Professional Services 2024-2025 Vendor Assessment (Doc #US51741024, December 2024). We believe this recognition is a testament to our commitment to excellence and our dedication to delivering top-notch Adobe services to our clients.

Continue reading to learn more about what the IDC MarketScape is, why Perficient is named a Major Player, and what this designation means to our clients.

Understanding This IDC MarketScape

This IDC MarketScape evaluated Adobe Experience Cloud professional service providers, creating a framework to compare vendors’ capabilities and strategies. Many organizations need help planning and deploying technology, and finding the right vendor is critical.

According to Douglas Hayward, senior research director for CX services and strategies at IDC, “Organizations choosing an Adobe Experience Cloud professional service should look for proof that their vendor has high-quality professionals who have a track record in empowering their clients and delivering the best value for the fairest price.”

This IDC MarketScape study provides a comprehensive vendor assessment of the Adobe Experience Cloud professional services ecosystem. It evaluates both quantitative and qualitative characteristics that contribute to success in this market. The study covers various vendors, assessing them against a rigorous framework that highlights the most influential factors for success in both the short and long term.

Perficient is a Major Player

We believe being named a Major Player in the IDC MarketScape is a significant achievement for Perficient and underscores our Adobe Experience Cloud capabilities, industry and technical acumen, global delivery center network, and commitment to quality customer service. We further believe the study is evidence of our expertise and continued focus on solving our clients’ business challenges.

Hayward said, “In our evaluation of Perficient for the IDC MarketScape: Worldwide Adobe Experience Cloud Professional Services 2024-2025 Vendor Assessment, it was evident that Perficient has global delivery expertise that combines an experience design heritage with strong capabilities in digital experience transformation.”

The IDC MarketScape also says, “Based on conversations with Perficient’s clients, the vendor’s three main strengths are value creation, people quality, and client empowerment.”

Our Commitment to Excellence

At Perficient, we are committed to maintaining and improving our services and solutions. We continuously strive to innovate and enhance our capabilities and offerings to meet the evolving needs of our clients, further empower them, and drive value.

Learn More

You can also read our News Release for more details on this recognition and make sure to follow our Adobe blog for more Adobe platform insights!

All In on AI: Amazon’s High-Performance Cloud Infrastructure and Model Flexibility
https://blogs.perficient.com/2024/12/10/all-in-on-ai-amazons-high-performance-cloud-infrastructure-and-model-flexibility/ (published Tue, 10 Dec 2024)

At AWS re:Invent last week, Amazon made one thing clear: it’s setting the table for the future of AI. With high-performance cloud primitives and the model flexibility of Bedrock, AWS is equipping customers to build intelligent, scalable solutions with connected enterprise data. This isn’t just about technology—it’s about creating an adaptable framework for AI innovation:

Cloud Primitives: Building the Foundations for AI

Generative AI demands robust infrastructure, and Amazon is doubling down on its core infrastructure to meet the scale and complexity of these market needs across foundational components:

  1. Compute:
    • Graviton Processors: AWS-native, ARM-based processors offering high performance with lower energy consumption.
    • Advanced Compute Instances: P6 instances with NVIDIA Blackwell GPUs, delivering up to 2.5x faster GenAI compute speeds.
  2. Storage Solutions:
    • S3 Table Buckets: Optimized for Iceberg tables and Parquet files, supporting scalable and efficient data lake operations critical to intelligent solutions.
  3. Databases at Scale:
    • Amazon Aurora: Multi-region, low-latency relational databases with strong consistency to keep up with massive and complex data demands.
  4. Machine Learning Accelerators:
    • Trainium2: Specialized chip architecture ideal for training and deploying complex models with improved price performance and efficiency.
    • Trainium2 UltraServers: Connected clusters of Trn2 servers with NeuronLink interconnect for massive scale and compute power for training and inference for the world’s largest models – with continued partnership with companies like Anthropic.

 Amazon Bedrock: Flexible AI Model Access

Infrastructure provides the baseline requirements for enterprise AI, setting the table for business outcome-focused innovation. Enter Amazon Bedrock, a platform designed to make AI accessible, flexible, and enterprise-ready. With Bedrock, organizations gain access to a diverse array of foundation models ready for custom tailoring and integration with enterprise data sources (a brief code sketch follows the list below):

  • Model Diversity: Access 100+ top models through the Bedrock Marketplace, guiding model availability and awareness across business use cases.
  • Customizability: Fine-tune models using organizational data, enabling personalized AI solutions.
  • Enterprise Connectivity: Kendra GenAI Index supports ML-based intelligent search across enterprise solutions and unstructured data, with natural language queries across 40+ enterprise sources.
  • Intelligent Routing: Dynamic routing of requests to the most appropriate foundation model to optimize response quality and efficiency.
  • Nova Models: New foundation models offer industry-leading price performance (Micro, Lite, Pro & Premier) along with specialized versions for images (Canvas) and video (Reel).
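
To make the model-flexibility point concrete, here is a minimal sketch of invoking a Bedrock-hosted model from Python using boto3's Converse API. The region and model ID are illustrative assumptions (your account must have access to the chosen model in that region), and swapping models is a one-line change:

python
import boto3

# Bedrock runtime client; region and model access depend on your AWS account setup
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # illustrative; swap in any Bedrock model you have enabled
    messages=[{"role": "user", "content": [{"text": "Summarize our Q4 support tickets."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])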

 Guidance for Effective AI Adoption

As important as technology is, it’s critical to understand that success with AI is much more than deploying the right model. It’s about how your organization approaches its challenges and adapts to implement impactful solutions. I took away a few key points from my conversations and learnings last week:

  1. Start Small, Solve Real Problems: Don’t try to solve everything at once. Focus on specific, lower risk use cases to build early momentum.
  2. Data is King: Your AI is only as smart as the data it’s fed, so “choose its diet wisely”.  Invest in data preparation, as 80% of AI effort is related to data management.
  3. Empower Experimentation: AI innovation and learning thrives when teams can experiment and iterate with decision-making autonomy while focused on business outcomes.
  4. Focus on Outcomes: Work backward from the problem you’re solving, not the specific technology you’re using.  “Fall in love with the problem, not the technology.”
  5. Measure and Adapt: Continuously monitor model accuracy, retrieval-augmented generation (RAG) precision, response times, and user feedback to fine-tune performance.
  6. Invest in People and Culture: AI adoption requires change management. Success lies in building an organizational culture that embraces new processes, tools and workflows.
  7. Build for Trust: Incorporate contextual and toxicity guardrails, monitoring, decision transparency, and governance to ensure your AI systems are ethical and reliable.

Key Takeaways and Lessons Learned

Amazon’s AI strategy reflects the broader industry shift toward flexibility, adaptability, and scale. Here are the top insights I took away from their positioning:

  • Model Flexibility is Essential: Businesses benefit most when they can choose and customize the right model for the job. Centralizing the operational framework, not one specific model, is key to long-term success.
  • AI Must Be Part of Every Solution: From customer service to app modernization to business process automation, AI will be a non-negotiable component of digital transformation.
  • Think Beyond Speed: It’s not just about deploying AI quickly—it’s about integrating it into a holistic solution that delivers real business value.
  • Start with Managed Services: For many organizations, starting with a platform like Bedrock simplifies the journey, providing the right tools and support for scalable adoption.
  • Prepare for Evolution: Most companies will start with one model but eventually move to another as their needs evolve and learning expands. Expect change – and build flexibility into your AI strategy.

The Future of AI with AWS

AWS isn’t just setting the table—it’s planning for an explosion of enterprises ready to embrace AI. By combining high-performance infrastructure, flexible model access through Bedrock, and simplified adoption experiences, Amazon is making its case as the leader in the AI revolution.

For organizations looking to integrate AI, now is the time to act. Start small, focus on real problems, and invest in the tools, people, and culture needed to scale. With cloud infrastructure and native AI platforms, the business possibilities are endless. It’s not just about AI—it’s about reimagining how your business operates in a world where intelligence is the new core of how businesses work.

Managing Dependencies for Test Automation with requirements.txt
https://blogs.perficient.com/2024/12/09/managing-dependencies-for-test-automation-with-requirements-txt/ (published Mon, 09 Dec 2024)

When it comes to test automation in Python, managing your dependencies is essential for ensuring consistent and reliable test execution. One of the most effective ways to handle these dependencies is through the requirements.txt file. In this blog post, we’ll discuss what requirements.txt is, its importance in the context of testing, and how to effectively use it to streamline your test automation workflow.


What is requirements.txt?

requirements.txt is a text file that lists the external libraries and packages your Python project depends on. In the realm of testing, this file becomes crucial for ensuring that the correct versions of testing frameworks, assertion libraries, and other dependencies are installed in your environment.

Basic Structure

A typical requirements.txt file for a testing environment might look like this:

makefile
pytest==7.1.2
pytest-cov>=2.12.0
selenium>=4.1.0
requests==2.26.0

In this example:

  • pytest==7.1.2 specifies the version of the PyTest framework to ensure compatibility with your tests.
  • pytest-cov>=2.12.0 allows for coverage reporting, helping you understand how much of your code is tested.
  • selenium>=4.1.0 is essential for browser automation during testing.
  • requests==2.26.0 can be useful for testing APIs or making HTTP requests during tests.

Why is requirements.txt Important for Testing?

  1. Consistency Across Environments: By specifying exact versions of testing libraries, you can ensure that your tests behave consistently in different environments, whether it’s on your local machine, a CI server, or a colleague’s setup.
  2. Simplified Setup for New Contributors: When new team members join a project, they can quickly set up their environment by installing the dependencies listed in requirements.txt, allowing them to focus on writing and running tests.
  3. Dependency Management: As testing frameworks and libraries evolve, keeping track of versioning becomes essential to avoid breaking changes that could affect your test outcomes.
  4. Facilitating Continuous Integration: In CI/CD pipelines, having a well-defined requirements.txt file ensures that the testing environment is consistently replicated, reducing the risk of errors due to missing or incompatible packages.

How to Create and Use requirements.txt for Testing

1. Creating a requirements.txt File

You can create a requirements.txt file manually, but if you’re using a virtual environment, it’s straightforward to generate it automatically.

  • Set Up a Virtual Environment

    bash
    
    python -m venv venv
    source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  • Install Your Testing Dependencies

    Install the necessary testing packages

    bash
    
    pip install pytest pytest-cov selenium requests
  • Generate requirements.txt

    After installing your dependencies, you can create the requirements.txt file:

bash

pip freeze > requirements.txt

2. Installing Dependencies from requirements.txt

When setting up your test environment, you can easily install all required packages using:

bash

pip install -r requirements.txt

This command reads the requirements.txt file and installs the specified libraries, ensuring your testing setup is ready.

Best Practices for Managing Testing Dependencies with requirements.txt

  1. Pin Versions: Always pin your testing library versions to avoid unexpected behavior due to library updates. This ensures that everyone on the team uses the same versions.
  2. Use Separate Files for Different Environments: If your project has different dependencies for testing, development, and production, consider creating separate requirements files (e.g., requirements-test.txt, requirements-dev.txt, requirements-prod.txt); a small sketch follows this list.
  3. Keep It Updated: Regularly update your requirements.txt file as you add or upgrade testing dependencies. This helps maintain a current and accurate list of required packages.
  4. Document Your Requirements: Add comments to your requirements.txt file to clarify why specific versions or packages are included, especially for critical testing libraries.
    shell
    # Testing framework
    pytest==7.1.2
    
    # Coverage tool
    pytest-cov>=2.12.0
    
    # Browser automation
    selenium>=4.1.0
    
  5. Run Security Checks: Utilize tools like pip-audit or safety to check for vulnerabilities in your testing dependencies. Keeping your environment secure is just as important as functionality.
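
For the separate-files approach above, pip lets one requirements file include another with the -r directive, so environment-specific files can layer on top of a shared base. A small sketch using the file names from this post:

text
# requirements.txt (shared base)
selenium>=4.1.0
requests==2.26.0

# requirements-test.txt (testing extras layered on top of the base)
-r requirements.txt
pytest==7.1.2
pytest-cov>=2.12.0

Setting up the test environment then becomes a single command: pip install -r requirements-test.txt.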

Conclusion

The requirements.txt file is vital for managing dependencies in any Python testing project. By clearly defining your testing framework and its associated libraries, you ensure consistency, simplify the setup process, and facilitate smooth collaboration among team members.

As you develop and refine your test automation practices, remember to maintain a well-organized requirements.txt file. Doing so will enhance your testing workflow and contribute to more robust, reliable software development. Happy testing!

 

CCaaS Migration Best Practices: Tips for moving your customer care platform to the cloud
https://blogs.perficient.com/2024/12/06/ccaas-migration-best-practices-tips-for-moving-your-customer-care-platform-to-the-cloud/ (published Fri, 06 Dec 2024)

Migrating to a cloud-delivered Contact Center as a Service (CCaaS) solution can revolutionize how your organization delivers customer service. However, this transition requires careful planning and execution to avoid disruptions. Assuming you have selected a CCaaS platform that aligns with your organizational needs, the following best practices outline key considerations for a seamless migration.

A successful migration to CCaaS not only enhances operational efficiency and scalability but also ensures a significant improvement in service delivery, directly impacting customer satisfaction and retention. Organizations should consider the risks of not embracing modern cloud-based customer care solutions, which can include diminished customer service capabilities and potential costs due to outdated or inflexible systems. Moreover, organizations that delay this shift risk falling behind competitors who can adapt more quickly to market demands and customer needs. Thus, embarking on a well-planned migration journey is imperative for companies aiming to optimize their customer care operations and secure a competitive advantage in their respective markets.

 

  1. Physical Infrastructure Migration

Understanding your current environment is critical for a successful transition. Start with a thorough site review to document the infrastructure and identify unique user requirements. Engage with call center managers, team leaders, and power users to uncover specific needs and configured features such as whisper settings, omnichannel components, call management, etc.

Factors such as bandwidth and latency are paramount for seamless operations. Evaluate your facility’s connectivity for both on-site and remote users, ensuring it aligns with the CCaaS product requirements. Fortunately, modern CCaaS solutions such as Amazon Connect, Twilio Flex and Five9 supply agent connectivity tools to verify that workers have sufficient resources to provide good customer service over various channels.

Additionally, document call treatments and station-specific configurations like call coverage paths. Legacy components requiring continued functionality should be cataloged to prepare for integration.

 

  2. Change Management Planning

Change management is essential to mitigate risks and maximize adoption. A staged cutover strategy is recommended over a single-event migration, allowing for gradual testing and adjustments.

Develop a robust testing strategy to validate the platform’s performance under real-world conditions. Complement this with an organizational enablement strategy to train users and ensure they are comfortable with the new system. Adoption by your business units and users is one of the most critical factors which will determine the success of your CCaaS migration.

 

  3. Operational Considerations

Operational continuity is vital during migration. Start by understanding the reporting requirements for business managers to ensure no loss of visibility into critical metrics. Additionally, review monitoring processes to maintain visibility into system performance post-migration.

 

  4. Integration Planning

Integrating legacy infrastructure with the new CCaaS platform can present significant challenges. Document existing components, including FXO/FXS interfaces, Workforce Management solutions, FAX systems, wallboards, and specialty dialers. Verify that integrations comply with any regulatory requirements, such as HIPAA or FINRA.

Interactive Voice Response (IVR) systems often require specific integrations with local data sources or enterprise middleware. Assess these integrations to ensure call flows function as intended. For specialized applications, verify that they meet operational needs within the new environment.

 

  5. Fault Tolerance and Disaster Recovery

Testing fault tolerance and disaster recovery capabilities is a critical step in any CCaaS migration. Develop and execute a failsafe testing plan to ensure resilience against both premise-level and carrier-level failures. It is important to align with your IT organization’s standards for recovery time objective (RTO) and business uptime expectations. Disaster recovery plans must reflect these measures and be tested to protect against potential downtime.

 

  6. Scalability and Compliance

CCaaS solutions must scale with your business. Validate scalability by conducting load tests and documenting performance metrics. Compliance is equally important—ensure your migration adheres to industry standards like HIPAA, FedRAMP, or FINRA through thorough compliance testing and documentation.

 

Conclusion

A successful CCaaS migration hinges on meticulous planning, comprehensive testing, and strong change management. By following these best practices, you can minimize risks, ensure operational continuity, and set your organization up for long-term success with its new contact center platform. The result? An enhanced customer experience and a contact center infrastructure that grows with your business.

 

 

Handling Complex Test Scenarios with Selenium and Pytest: Advanced Techniques
https://blogs.perficient.com/2024/12/06/handling-complex-test-scenarios-with-selenium-and-pytest-advanced-techniques/ (published Fri, 06 Dec 2024)

In the world of test automation, Selenium paired with Pytest is a powerful combination. While basic web interactions can be automated easily, complex test scenarios often require advanced techniques. These scenarios may involve dealing with dynamic elements, multiple browser windows, interacting with iFrames, handling AJAX calls, or managing file uploads. In this blog, we will explore some advanced strategies to handle these complex situations, ensuring your tests are robust and reliable.


Handling Dynamic Elements in Selenium

Web applications today are highly dynamic, often relying on JavaScript and AJAX to load or update content. Selenium can struggle with elements that appear or change dynamically on the page. Fortunately, explicit waits in Selenium can help you handle these situations.

Explicit Waits ensure the test waits until a specific condition is met before proceeding. This is crucial when interacting with dynamic elements that load after the page is rendered.

python
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# Wait for an element to be visible
wait = WebDriverWait(driver, 10)
dynamic_element = wait.until(EC.visibility_of_element_located((By.ID, "dynamicElementId")))
dynamic_element.click()

Best Practice: Always use explicit waits for elements that load after page load or change state based on user interaction (e.g., buttons that become clickable after AJAX completion).

Working with iFrames

In modern web applications, iFrames (inline frames) embed content like videos, forms, or ads within a webpage. Selenium interacts with the main document by default, so interacting with elements inside iFrames requires special handling.

You must first switch to the frame to interact with elements inside an iFrame.

python
from selenium.webdriver.common.by import By

# Switch to an iFrame by index, name, or WebElement
driver.switch_to.frame(0)  # Switch to the first iFrame on the page

# Interact with elements inside the iFrame
button = driver.find_element(By.ID, "submit_button")  # find_element_by_* helpers were removed in Selenium 4
button.click()

# Switch back to the main document
driver.switch_to.default_content()

Best Practice: Always switch back to the default content after interacting with elements in an iFrame to avoid issues with subsequent commands.

Handling Multiple Windows or Tabs

You may often need to interact with multiple windows or tabs in web applications. Selenium manages this scenario by using window handles. When a new window or tab is opened, it gets a unique handle that you can use to switch between windows.

python
# Get the current window handle
main_window = driver.current_window_handle

# Perform actions that open a new window or tab
driver.find_element_by_id("open_window_button").click()

# Get all window handles and switch to the new window
all_windows = driver.window_handles
for window in all_windows:
    if window != main_window:
        driver.switch_to.window(window)
        break

# Interact with the new window
driver.find_element_by_id("new_window_element").click()

# Switch back to the main window
driver.switch_to.window(main_window)

Best Practice: Always store the handle of the main window before switching to new windows or tabs to easily return after finishing the tasks in the new window.

Handling AJAX Requests

AJAX (Asynchronous JavaScript and XML) calls are often used to load data dynamically without reloading the entire page. However, this can complicate testing, as Selenium might try to interact with elements before completing the AJAX request.

To handle AJAX calls efficiently, we use explicit waits in conjunction with conditions that ensure the page has fully loaded or the necessary AJAX requests are completed.

python
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait until an AJAX request is complete (e.g., waiting for an element to be visible after AJAX content loads)
wait = WebDriverWait(driver, 10)
wait.until(EC.presence_of_element_located((By.ID, "ajax_loaded_element")))

Best Practice: Use explicit waits for specific AJAX elements to load before interacting with them. Avoid using implicit waits globally, as they can slow down your tests.

File Uploads with Selenium

File uploads are a frequent requirement in many web applications, such as submitting a form with a file attachment. In Selenium, you can interact with file input elements using the send_keys() method to upload a file directly.

python
file_input = driver.find_element(By.ID, "file_upload")
file_input.send_keys("/path/to/file.txt")

Best Practice: Ensure the file path is correct and test in different environments to handle scenarios where file paths may vary.
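
Because the working directory can differ between local runs, CI agents, and operating systems, it is safer to build an absolute path at runtime rather than hard-coding one. A small sketch (the data/file.txt location is an assumed example):

python
from pathlib import Path
from selenium.webdriver.common.by import By

# Resolve the upload file relative to this test module so it works from any working directory
upload_file = Path(__file__).parent / "data" / "file.txt"
driver.find_element(By.ID, "file_upload").send_keys(str(upload_file.resolve()))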

Taking Screenshots for Debugging

During test execution, capturing screenshots is crucial to help debug failures. Selenium allows you to capture screenshots at any point during the test.

python
driver.get_screenshot_as_file('screenshot.png')

Best Practice: Take screenshots after each test step or on failure to capture the application’s state, making it easier to debug issues.

Data-Driven Testing with Pytest

Handling multiple input data sets in tests is essential, especially when you want to run the same test with different sets of inputs. With Pytest, you can use fixtures or parametrize to pass different data sets to the test function.

python
import pytest

@pytest.mark.parametrize("username, password", [("user1", "pass1"), ("user2", "pass2")])
def test_login(driver, username, password):  # driver: the WebDriver fixture from the earlier examples
    driver.get("http://example.com/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login_button").click()
    assert "Welcome" in driver.page_source

Best Practice: Use parametrize to run the same test with different data inputs. It ensures your tests are comprehensive and reduces the need for repetitive code.
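
For larger data sets, readable test IDs make report output much easier to scan. A small variation using pytest.param is sketched below; the IDs are purely illustrative:

python
import pytest

@pytest.mark.parametrize(
    "username, password",
    [
        pytest.param("user1", "pass1", id="standard-user"),
        pytest.param("user2", "pass2", id="second-user"),
    ],
)
def test_login_with_ids(username, password):
    ...  # same body as test_login above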

Conclusion

Handling complex test scenarios requires a combination of the right tools and techniques. Selenium and Pytest offer powerful mechanisms to tackle dynamic elements, iFrames, multiple windows, AJAX requests, and more. You can create more robust, reliable, and efficient test automation solutions by mastering these advanced techniques.

When working with dynamic and complex applications, always ensure that you are leveraging Selenium’s full capabilities—like waits, window handles, and file uploads—while using Pytest to manage test data, structure tests, and handle failures efficiently. This will help you automate tests effectively and maintain quality across complex applications.

 

]]>
https://blogs.perficient.com/2024/12/06/handling-complex-test-scenarios-with-selenium-and-pytest-advanced-techniques/feed/ 0 372988
Streamlining Test Automation with Page Object Model (POM) in PyTest and Selenium https://blogs.perficient.com/2024/12/05/streamlining-test-automation-with-page-object-model-pom-in-pytest-and-selenium/ https://blogs.perficient.com/2024/12/05/streamlining-test-automation-with-page-object-model-pom-in-pytest-and-selenium/#respond Thu, 05 Dec 2024 10:48:47 +0000 https://blogs.perficient.com/?p=372663

In our previous discussion about utilizing PyTest with Selenium, we laid the groundwork for automated testing in web applications. Now, let’s enhance that foundation by exploring the Page Object Model (POM), a design pattern that improves the organization of your code and boosts your tests’ maintainability.


What is the Page Object Model (POM)?

The Page Object Model is a widely adopted design pattern in test automation that promotes better code organization and enhances test maintenance. In essence, POM represents each page of your application as a class. Each class encapsulates that page’s web elements and functionalities, allowing for a clear separation between the test logic and the UI interaction logic.

Core Benefits of Implementing POM:

  1. Improved Maintainability: By abstracting page-specific logic, any changes to the UI need to be made only in one place—within the page class—rather than in every test case that interacts with that page.
  2. Enhanced Readability: Tests written using POM tend to be more intuitive, as they read more like user actions (e.g., login_page.enter_username("user")), which can help team members understand the intent behind the tests.
  3. Increased Reusability: Common interactions with a page, like clicking buttons or filling out forms, can be reused across different test cases, reducing duplication and making tests less prone to errors.
  4. Scalability: As your application grows, you can add new page classes without affecting existing tests, allowing your automation suite to scale alongside your application.

Setting Up POM with PyTest and Selenium

To illustrate how to implement POM in a PyTest project with Selenium, we’ll follow a structured approach.

  1. Organizing Your Project Structure

    A well-organized project structure can make a significant difference in managing your automation suite. Here’s a recommended directory layout:

bash
/your_project
    /tests
        test_login.py
    /pages
        login_page.py
    /drivers
        chrome_driver.py
    pytest.ini
    requirements.txt
  2. Defining the Page Object

    Let’s create a LoginPage class that represents our application’s login page. This class will encapsulate the page elements and the actions a user can perform on that page.

python

# /pages/login_page.py

from selenium.webdriver.common.by import By

class LoginPage:
    def __init__(self, driver):
        self.driver = driver
        self.username_field = (By.ID, "username")
        self.password_field = (By.ID, "password")
        self.login_button = (By.ID, "login")

    def enter_username(self, username):
        self.driver.find_element(*self.username_field).send_keys(username)

    def enter_password(self, password):
        self.driver.find_element(*self.password_field).send_keys(password)

    def click_login(self):
        self.driver.find_element(*self.login_button).click()

    def is_login_successful(self):
        # Example method to check if login was successful
        return "Dashboard" in self.driver.title
  3. Writing a Test Case

    Now, let’s create a test case that utilizes our LoginPage class. This will be placed in the tests directory.

python

# /tests/test_login.py

import pytest
from selenium import webdriver
from pages.login_page import LoginPage

@pytest.fixture
def setup():
    # Initialize the Chrome WebDriver (Selenium 4.6+ resolves the driver binary automatically;
    # pass a Service object with a custom path only if you need a specific chromedriver)
    driver = webdriver.Chrome()
    driver.get("http://example.com/login")  # Replace with your actual login URL
    yield driver
    driver.quit()

def test_login(setup):
    driver = setup
    login_page = LoginPage(driver)

    # Perform login actions
    login_page.enter_username("testuser")
    login_page.enter_password("testpassword")
    login_page.click_login()

    # Assert login was successful
    assert login_page.is_login_successful(), "Login failed: User was not redirected to the dashboard."
  4. Running Your Test

    To execute your tests, simply run the following command in your terminal:

bash
pytest tests/test_login.py

If everything is configured correctly, you should see the results of your tests displayed in the terminal output.

Best Practices for POM

When implementing the Page Object Model, consider the following best practices to maximize its benefits:

  • Keep Page Classes Focused: Each page class should represent a single web page or component of your application. To maintain clarity, avoid including methods that belong to other pages.
  • Use Meaningful Method Names: Method names should clearly describe their function, such as enter_username or click_login. This enhances the readability of your tests.
  • Incorporate Waits: Use explicit waits to handle dynamic content. This ensures elements are ready for interaction before your tests act on them (see the sketch after this list).
  • Group Related Methods: If your page has many functionalities, consider grouping related methods together to maintain organization and readability.
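
As a minimal sketch of the "Incorporate Waits" point, the LoginPage class from earlier could expose a single wait-aware login helper. The locators are the ones defined above; the 10-second timeout is an arbitrary choice:

python
# /pages/login_page.py (sketch: same locators as before, plus a wait-aware helper)
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class LoginPage:
    def __init__(self, driver):
        self.driver = driver
        self.username_field = (By.ID, "username")
        self.password_field = (By.ID, "password")
        self.login_button = (By.ID, "login")

    def login(self, username, password):
        # Wait for each element to be ready before interacting with it
        wait = WebDriverWait(self.driver, 10)
        wait.until(EC.visibility_of_element_located(self.username_field)).send_keys(username)
        wait.until(EC.visibility_of_element_located(self.password_field)).send_keys(password)
        wait.until(EC.element_to_be_clickable(self.login_button)).click()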

Conclusion

By leveraging the Page Object Model in your test automation strategy with PyTest and Selenium, you can significantly improve your test code’s structure, readability, and maintainability. This design pattern allows you to write cleaner tests while ensuring that your automation suite remains adaptable to application UI changes.

As you continue to build your automation framework, consider exploring additional strategies for enhancing your test automation practices, such as using fixtures, data-driven tests, and parallel test execution.

Happy testing, and see you next time as we dive into advanced strategies for effective test automation!

]]>
https://blogs.perficient.com/2024/12/05/streamlining-test-automation-with-page-object-model-pom-in-pytest-and-selenium/feed/ 0 372663
Enhancing Coveo Search Experience: Enabling Partial Match and Query Syntax Toggles https://blogs.perficient.com/2024/12/04/enhancing-coveo-search-experience-enabling-partial-match-and-query-syntax-toggles/ https://blogs.perficient.com/2024/12/04/enhancing-coveo-search-experience-enabling-partial-match-and-query-syntax-toggles/#respond Wed, 04 Dec 2024 12:21:04 +0000 https://blogs.perficient.com/?p=372661

The Coveo platform provides a powerful, customizable search experience. However, making advanced features like Partial Match and Query Syntax user-friendly can significantly improve users’ interactions with your search interface. This blog focuses on how to implement these toggles and integrate them seamlessly with the Coveo Query Pipeline.

Why Partial Match and Query Syntax?

  • Partial Match: This Coveo query parameter ensures results include documents that match a subset of the user’s query terms. It’s particularly useful for long-tail searches or cases where exact matches are unlikely.
  • Query Syntax: This feature enables advanced search operators (e.g., AND, OR) in the user’s query, giving power users better control over their search results.

Adding checkboxes for these features lets users toggle them on or off dynamically, tailoring the search experience to their preferences.

Implementation Overview

Step 1: Add Toggles to the UI

We introduced two simple checkboxes to toggle Partial Match and Query Syntax in real time. Here’s the HTML structure:

<div class="container">
  <label class="checkbox-label">
    <input type="checkbox" id="partialMatchCheckbox" onclick="togglePartialMatch()" />
    Partial Match
  </label>
  <label class="checkbox-label">
    <input type="checkbox" id="querySyntaxCheckbox" onclick="toggleQuerySyntax()" />
    Query Syntax
  </label>
</div>

.container {
display: flex;
gap: 10px;
}

.checkbox-label {
font-family: Arial, sans-serif;
font-size: 14px;
font-weight: bold;
display: flex;
align-items: center;
gap: 5px;
}

.checkbox-label input[type="checkbox"] {
width: 16px;
height: 16px;
cursor: pointer;
}

Step 2: Implement Toggle Logic

Use JavaScript to update the query context dynamically. The toggles are wired up through the Coveo JavaScript Search Framework's buildingQuery event, which updates the query context in real time based on the checkbox states.

// Root element for Coveo search interface
const root = document.querySelector("#search");

/**
 * Toggles the Partial Match context parameter based on the checkbox state.
 */
function togglePartialMatch() {
    const partialMatchCheckbox = document.querySelector("#partialMatchCheckbox");
    if (!partialMatchCheckbox) {
        console.error("Partial Match Checkbox not found!");
        return;
    }
    const isActive = partialMatchCheckbox.checked;
    console.log(isActive ? "Partial Match Enabled" : "Partial Match Disabled");
    // Listen to the buildingQuery event and add the current toggle state to the query context
    Coveo.$$(root).on("buildingQuery", (e, args) => {
        args.queryBuilder.addContext({
            partialMatch: isActive
        });
    });
}

/**
 * Toggles the Query Syntax context parameter based on the checkbox state.
 */
function toggleQuerySyntax() {
    const querySyntaxCheckbox = document.querySelector("#querySyntaxCheckbox");
    if (!querySyntaxCheckbox) {
        console.error("Query Syntax Checkbox not found!");
        return;
    }
    const isActive = querySyntaxCheckbox.checked;
    console.log(isActive ? "Query Syntax Enabled" : "Query Syntax Disabled");
    Coveo.$$(root).on("buildingQuery", (e, args) => {
        args.queryBuilder.addContext({
            enableQuerySyntax: isActive
        });
    });
}
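
Two practical notes on this sketch. First, because each toggle call registers another buildingQuery listener, the most recently registered one wins (addContext overwrites the same key); if that concerns you, register a single listener during initialization and read the checkbox state inside it. Second, you will usually want to re-run the query right after a toggle so the results refresh immediately; assuming the standard top-level helper from the Coveo JavaScript Search Framework is available, that looks like:

// Re-run the current query so the updated context takes effect immediately
// (assumes the framework's top-level Coveo.executeQuery helper).
Coveo.executeQuery(root);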

Step 3: Configure Query Pipeline Rules

In the Coveo Admin Console, modify your query pipeline to respond to the context values sent from the front end.

Partial Match Configuration

Query Parameter: partialMatch

  • Override Value: true
  • Condition: Context[partialMatch] is true

Additional Overrides:

  • partialMatchKeywords: Set to 3
  • partialMatchThreshold: Set to 35%

Query Syntax Configuration

Query Parameter: enableQuerySyntax

  • Override Value: true
  • Condition: Context[enableQuerySyntax] is true.

Step 4: Detailed Flow for Context Parameters

  1.  User Interaction: When a user checks the Partial Match or Query Syntax toggle, the respective JavaScript function (togglePartialMatch or toggleQuerySyntax) is triggered.
  2.  Frontend Logic: The buildingQuery event dynamically updates the query context with parameters like partialMatch or enableQuerySyntax.
    Example Context Update:
    {
       "q": "example query",
       "context": {
         "partialMatch": true,
         "enableQuerySyntax": false
       }
    }
  3. Backend Processing: The query, along with the updated context, is sent to the Coveo backend. The Query Pipeline evaluates the context parameters and applies corresponding rules, like enabling partialMatch or enableQuerySyntax.
  4.  Dynamic Overrides: Based on the context values, overrides like partialMatchKeywords or partialMatchThreshold are applied dynamically.
  5. Real-Time Results: Updated search results are displayed to the user without requiring a page reload.

Benefits of This Approach

  • Enhanced User Control: Allows users to tailor search results to their needs dynamically.
  • Real-Time Updates: Search settings are updated immediately, with no reloads.
  • Flexible Configuration: Query behavior can be adjusted via the Admin Console without modifying frontend code.
  • Scalable: Easily extendable for other toggles or advanced features.

The Results

With these toggles in place, users can:

  • Effortlessly switch between enabling and disabling Partial Match and Query Syntax.
  • Experience improved search results tailored to their input style.

Partial Match Results: (screenshots omitted)

Query Syntax Results: (screenshots omitted)

Conclusion

Leveraging Coveo’s context and query pipeline capabilities can help you deliver a highly interactive and dynamic search experience. Combining the UI toggles and backend processing empowers users to control their search experience and ensures that results align with their preferences.

Implement this feature today and take your Coveo search interface to the next level!

Useful Links

About custom context | Coveo Machine Learning

PipelineContext | Coveo JavaScript Search Framework – Reference Documentation

Taking advantage of the partial match feature | Coveo Platform

Query syntax | Coveo Platform

]]>
https://blogs.perficient.com/2024/12/04/enhancing-coveo-search-experience-enabling-partial-match-and-query-syntax-toggles/feed/ 0 372661