Innovation and Product Development Articles / Blogs / Perficient – Expert Digital Insights
https://blogs.perficient.com/category/services/innovation-product-development/

Creating a Dual List Box Component in Vue.js
https://blogs.perficient.com/2024/11/21/creating-a-dual-list-box-component-in-vue-js/
Thu, 21 Nov 2024

A dual list box is a commonly used UI component in Vue.js that allows users to select items from one list (the source) and move them to another (the destination). This component is ideal for scenarios where users need to select multiple items from a large list, offering functionality to filter, search, and select items easily.

In this blog, we’ll learn how to build a dual list box component in Vue.js using TypeScript, highlighting features such as searching, filtering, and moving items between the lists.

Features of the Dual List Box

  • Search functionality for refining list items.
  • Move single or multiple items between lists.
  • Filter items based on their status (e.g., active or inactive).
  • Support for keyboard interactions like Ctrl + Click to select multiple items.

Step 1: Setting Up the Project

Before diving into building the component, let’s first set up the project. We’ll begin by installing the necessary dependencies to get started.

Step 1.1: Install Vue and Vue-Property-Decorator

To start, open your terminal and run the following command to install Vue and the vue-property-decorator package, which we’ll use to leverage TypeScript decorators:

Install Basic Components
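Assuming npm and a Vue 2 code base (vue-property-decorator builds on vue-class-component), the install command might look like this:

npm install vue vue-property-decorator vue-class-component --save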

Step 1.2: Create the DualListBox.vue File

Next, navigate to the components directory of your Vue project and create a new file called DualListBox.vue. This file will serve as the foundation for our component. We’ll define the structure for two key sections: one for available items and one for selected items, with buttons in between to move items back and forth.

Explore more on setting up Vue projects in this detailed guide from Vue Mastery.

Step 2: Create the Layout for Available Items

With the project setup complete, let’s move on to creating the layout for the available items section. This part will include the search functionality, item counts, and filtering options.

Step 2.1: Search Box for Available Items

To help users refine the available items, add a search input field. This input will be bound to the searchSource data property, allowing users to filter items dynamically.

Search Box For Available Items

Step 2.2: Display Available Items Count

After adding the search input, it’s useful to show users the number of items matching the search criteria. We can display the count of filtered items alongside the total number of available items.

Display Available Items Count

Step 2.3: Add Checkbox for Active Items

In some cases, users may want to filter items based on their status (e.g., active or inactive). To provide this option, include a checkbox that filters active items, controlled by the activeItems property.

Add Checkbox For Active Items

Step 2.4: List of Available Items

Now, we’ll display the available items in a list. Each item will be clickable, allowing users to select it and move it to the selected items list.

List Of Available Items
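As a rough sketch, the available-items panel from Steps 2.1–2.4 could be put together like this. The class names, the filteredSourceItems computed property, the selectedSourceIds array, and the selectItem handler are illustrative assumptions; the selected-items panel in Step 4 mirrors it with searchDestination and its own filtered list.

<!-- Available items panel (sketch) -->
<div class="list-box">
  <!-- Step 2.1: search box bound to searchSource -->
  <input type="text" v-model="searchSource" placeholder="Search available items" />

  <!-- Step 2.2: filtered vs. total count -->
  <span>{{ filteredSourceItems.length }} of {{ availableItems.length }} items</span>

  <!-- Step 2.3: show only active items when checked -->
  <label><input type="checkbox" v-model="activeItems" /> Active items only</label>

  <!-- Step 2.4: clickable list; Ctrl + Click adds to the current selection -->
  <ul>
    <li
      v-for="item in filteredSourceItems"
      :key="item.id"
      :class="{ selected: selectedSourceIds.includes(item.id) }"
      @click="selectItem(item, $event)"
    >
      {{ item.name }}
    </li>
  </ul>
</div>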

Step 3: Add Buttons to Move Items

Between the available and selected items lists, we need buttons to facilitate the movement of items. These buttons allow users to move items between the two lists, either one at a time or all at once.

Add Buttons To Move Items
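The markup for the middle column might look like the following sketch, wired to the four methods implemented later in Step 7:

<div class="list-box-buttons">
  <button type="button" @click="moveAllToDestination">Move all →</button>
  <button type="button" @click="moveToDestination">Move selected →</button>
  <button type="button" @click="moveToSource">← Move selected</button>
  <button type="button" @click="moveAllToSource">← Move all</button>
</div>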

Step 4: Create the Layout for Selected Items

Next, we’ll create the layout for the selected items section. This part mirrors the available items section, with search, filtering, and item display functionalities.

Step 4.1: Search Box for Selected Items

To allow users to search through the selected items, add a search input bound to searchDestination.

Search Box For Selected Items

Step 4.2: Display Selected Items Count

Just like the available items section, show the count of filtered and total items in the selected list.

Display Selected Items Count

Step 4.3: List of Selected Items

Finally, we’ll display the selected items, allowing users to click on them to remove them from the list.

List Of Selected Items

Step 5: Defining Properties and Data

Now that the template is in place, let’s move to the script section. We’ll define the properties and data needed to manage the lists and interactions. Use @Prop to accept availableItems and selectedItems from the parent component.

Define Properties And Data
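A sketch of the script section is shown below, using vue-property-decorator as set up in Step 1.1; the Item interface and the selection-tracking fields are assumptions made for illustration.

import { Vue, Component, Prop } from 'vue-property-decorator';

// Assumed shape of a list item
export interface Item {
  id: number;
  name: string;
  active: boolean;
}

@Component
export default class DualListBox extends Vue {
  // Lists supplied by the parent component
  @Prop({ type: Array, default: () => [] }) readonly availableItems!: Item[];
  @Prop({ type: Array, default: () => [] }) readonly selectedItems!: Item[];

  // Search terms for the two panels
  searchSource = '';
  searchDestination = '';

  // When true, only active items are shown in the available list
  activeItems = false;

  // Ids highlighted via click / Ctrl + Click in each panel
  selectedSourceIds: number[] = [];
  selectedDestinationIds: number[] = [];
}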

Step 6: Implementing Filtered Lists

To ensure the search and filter functionalities work as intended, create computed properties that return the filtered lists based on search terms and item status.

Get Filtered Lists
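Continuing the sketch above, the filtered lists can be exposed as computed getters on the class (the names match those assumed in the template sketch):

// Computed properties, added inside the DualListBox class
get filteredSourceItems(): Item[] {
  return this.availableItems.filter((item) => {
    const matchesSearch = item.name
      .toLowerCase()
      .includes(this.searchSource.toLowerCase());
    const matchesStatus = !this.activeItems || item.active;
    return matchesSearch && matchesStatus;
  });
}

get filteredDestinationItems(): Item[] {
  return this.selectedItems.filter((item) =>
    item.name.toLowerCase().includes(this.searchDestination.toLowerCase())
  );
}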

Step 7: Implementing the Move Logic

Next, implement the methods that handle moving items between the available and selected lists. These include:

  • moveToDestination for moving selected items from available to selected.
  • moveToSource for moving items back to available.
  • moveAllToDestination to move all available items.
  • moveAllToSource to move all selected items back.

Implementing Move Logic
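One possible implementation of these methods, continuing the same sketch. The selectItem handler for click / Ctrl + Click and the emitChange helper used here are assumptions; emitChange is shown in the next step.

// Methods, added inside the DualListBox class

// Click selects a single item; Ctrl + Click toggles it within the current selection
selectItem(item: Item, event: MouseEvent): void {
  if (event.ctrlKey) {
    const index = this.selectedSourceIds.indexOf(item.id);
    if (index === -1) {
      this.selectedSourceIds.push(item.id);
    } else {
      this.selectedSourceIds.splice(index, 1);
    }
  } else {
    this.selectedSourceIds = [item.id];
  }
}

moveToDestination(): void {
  const moved = this.availableItems.filter((item) =>
    this.selectedSourceIds.includes(item.id)
  );
  const remaining = this.availableItems.filter(
    (item) => !this.selectedSourceIds.includes(item.id)
  );
  this.emitChange(remaining, [...this.selectedItems, ...moved]);
  this.selectedSourceIds = [];
}

moveToSource(): void {
  const moved = this.selectedItems.filter((item) =>
    this.selectedDestinationIds.includes(item.id)
  );
  const remaining = this.selectedItems.filter(
    (item) => !this.selectedDestinationIds.includes(item.id)
  );
  this.emitChange([...this.availableItems, ...moved], remaining);
  this.selectedDestinationIds = [];
}

moveAllToDestination(): void {
  this.emitChange([], [...this.selectedItems, ...this.availableItems]);
}

moveAllToSource(): void {
  this.emitChange([...this.availableItems, ...this.selectedItems], []);
}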

Step 8: Emitting Events to the Parent Component

When items are moved between lists, we’ll emit an event to notify the parent component to update its state. Here’s an example of emitting the onChangeList event:

Emitting Events To Parent Component
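Continuing the sketch, the helper used by the move methods above could wrap that emit. The onChangeList event name comes from this step; the payload shape is an assumption.

// Added inside the DualListBox class
emitChange(available: Item[], selected: Item[]): void {
  // Let the parent own the source of truth for both lists
  this.$emit('onChangeList', { available, selected });
}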

In the parent component, listen for this event to update the selected and available items accordingly.

Listen For Event In Parent
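In the parent this might look roughly like the following sketch (the parent’s own data fields are assumptions):

<DualListBox
  :available-items="availableItems"
  :selected-items="selectedItems"
  @onChangeList="onChangeList"
/>

// Parent component method
onChangeList(payload: { available: Item[]; selected: Item[] }): void {
  this.availableItems = payload.available;
  this.selectedItems = payload.selected;
}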

For further reading on event handling in Vue.js, check out the official Vue documentation on event handling.

Step 9: Styling the Component

Finally, add CSS styles to make the dual list box visually appealing. You can customize the list boxes, buttons, and search inputs to match your design requirements.

Styling The Component

Conclusion

In this tutorial, you’ve built a dual list box component using Vue.js with TypeScript. This component allows users to move items between two lists, search within the lists, and filter items by status. You can extend this component further by adding features like drag-and-drop or keyboard navigation.

Adaptive by Design: The Promise of Generative Interfaces
https://blogs.perficient.com/2024/11/20/adaptive-by-design-the-promise-of-generative-interfaces/
Wed, 20 Nov 2024

Imagine a world where digital interfaces anticipate your needs, understand your preferences, and adapt in real-time to enhance your experience. This is not a futuristic daydream, but the promise of generative interfaces. 

Generative interfaces represent a new paradigm in user experience design, moving beyond static layouts to create highly personalized and adaptive interactions. These interfaces are powered by generative AI technologies that respond to each user’s unique needs, behaviors, and context. The result is a fluid, intuitive experience—a digital environment that transforms, adapts, and grows with its users. 

 

The Evolution of User Interaction 

Traditional digital interfaces have long relied on predefined structures and user journeys. While these methods have served us well, they fall short of delivering truly personalized experiences. 

Generative interfaces, on the other hand, redefine personalization and interactivity at the level of individual interactions. They have the capability to bring data and components directly to users from multiple systems, seamlessly integrating them into a cohesive user experience.  

Users can perform tasks without switching applications as generative systems dynamically render necessary components within the interface, such as images, interactive components, and data visualizations. 

This adaptability means that generative interfaces continually evolve based on users’ inputs, preferences, and behaviors, creating a more connected and fluid experience. Instead of users adapting to software, the software adapts to them, enhancing productivity, reducing friction, and making digital interactions feel natural. 

 

Adaptive Design Principles 

At the heart of generative interfaces lies the principle of adaptability. This adaptability is more than just personalization—it’s about creating an interface that is in constant dialogue with its user. Unlike conventional systems that rely on rules and configurations set during development, generative interfaces leverage machine learning and user data to generate real-time responses. This not only makes the experience dynamic but also inherently human-centered. 

For instance, a digital assistant that supports a knowledge worker doesn’t just answer questions—it understands the context of the work, anticipates upcoming needs, and interacts in a way that aligns with the user’s goals. Generative interfaces are proactive and responsive, driven by the understanding that user needs can change from moment to moment. 

 

Envisioning the Future 

Generative interfaces hold the promise of reshaping not just individual applications, but entire categories of digital interaction—from productivity tools to entertainment platforms. Imagine entertainment systems that automatically adjust content suggestions based on your mood, or collaboration platforms that adapt their layouts and tools depending on whether you are brainstorming or executing a task. 

Realizing this future responsibly requires trust. Data privacy and security considerations must be built into every aspect of the system, from data collection and storage to processing and output generation. Without control of the experience, you risk low-quality outputs that can do more harm than good.

As organizations deploy generative interfaces, robust governance frameworks become essential for managing risks and ensuring responsible AI use.

 

Embracing Generative Interfaces

The shift towards generative interfaces is a step towards making technology more human-centric. As we embrace these adaptive designs, we create an opportunity to redefine our digital experiences, making them more intuitive, enjoyable, and impactful. At Perficient, we are pushing the boundaries of how technology can adapt to users rather than forcing users to adapt to technology. 

The impact of these interfaces goes beyond just convenience; they are capable of crafting meaningful digital experiences that feel personal and fulfilling. As generative AI continues to advance, I envision a future where technology fades into the background, seamlessly blending into our lives and intuitively enhancing everything from work to leisure. 

The Emotional Conclusion: Project Estimating (Part 4)
https://blogs.perficient.com/2024/11/19/the-emotional-conclusion-project-estimating-part-4/
Tue, 19 Nov 2024

The emotional finale is here! Don’t worry, this isn’t about curling up in a ball and crying – we’ve already done that. This final installment of my series on project estimating is all about navigating the emotions of everyone involved and trying to avoid frustration.

If you’ve been following this blog series on project estimations, you’ve probably noticed one key theme: People. Estimating isn’t just a numbers game, it’s full of opinions and feelings. So, let’s dive into how emotions can sway our final estimates!

Partners or Opponents

There are many battle lines drawn when estimating larger projects.

  • Leadership vs Sales Team
  • Sales Team vs Project Team
  • Agency vs Client
  • Agency Bid vs Competing Bids
  • Quality Focus vs Time/Financial Constraints
  • Us vs Ourselves

It’s no wonder we all feel like we’re up against the ropes! Every round brings new threats – real or imagined. How will they react to the estimate? What will they consider an acceptable range?

To make matters worse, everyone involved brings their own personality into the ring. Some see negotiations as a game to be won. Others approach it as a collaboration toward shared goals. And then there’s the age-old playbook: start high, counter low, meet in the middle.

Planning the Attack with Empathy

Feeling pummeled while estimating? Tempted to throw in the towel? Don’t! The best estimates aren’t decided in the ring – they’re made by stepping back, planning, and understanding the perspectives of your partners.

Empathy is your secret weapon. It’s a tactical advantage. When you understand what motivates others, new paths emerge to meet eye to eye.

How do you wield empathy? By asking real questions. Don’t steer people toward what you want; instead, ask open-ended questions that encourage discussion. How does the budgeting process work? How will you report on the project? How do you handle unexpected changes? Even “this-or-that” questions can help: Do you prioritize on-time delivery or staying on budget? Do you want quality, or just to get it done? Let them be heard.

Studying the Playing Field

The good news? Things tend to get smoother over time. If you’ve gone a few rounds with the same group, you already know some of their preferences. But when it’s your first matchup, you’ve got to learn their style quickly.

With answers in hand, it’s time to plan your strategy. But check your ego – this still isn’t about you. It’s about finding the sweet spot where both sides feel like winners. Strategize for the win-win.

If they have a North Star, then determine what it takes to follow that journey. If budget is their weak point, consider ways to creatively trim without losing the project’s intent. If the timeline is the pressure point, consider simplifying and phasing the approach to deliver quick wins sooner.

Becoming a Champion

Victory isn’t about knocking your opponent out. It’s about both sides entering the ring as a team and excited to start. The client needs to feel understood, with clear expectations for the project. The agency needs confidence that it won’t constantly trade quality to remain profitable.

Things happen though. It’s inevitable. As in life, projects are imperfect. Things will go off-script. Partnerships are tested when hit hard by the unexpected. Were there contingency plans? Were changes handled properly?

True champions rise to the occasion. Even if the result is no longer ideal, your empathy and tactical questions can guide everyone toward the next best outcome.

Conclusion

Emotional tension almost always comes from a lack of communication. Expectations were not aligned and people felt unheard.

Everyone is different. Personalities will either mesh or clash, but recognizing this helps you bob and weave with precision.

Focus on partnership. Ask questions that foster understanding, and strategize to find a win for both sides. With empathy, clear communication, and a plan for the unexpected, you’ll look like a champion – even when things don’t go perfectly.

……

If you are looking for a sparring partner who can bring out the best in your team, reach out to your Perficient account manager or use our contact form to begin a conversation.

A Comprehensive Guide to IDMC Metadata Extraction in Table Format
https://blogs.perficient.com/2024/11/16/a-comprehensive-guide-to-idmc-metadata-extraction-in-table-format/
Sun, 17 Nov 2024

Metadata Extraction: IDMC vs. PowerCenter

When we talk about metadata extraction, IDMC (Intelligent Data Management Cloud) can be trickier than PowerCenter. Let’s see why.
In PowerCenter, all metadata is stored in a local database. This setup lets us use SQL queries to get data quickly and easily. It’s simple and efficient.
In contrast, IDMC relies on the IICS Cloud Repository for metadata storage. This means we have to use APIs to get the data we need. While this method works well, it can be more complicated. The data comes back in JSON format. JSON is flexible, but it can be hard to read at first glance.
To make it easier to understand, we convert the JSON data into a table format. We use a tool called jq to help with this. jq allows us to change JSON data into CSV or table formats. This makes the data clearer and easier to analyze.

In this section, we will explore jq. jq is a command-line tool that helps you work with JSON data easily. It lets you parse, filter, and change JSON in a simple and clear way. With jq, you can quickly access specific parts of a JSON file, making it easier to work with large datasets. This tool is particularly useful for developers and data analysts who need to process JSON data from APIs or other sources, as it simplifies complex data structures into manageable formats.

For instance, if the requirement is to gather Succeeded Taskflow details, this involves two main processes. First, you’ll run the IICS APIs to gather the necessary data. Once you have that data, the next step is to execute a jq query to pull out the specific results. Let’s explore two methods in detail.

Extracting Metadata via Postman and jq

Step 1:
To begin, utilize the IICS APIs to extract the necessary data from the cloud repository. After successfully retrieving the data, ensure that you save the file in JSON format, which is ideal for structured data representation.
Step 1 Post Man Output

Step 1 1 Save File As Json

Step 2:
Construct a jq query to extract the specific details from the JSON file. This will allow you to filter and manipulate the data effectively.

Windows:-
(echo Taskflow_Name,Start_Time,End_Time & jq -r ".[] | [.assetName, .startTime, .endTime] | @csv" C:\Users\christon.rameshjason\Documents\Reference_Documents\POC.json) > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:-
jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' /opt/informatica/test/POC.json > /opt/informatica/test/Final_results.csv

Step 3:
To proceed, run the jq query in the Command Prompt or Terminal. Upon successful execution, the results will be saved in CSV file format, providing a structured way to analyze the data.

Step 3 1 Executing Query Cmd

Step 3 2 Csv File Created

Extracting Metadata via Command Prompt and jq

Step 1:
Formulate a cURL command that utilizes IICS APIs to access metadata from the IICS Cloud repository. This command will allow you to access essential information stored in the cloud.

Windows and Linux:-
curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json"

Step 2:
Develop a jq query along with cURL to extract the required details from the JSON file. This query will help you isolate the specific data points necessary for your project.

Windows:
(curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json") | (echo Taskflow_Name,Start_Time,End_Time & jq -r ".[] | [.assetName, .startTime, .endTime] | @csv") > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:
curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json" | jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' > /opt/informatica/test/Final_results.csv

Step 3:
Launch the Command Prompt and run the cURL command that includes the jq query. Upon running the query, the results will be saved in CSV format, which is widely used for data handling and can be easily imported into various applications for analysis.

Step 3 Ver 2 Cmd Prompt

Conclusion
To wrap up, the methods outlined for extracting workflow metadata from IDMC are designed to streamline your workflow, minimizing manual tasks and maximizing productivity. By automating these processes, you can dedicate more energy to strategic analysis rather than tedious data collection. If you need further details about IDMC APIs or jq queries, feel free to drop a comment below!

Reference Links:

IICS Data Integration REST API – Monitoring taskflow status with the status resource API

jq Download Link – Jq_Download

Introduction to State Handling Excellence in React – A Developer’s Perspective
https://blogs.perficient.com/2024/11/14/introduction-to-state-handling-excellence-in-react-a-developers-perspective/
Thu, 14 Nov 2024

Handling an application’s state, or state management, plays an essential role in creating dynamic and responsive user interfaces and effectively executing business logic.

React offers numerous state management methods for storing and updating data, making it a popular web development technology.

Think of it like different ice cream flavors: some people like chocolate (Redux), some like vanilla (Recoil), and some like strawberry (MobX). With React, developers can select the flavor that best suits their needs and projects.

React allows developers the flexibility to select how best to organize their code, whether that means keeping things simple using React Hooks or putting everything in one location with Redux.

It is like having a bunch of ice cream toppings; it makes development flexible and enjoyable! 

 

 


 

 

Choosing an appropriate state management library for your React application involves weighing several factors, such as the size and complexity of your project, your team’s familiarity with the tool, the features you need, and the ecosystem and community around it.

 

 

Let’s discuss some of the popular state management libraries and patterns in the React ecosystem:

State Management

 

Let’s dive deeper into these popular state management libraries.

Redux 

For developers, Redux is like a superhero, especially when they are creating large, complex programs. This amazing tool assists with tracking everything that occurs within your app, much like a superhero watching over the whole city. Redux provides a special store to store all your project data.

The best feature is that no component can just modify things in this store at random; instead, they must notify Redux of what needs to be done by sending a message known as an action. Everything becomes easier to understand and more predictable as the outcome. 

One key advantage of Redux is its seamless integration with React, a popular framework in web development. By combining these two technologies, developers can ensure the smooth functioning of their applications and easily address any issues that may arise.

Think of Redux as a reliable guide for managing your app’s state, simplifying the process, and preventing you from getting overwhelmed by its complexity.  

Here are some Key concepts in Redux. These concepts work together to establish a structured and predictable framework that ensures systematic management of data flow and application state. 

Key Concept in Redux
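As a quick, self-contained illustration of how a store, an action, and a reducer fit together (a minimal counter, not specific to any app):

import { createStore } from 'redux';

// Action creator: returns a plain object describing what happened
const increment = () => ({ type: 'counter/increment' });

// Reducer: a pure function that computes the next state from the previous one
function counterReducer(state = { value: 0 }, action) {
  switch (action.type) {
    case 'counter/increment':
      return { value: state.value + 1 };
    default:
      return state;
  }
}

// Store: holds the state; components dispatch actions instead of mutating it directly
const store = createStore(counterReducer);
store.dispatch(increment());
console.log(store.getState()); // { value: 1 }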

Recoil

Recoil is an experimental state management library developed by Facebook that provides a powerful solution for handling states in React applications, particularly for small-to-medium-sized projects.

It enhances the capabilities of the basic React framework and offers a group of features that can be tough to accomplish with React alone. 

Recoil’s flexibility and adaptability in managing state are key advantages, especially when dealing with components. It allows developers to handle the state in ways that meet a project’s specific needs.

Recoil simplifies managing data in React, making complex state handling effortless. It is like having a handy tool that takes the stress out of managing your application’s data.

For more information about Recoil, you can check out my upcoming blog. 
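As a small sketch of Recoil’s core API (atom, useRecoilState, and RecoilRoot), using an illustrative counter:

import React from 'react';
import { RecoilRoot, atom, useRecoilState } from 'recoil';

// An atom is a unit of shared state identified by a unique key
const countState = atom({ key: 'countState', default: 0 });

function Counter() {
  const [count, setCount] = useRecoilState(countState);
  return <button onClick={() => setCount(count + 1)}>Count: {count}</button>;
}

export default function App() {
  // Components that read Recoil state must be rendered inside a RecoilRoot
  return (
    <RecoilRoot>
      <Counter />
    </RecoilRoot>
  );
}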

React Hooks 

The introduction of React Hooks in React 16.8 (February 2019) revolutionized state management in functional components. Before Hooks, handling state in functional components was limited, and class components were mainly used.

The useState Hook is the most fundamental of these, enabling simple state management within functional components.

Furthermore, React offers additional Hooks for particular use cases and advanced functionality, enhancing the overall development experience and increasing state management flexibility.

Below are a few of the most used React hooks for state management. 

 

most popular
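For instance, a minimal functional component combining useState and useEffect might look like this (the counter itself is just an illustration):

import React, { useState, useEffect } from 'react';

export default function Counter() {
  // useState keeps a piece of local state inside the functional component
  const [count, setCount] = useState(0);

  // useEffect runs after render; here it keeps the document title in sync
  useEffect(() => {
    document.title = `Clicked ${count} times`;
  }, [count]);

  return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
}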

Context API 

A built-in React feature, the Context API makes it easier to share state across a component tree without explicitly passing it down through props. It is frequently used to manage state at higher levels of the tree and to give components further down easy access to read and update that state.

Its key pieces are:

  • Context object: created with createContext, it comes with a Provider and a Consumer for sharing state.
  • Provider: wraps components to supply the context value to everything rendered beneath it.
  • Consumer: reads the context value inside a component (the useContext Hook is the modern equivalent).
  • Default values: the value passed to createContext is used by components rendered outside any Provider.
  • Nested contexts: multiple contexts can be nested, each with its own Provider and Consumer.
  • Dynamic context updates: the context value can be updated dynamically based on the component’s logic or state.
  • Performance optimization: techniques such as React.memo help avoid needless re-renders.
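A small sketch of these pieces working together (the theme example is illustrative):

import React, { createContext, useContext, useState } from 'react';

// Create the context with a default value
const ThemeContext = createContext('light');

function Toolbar() {
  // useContext reads the nearest Provider's value, with no prop drilling
  const theme = useContext(ThemeContext);
  return <div className={theme}>Current theme: {theme}</div>;
}

export default function App() {
  const [theme] = useState('dark');
  return (
    // The Provider makes the value available to every descendant
    <ThemeContext.Provider value={theme}>
      <Toolbar />
    </ThemeContext.Provider>
  );
}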

 

MobX

React developers can use MobX, a useful tool, to manage the dynamic data in their projects. Like a manager operating in the background, it ensures that the UI (user interface) updates automatically when data changes.

It helps keep state management clean and scalable as your app grows.

It simplifies the process of tracking changing data, which is essential in React. MobX lets you specify certain areas of your data (or state) to be tracked and updates the display whenever those parts change.  

MobX is a helpful utility that monitors the data in your app and ensures that everything remains coordinated without requiring you to update the UI (User Interface) manually whenever something changes.

It is a clever approach to dealing with React application state management problems. 
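A minimal sketch of that idea, assuming MobX 6 together with the mobx-react-lite React bindings:

import React from 'react';
import { makeAutoObservable } from 'mobx';
import { observer } from 'mobx-react-lite';

// A plain class becomes observable; its methods become actions
class CounterStore {
  count = 0;
  constructor() {
    makeAutoObservable(this);
  }
  increment() {
    this.count += 1;
  }
}

const counterStore = new CounterStore();

// observer() re-renders the component whenever observed data changes
const Counter = observer(() => (
  <button onClick={() => counterStore.increment()}>Count: {counterStore.count}</button>
));

export default Counter;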

You can refer to the official MobX documentation at https://mobx.js.org/  for further information and advanced usage. 

Zustand 

Zustand is a helpful tool for React, making it easier to manage and control the data in your applications.

It is known for being straightforward and not overly complex, yet despite this, it still has a lot of powerful features for managing how things are stored and updated in React.

It functions similarly to a straightforward and reliable helper in managing the data in your application. It is a lightweight yet powerful alternative to other state management solutions like Redux or MobX. 

Below are a few key points about Zustand worth noting. 

Zustand
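A minimal sketch using Zustand’s create API (v4-style import); the counter store is purely illustrative:

import React from 'react';
import { create } from 'zustand';

// The store is a hook; state and the actions that update it live together
const useCounterStore = create((set) => ({
  count: 0,
  increment: () => set((state) => ({ count: state.count + 1 })),
}));

function Counter() {
  // Select only what the component needs to avoid extra re-renders
  const count = useCounterStore((state) => state.count);
  const increment = useCounterStore((state) => state.increment);
  return <button onClick={increment}>Count: {count}</button>;
}

export default Counter;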

 

Jotai 

For React apps, Jotai is a simplified state management solution that offers an option to more well-known tools like Redux and Context API. Its easy-to-use API and lightweight design make it attractive for developers looking for a simple state management solution. 

Jotai is made to work well with small or big applications and is easy to integrate into projects. It has a cool feature called atoms and derived atoms that make handling state simple and improve the overall development experience. 

With a focus on simplicity, Jotai presents a concise API, unveiling just a few exports from its main bundle. 

Atom’ is used to create fundamental states without assigned values, and ‘useAtom’ helps in managing states within React components. ‘createStore’ is the core of state management in Jotai, acting as a pivotal point.

The ‘Provider’ connects components, enabling the sharing of state details and simplifying communication across various project parts. This approach indeed offers a straightforward method for managing state in React applications. 
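A minimal sketch of atoms in practice (the counter atom is illustrative):

import React from 'react';
import { atom, useAtom } from 'jotai';

// An atom holds one piece of state; components that use it re-render when it changes
const countAtom = atom(0);

function Counter() {
  const [count, setCount] = useAtom(countAtom);
  return <button onClick={() => setCount((c) => c + 1)}>Count: {count}</button>;
}

export default Counter;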

In summary, React provides a variety of state management libraries to cater to different requirements. Whether you value Redux for its established dependability, MobX for its simplicity, or Recoil for its contemporary approach, every project has an option. The crucial aspect is to select the most suitable option for your project and what your team is comfortable with. By thoroughly evaluating the factors mentioned in the blog, you can confidently opt for the most appropriate state management solution for your React applications. 

 

Here are the official websites where you can find more information about the state management libraries and React features we discussed. 

Redux: https://redux.js.org/ 

Recoil: https://recoiljs.org/ 

React Hooks: https://reactjs.org/docs/hooks-intro.html 

Context API: https://reactjs.org/docs/context.html 

MobX: https://mobx.js.org/README.html 

Zustand: https://github.com/pmndrs/zustand 

Jotai: https://github.com/pmndrs/jotai  

Plus, stay tuned for my upcoming blogs to dive deep into the world of state management! 

Understanding Debouncing and Throttling in JavaScript – A Comprehensive Guide
https://blogs.perficient.com/2024/11/12/understanding-debouncing-and-throttling-in-javascript-a-comprehensive-guide/
Tue, 12 Nov 2024

Throttling and debouncing are two essential optimization strategies. In this comprehensive guide, we will delve into the concepts of debouncing and throttling, explore their use cases, and understand how to implement them in JavaScript.

Debouncing Explained

What is Debouncing?

Debouncing is a programming technique used to prevent time-consuming operations from running too frequently, which might cause a web application’s performance to lag. It forces a function to wait a certain amount of time after its last invocation before executing.

When to Use Debouncing?

  1. Input Fields: Debouncing is often applied to input fields to delay the execution of a function until the user has stopped typing. This prevents unnecessary API calls or other resource-intensive operations on every keystroke.
  2. Resize and Scroll Events: When handling events like window resizing or scrolling, debouncing helps avoid performance issues by limiting the frequency of function calls.

Debouncing Implementation

Let’s look at a basic implementation of a debounce function in JavaScript:

const debounce = (func, delay) => {
    let timeoutId;
    return (...args) => {
        clearTimeout(timeoutId);
        timeoutId = setTimeout(() => func.apply(this, args), delay);
    };
};

Example usage:

const debouncedFunction = debounce(() => {
    console.log("Debounced function called");
}, 300);

// Attach debounced function to an event, e.g., button click
document.getElementById("myButton").addEventListener("click", debouncedFunction);

 

Scenario: Search Input in an E-commerce Site

When a user types in a search input box, you want to wait until they stop typing before sending the search query to the server. This prevents sending a request for every keystroke.
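For example, wiring the debounce helper above to a search box might look like this (the fetchSearchResults function and the element id are assumptions):

const searchInput = document.getElementById("search-box");

const handleSearchInput = debounce((event) => {
    // Runs only after the user has stopped typing for 300 ms
    fetchSearchResults(event.target.value);
}, 300);

searchInput.addEventListener("input", handleSearchInput);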

Scenario: Autosaving

When a user writes a document or fills out a form, you might want to autosave their input only after they’ve stopped typing for a certain period.

Throttling Explained

What is Throttling?

Throttling is a technique that ensures a function is only executed at a certain rate, limiting the number of times it can be called over time. Unlike debouncing, throttling guarantees the execution of a function at regular intervals.

When to Use Throttling?

  1. Scrolling: Throttling is beneficial when handling scroll events to control the rate at which a function is executed. This prevents overwhelming the browser with continuous function calls during rapid scrolling.
  2. Mousemove Events: Throttling is useful for mousemove events to prevent excessive calculations when tracking the movement of the mouse.

Throttling Implementation

Here’s a basic implementation of a throttle function in JavaScript:

const throttle = (func, limit) => {
    let throttled = false;
    return (...args) => {
        if (!throttled) {
            func.apply(this, args);
            throttled = true;
            setTimeout(() => {
                throttled = false;
            }, limit);
        }
    };
};

Example usage:

const throttledFunction = throttle(() => {
    console.log("Throttled function called");
}, 300);

// Attach throttled function to an event, e.g., window scroll
window.addEventListener("scroll", throttledFunction);

 

Scenario: Window Resize Event

When a user resizes the browser window, the resize event can fire many times per second. Throttling can ensure the event handler executes at most once every 100 milliseconds, reducing the number of times the layout or other elements need to be recalculated.

Scenario: Scrolling Event

When a user scrolls a webpage, the scroll event can fire many times. Throttling can ensure the event handler executes at most once every 200 milliseconds, which is useful for tasks like lazy loading images or infinite scrolling.

 

Debouncing vs. Throttling

Debouncing and Throttling Differences

Execution Guarantee:

  • Debouncing: Ensures that a function won’t be run until a predetermined amount of time has elapsed since its last call.
  • Throttling: Guarantees a maximum number of executions in a given time frame.

Frequency of Execution:

  • Debouncing: Delays a function’s execution until a predetermined amount of time has passed since the last call.
  • Throttling: Ensures a function is not executed more often than once in a specified amount of time.

Use Cases:

  • Debouncing: Ideal for scenarios where you want to wait for a pause in user input, such as typing or resizing.
  • Throttling: Suitable for scenarios where you want to limit the frequency of function calls, such as scroll events.

Choosing Between Debouncing and Throttling

    • Debouncing is suitable when:
      • You want to wait for a pause in user input before taking an action.
      • You want to delay the execution of a function until after a certain time has passed since the last invocation.
    • Throttling is suitable when:
      • You want to ensure a function is not called more frequently than a specified rate.
      • You want to limit the number of times a function can be executed within a given time frame.

Conclusion

Debouncing and throttling in JavaScript are essential tools in a web developer’s kit for optimizing the performance of functions. By understanding these concepts and knowing when to apply them, you can significantly improve the user experience of your web applications. Whether you need to delay API calls during user input or control the rate of function execution during scroll events, debouncing and throttling provide elegant solutions to common challenges in web development.

Best Practices for Structuring Redux Applications
https://blogs.perficient.com/2024/11/12/best-practices-for-structuring-redux-applications/
Tue, 12 Nov 2024

Redux has become a staple in state management for React applications, providing a predictable state container that makes it easier to manage your application’s state. However, as applications grow in size and complexity, adopting best practices for structuring your Redux code becomes crucial. In this guide, we’ll explore these best practices and demonstrate how to implement them with code examples.

1. Organize Your Code Around Features

One key principle in Redux application structure is organizing code around features. Each feature should have its own set of actions, reducers, and components, which facilitates codebase maintenance and comprehension.

folder Structure
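A feature-based layout might look something like this (file and folder names are illustrative):

src/
    features/
        users/
            usersActions.js
            usersReducer.js
            UsersList.jsx
        posts/
            postsActions.js
            postsReducer.js
            PostsList.jsx
    app/
        store.js
        rootReducer.js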

 

2. Normalize Your State Shape

Consider normalizing your state shape, especially when dealing with relational data. This entails structuring your state to reduce the number of nested structures, which will increase its efficiency and manageability.

//Normalized state shape
{
  entities: {
    users: {
      "1": { id: 1, name: 'Johnny Doe' },
      "2": { id: 2, name: 'Jennifer Doe' }
    },
    posts: {
      "101": { id: 101, userId: 1, title: 'Post 1' },
      "102": { id: 102, userId: 2, title: 'Post 2' }
    }
  },
  result: [101, 102]
}

3. Middleware for Side Effects

Use middleware such as redux-thunk or redux-saga to manage asynchronous activities and side effects. This keeps your reducers pure and moves complex logic outside of them.

// Using redux-thunk
const fetchUser = (userId) => {
    return async (dispatch) => {
        dispatch(fetchUserRequest());
        try {
            const response = await api.fetchUser(userId);
            dispatch(fetchUserSuccess(response.data));
        } catch (error) {
            dispatch(fetchUserFailure(error.message));
        }
    };
};

4. Selectors for Efficient State Access

Functions known as selectors contain the logic needed to retrieve Redux state slices. Use selectors to efficiently access and compute derived state.

// Selectors
export const selectAllUsers = (state) => Object.values(state.entities.users);
export const getUserById = (state, userId) => state.entities.users[userId];

5. Testing Your Redux Code

Write tests for your actions, reducers, and selectors. Tools like Jest and Enzyme can be invaluable for testing Redux code.

// Example Jest Test
test('should handle FETCH_USER_SUCCESS', () => {
    const prevState = { ...initialState };
    const action = { type: FETCH_USER_SUCCESS, payload: mockData };
    const newState = userReducer(prevState, action);
    expect(newState).toEqual({
        ...initialState,
        data: mockData,
        error: null,
        loading: false,
    });
});

 

Conclusion

Adhering to these best practices can ensure a more maintainable and scalable Redux architecture for your React applications. Remember, keeping your code organized, predictable, and efficient is key.

 

A Step-by-Step Guide to Extracting Workflow Details for PC-IDMC Migration Without a PC Database
https://blogs.perficient.com/2024/11/08/a-step-by-step-guide-to-extracting-workflow-details-for-pc-idmc-migration-without-a-pc-database/
Fri, 08 Nov 2024

In the PC-IDMC conversion process, it can be challenging to gather detailed information about workflows. Specifically, we often need to determine:

  • The number of transformations used in each mapping.
  • The number of sessions utilized within the workflow.
  • Whether any parameters or variables are being employed in the mappings.
  • The count of reusable versus non-reusable sessions used in the workflow etc.

To obtain these details, we currently have to open each workflow individually, which is time-consuming. Alternatively, we could use complex queries to extract this information from the PowerCenter metadata in the database tables.

This section focuses on XQuery, a versatile language designed for querying and extracting information from XML files. When workflows are exported from the PowerCenter repository or Workflow Manager, the data is generated in XML format. By employing XQuery, we can effectively retrieve the specific details and data associated with the workflow from this XML file.

Step-by-Step Guide to Extracting Workflow Details Using XQuery

For instance, if the requirement is to retrieve all reusable and non-reusable sessions for a particular workflow or a set of workflows, we can utilize XQuery to extract this data efficiently.

Step 1:
Begin by exporting the workflows from either the PowerCenter Repository Manager or the Workflow Manager. You have the option to export multiple workflows together as one XML file, or you can export a single workflow and save it as an individual XML file.

Step 1 Pc Xml Files

Step 2:
Develop the XQuery based on our specific requirements. In this case, we need to fetch all the reusable and non-reusable sessions from the workflows.

let $header := "Folder_Name,Workflow_Name,Session_Name,Mapping_Name"
let $dt := (let $data := 
    ((for $f in POWERMART/REPOSITORY/FOLDER
    let $fn:= data($f/@NAME)
    return
        for $w in $f/WORKFLOW
        let $wn:= data($w/@NAME)
        return
            for $s in $w/SESSION
            let $sn:= data($s/@NAME)
            let $mn:= data($s/@MAPPINGNAME)
            return
                <Names>
                    {
                        $fn ,
                        "," ,
                        $wn ,
                        "," ,
                        $sn ,
                        "," ,
                        $mn
                    }
                </Names>)
    |           
    (for $f in POWERMART/REPOSITORY/FOLDER
    let $fn:= data($f/@NAME)
    return          
        for $s in $f/SESSION
        let $sn:= data($s/@NAME)
        let $mn:= data($s/@MAPPINGNAME)
        return
            for $w in $f/WORKFLOW
            let $wn:= data($w/@NAME)
            let $wtn:= data($w/TASKINSTANCE/@TASKNAME)
            where $sn = $wtn
            return
                <Names>
                    {
                        $fn ,
                        "," ,
                        $wn ,
                        "," ,
                        $sn ,
                        "," ,
                        $mn
                    }
                </Names>))
       for $test in $data
          return
            replace($test/text()," ",""))
      return
 string-join(($header,$dt), "
")

Step 3:
Select the necessary third-party tools to execute the XQuery or opt for online tools if preferred. For example, you can use BaseX, Altova XMLSpy, and others. In this instance, we are using Basex, which is an open-source tool.

Create a database in Basex to run the XQuery.

Step 3 Create Basex Db

Step 4: Enter the created XQuery into the third-party tool or online tool to run it and retrieve the results.

Step 4 Execute Xquery

Step 5:
Export the results in the necessary file extensions.

Step 5 Export The Output

Conclusion:
These simple techniques allow you to extract workflow details effectively, aiding in the planning and early detection of complex manual conversion workflows. Many queries exist to fetch different kinds of data. If you need more XQueries, just leave a comment below!

The Rise of JSON API: The Key to Seamless API Integration in Modern Technologies
https://blogs.perficient.com/2024/11/06/the-rise-of-json-api-the-key-to-seamless-api-integration-in-modern-technologies/
Wed, 06 Nov 2024

With the growing demand for seamless data exchange between applications, API integration has become a fundamental aspect of modern software development. Whether it’s integrating third-party services, building microservices, or enabling dynamic content for web and mobile applications, APIs are everywhere. Among the many standards, JSON API has emerged as a powerful and widely adopted approach for structuring and communicating data. This blog dives into how JSON API plays a vital role in current technologies and trends around API integration.

What is JSON API?

Start by explaining JSON API as a specification for building APIs in a standardized way. Unlike custom API architectures, JSON API provides rules for how resources are fetched and manipulated over HTTP. It emphasizes reducing the number of requests and optimizing payloads for faster data transfers.

  • Efficiency: JSON API allows bulk updates and fetches data in fewer HTTP requests.
  • Consistency: By following a specification, JSON API ensures the same structure is used across different services.

Why is JSON API Trending in Modern API Integration?

1. Headless CMS Adoption:
The rise of headless CMS (like Drupal, WordPress, Strapi) has led to the decoupling of front-end and back-end systems.
  • Flexibility: JSON API is integral to delivering structured content across various platforms, such as websites, mobile apps, and IoT devices. For instance, Drupal has built-in JSON API support, enabling it to act as a backend for modern front-end frameworks like React and Vue.js.
2. Mobile-First Development:
With mobile app usage dominating, developers need APIs to serve data efficiently to mobile devices.
  • Usage: Mobile apps, which often rely on APIs for data, benefit from JSON API’s ability to send compact, minimal payloads, reducing network usage and improving performance.
3. Microservices and Serverless Architectures:
Modern applications are moving towards distributed systems such as microservices and serverless architectures.
  • Consistency: It simplifies communication between microservices by enforcing consistent payload structures and allowing services to interact efficiently.
4. Real-Time Applications:
APIs are used to power real-time applications like live chat systems, collaborative tools, and data streaming platforms.
  • Usability: The ability to filter, sort, and paginate resources dynamically in JSON API makes it suitable for real-time data fetching in such applications.
5. RESTful API Standardization:
With REST remaining a dominant architecture style for APIs, standardizing how APIs interact has become critical.
  • Easy to build: JSON API follows REST principles and enforces a strong structure, making it easier for teams to build, maintain, and consume APIs.

Implementing JSON API: The Backbone of Trending API Integration

1. Headless CMS Integration
In headless CMS architectures (e.g., Drupal, WordPress), JSON API is vital for separating the backend from the front-end.

Example: Fetching Data from Drupal 10 Using JSON API

// Example of fetching nodes from a Drupal JSON API endpoint
$client = \Drupal::httpClient();
$response = $client->get('http://example.com/jsonapi/node/article');

if ($response->getStatusCode() == 200) {
  $data = json_decode($response->getBody()->getContents());
  foreach ($data->data as $node) {
    echo $node->attributes->title . PHP_EOL;
  }
}

Here, we are making an HTTP GET request to the Drupal JSON API endpoint to fetch article nodes.
Notice how we use Guzzle to handle HTTP requests efficiently in PHP.

2. Mobile-First API Design

Mobile apps need efficient, lightweight data APIs. JSON API allows developers to fine-tune the payloads with features like sparse fieldsets, improving performance for mobile devices.

Example: Fetching Specific Fields (Sparse Fieldsets)

GET /jsonapi/node/article?fields[node]=title,body

This request only retrieves the title and body fields for an article node, minimizing the payload size.

3. Microservices with JSON API
Modern applications rely on distributed microservices that need to communicate efficiently. JSON API’s consistency across services makes it ideal for this architecture.

Example: Filtering and Sorting Resources

GET /jsonapi/node/article?filter[status]=1&sort=-created

This query retrieves all published articles (status=1) and sorts them by creation date in descending order.

 

Code Walkthrough: Implementing a Simple JSON API Client

Let’s say you need to interact with a service using JSON API. Here’s an example in JavaScript using fetch to interact with the API:

Example: Fetching Data Using Fetch API

async function fetchArticles() {
  const response = await fetch('https://example.com/jsonapi/node/article');
  if (response.ok) {
    const data = await response.json();
    data.data.forEach(article => {
      console.log(article.attributes.title);
    });
  } else {
    console.error('Error fetching articles');
  }
}

fetchArticles();

This simple JavaScript example uses the fetch API to get articles from a JSON API-enabled Drupal backend. The result is parsed, and the titles of the articles are logged to the console.

 

Advanced Usage of JSON API

1. Bulk Requests in JSON API
One of the strengths of JSON API is its ability to handle bulk updates and create multiple records in one request.

Example: Bulk Creation of Nodes

POST /jsonapi/node/article HTTP/1.1
Content-Type: application/vnd.api+json

{
  "data": [
    {
      "type": "node--article",
      "attributes": {
        "title": "First Article"
      }
    },
    {
      "type": "node--article",
      "attributes": {
        "title": "Second Article"
      }
    }
  ]
}

This request creates two articles in one HTTP request, demonstrating how JSON API reduces overhead when dealing with bulk data.

2. Pagination in JSON API
When fetching large datasets, you need pagination to prevent overwhelming clients or servers.

Example: Pagination Using JSON API

GET /jsonapi/node/article?page[limit]=10&page[offset]=0

This request fetches 10 articles starting from the first article (offset = 0).

3. Handling Relationships in JSON API

One powerful feature of JSON API is how it handles relationships between resources. Here’s an example of including related data in one request.

Example: Including Related Resources (e.g., Author Information)

GET /jsonapi/node/article?include=author

This query retrieves the articles along with their related author data, reducing the need for additional API calls.

 

Real-World Use Cases

  1. Headless eCommerce: JSON API is heavily used in headless eCommerce architectures to deliver content from the backend (e.g., product catalogs, user data) to various front-end platforms like websites or mobile apps.

Example: Fetching Products

GET /jsonapi/commerce_product/default

This call retrieves product data from an eCommerce system powered by JSON API.

  2. Content Aggregators and Media Services: Applications like Netflix and Spotify use APIs to stream vast amounts of content across devices. JSON API’s efficient filtering and relationship handling streamline this process.
  3. Real-Time Dashboards: JSON API’s ability to filter, paginate, and include related data makes it ideal for real-time analytics dashboards, fetching data efficiently from multiple microservices.

 

Use Cases: How JSON API Powers Modern Solutions

  1. eCommerce: JSON API is used in headless eCommerce solutions to decouple the front-end (React, Angular) from the backend, providing high flexibility in designing shopping experiences.
  2. Content Distribution: Companies like Netflix and Spotify use APIs to distribute content to multiple platforms. JSON API helps in efficiently managing and delivering large amounts of media content.
  3. Third-Party Integrations: With services like Zapier or IFTTT, JSON API helps integrate different platforms by providing a consistent structure for reading and writing data.

 

Conclusion: JSON API as a Driving Force in Modern API Development
In a world where API integration plays a key role in every application, JSON API is emerging as a specification that simplifies and optimizes data communication. Whether you’re building a mobile app, designing microservices, or working with headless CMS architectures, JSON API provides the structure and efficiency needed for modern application development.
Programmatically, JSON API ensures fewer HTTP requests, compact responses, and better performance. Its rise in popularity reflects a growing need for standardization in an increasingly API-driven landscape.

Using PyTest with Selenium for Efficient Test Automation
https://blogs.perficient.com/2024/11/04/using-pytest-with-selenium-for-efficient-test-automation/
Mon, 04 Nov 2024

In our previous post, we explored the basics of Selenium with Python, covering the introduction, some pros and cons, and a basic program to get you started. In this post, we’ll delve deeper into the world of test automation by integrating Selenium with PyTest, a popular testing framework in Python. PyTest makes it easier to write simple and scalable test cases, which is crucial for maintaining a robust test suite.


What is PyTest?

PyTest is a testing framework that allows you to write simple yet scalable test cases. It is widely used due to its easy syntax, powerful features, and rich plugin architecture. PyTest can run tests, handle setup and teardown, and integrate with various other tools and libraries.

Why Use PyTest with Selenium?

  • Readable and Maintainable Tests: PyTest’s syntax is clean and concise, making tests easier to read and maintain.
  • Powerful Assertions: PyTest provides powerful assertion introspection, which gives more detailed error messages.
  • Fixtures: PyTest fixtures help in setting up preconditions for your tests and can be reused across multiple test functions.
  • Extensible: PyTest’s plugin architecture allows for easy extension and customization of test runs.

Setting Up PyTest with Selenium

Prerequisites

Before you begin, ensure you have the following installed:

  • Python (>= 3.6)
  • Selenium (pip install selenium)
  • PyTest (pip install pytest)

You also need a WebDriver for the browser you intend to automate. For instance, ChromeDriver for Google Chrome.

Basic Test Setup

  • Project Structure

Create a directory structure for your test project:

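One possible layout (names are illustrative):

selenium-pytest-demo/
    tests/
        conftest.py        # shared fixtures, e.g. the WebDriver
        test_example.py    # test cases
    requirements.txt       # selenium, pytest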

  • Writing Your First Test

In the test_example.py file, write a simple test case:

This simple test opens Google and checks if the page title contains “Google”.

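A minimal version of that test might look like this (assuming Google Chrome with a compatible driver available on the system):

# test_example.py
from selenium import webdriver

def test_google_title():
    driver = webdriver.Chrome()
    driver.get("https://www.google.com")
    assert "Google" in driver.title
    driver.quit()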

  • Using PyTest Fixtures

Fixtures in PyTest are used to manage setup and teardown. Create a fixture in the conftest.py file:

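A typical driver fixture might look like this sketch:

# conftest.py
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    # Setup: start the browser before each test
    driver = webdriver.Chrome()
    yield driver
    # Teardown: close the browser after the test finishes
    driver.quit()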

Now, update the test to use this fixture:

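The test can then simply request the fixture by name:

# test_example.py
def test_google_title(driver):
    driver.get("https://www.google.com")
    assert "Google" in driver.title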

This approach ensures that the WebDriver setup and teardown are handled cleanly.

  • Running Your Tests

To run your tests, navigate to the project directory and use the following command:

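In its simplest form this is just the pytest runner (the -v flag for verbose output is optional):

pytest -v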

PyTest will discover and run all the test functions prefixed with test_.

Advanced Usage

  • Parameterized Tests

You can run a test with different sets of data using @pytest.mark.parametrize:

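A sketch of a parameterized test using the driver fixture above (the URLs and expected titles are only examples):

import pytest

@pytest.mark.parametrize("url, expected_title", [
    ("https://www.google.com", "Google"),
    ("https://www.python.org", "Python"),
])
def test_page_title(driver, url, expected_title):
    driver.get(url)
    assert expected_title in driver.title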

  • Custom PyTest Plugins

Extend PyTest functionalities by writing custom plugins. For example, you can create a plugin to generate HTML reports or integrate with CI/CD tools.

  • Headless Browser Testing

Run tests in headless mode to speed up execution:

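One way to do this is to pass Chrome options into the fixture (a sketch; option names can vary between browser versions):

# conftest.py
import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

@pytest.fixture
def driver():
    options = Options()
    options.add_argument("--headless")  # run Chrome without opening a window
    driver = webdriver.Chrome(options=options)
    yield driver
    driver.quit()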

Conclusion

Integrating PyTest with Selenium not only enhances the readability and maintainability of your tests but also provides powerful features to handle complex test scenarios. By using fixtures, parameterization, and other advanced features, you can build a robust and scalable test suite.

In the next post, we will explore the Page Object Model (POM) design pattern, which is a crucial technique for managing large test suites efficiently.

 

]]>
https://blogs.perficient.com/2024/11/04/using-pytest-with-selenium-for-efficient-test-automation/feed/ 0 370819
Empowering Transformation Through Global Expertise https://blogs.perficient.com/2024/10/31/empowering-transformation-through-global-expertise/ https://blogs.perficient.com/2024/10/31/empowering-transformation-through-global-expertise/#respond Thu, 31 Oct 2024 20:29:54 +0000 https://blogs.perficient.com/?p=371328

Companies need more than cost-efficiency to stand out—they need transformative solutions, access to great talent, and continuous innovation. Perficient’s global delivery model is built to provide all of these, offering flexibility and expertise across 40+ locations worldwide. We empower businesses to grow and achieve meaningful outcomes.

In a conversation with Kevin Sheen, Perficient’s Senior Vice President of Global Delivery, he shares his insights on how our global delivery model creates a competitive advantage for businesses worldwide. Below, we’ve highlighted key moments from this conversation, including both on-video insights and additional details shared off-camera. These highlights dive into what makes Perficient’s approach unique, from the benefits of diverse global talent to the strategies we employ to drive transformative results.

Why Companies Can’t Afford to Overlook Global Expertise

Global delivery isn’t just a choice anymore; it’s a necessity. While cost savings are a benefit, the real power lies in access to top talent from around the world. By tapping into diverse expertise, companies gain new perspectives that fuel innovation. That diversity in skills, ideas, and creativity is the key to driving transformational outcomes.

How Has Perficient’s Model Evolved to Meet Global Needs?

Perficient has grown from a niche digital consultancy to a global leader, with 7,000+ experts across the U.S., Latin America, India, and Europe. Our ability to manage complex projects has evolved through the continuous investment in Agile methodologies and tools that enhance collaboration across borders. This evolution allows us to tailor our approach to each client’s unique needs, even adapting to various methodologies like PMO waterfall when necessary, without sacrificing quality or efficiency.

What Sets Perficient’s Global Model Apart from Others?

Our people are at the heart of what we do, regardless of location. We invest in our teams globally, supporting their growth and creating an inclusive environment where their talents are recognized and expanded. At Perficient, this investment isn’t just a slogan—we back it up with opportunities that let team members expand into leadership roles and engage with projects that broaden their expertise.

 

 

How Do Managed Services and Quality Assurance Enhance Perficient’s Global Model?

At Perficient, QA isn’t an afterthought—it’s embedded in every phase. We streamline project timelines by integrating cutting-edge QA automation and AI-driven processes while ensuring consistent, high-quality outcomes. Our managed services allow clients to focus on innovation while we handle maintenance and optimization, even if another provider originally developed the solution.

How Does Perficient Address Cost Pressures?

Perficient brings the right talent, in the right place, at the right time. Whether it’s embedding specialists within a client’s local team or supporting remotely with time-zone-aligned talent, we focus on matching skills and resources to project needs dynamically. Projects have their own lifecycle, and our model allows us to adjust based on those dynamics—optimizing resources and reducing costs without compromising impact.

Why Is Talent Diversity Key to Scaling Operations Globally?

Global operations thrive on diversity—not just in skills but in perspectives. An inclusive, globally integrated team brings fresh ideas and insights that can be pivotal. We actively create environments where the diverse perspectives of each team member are valued, resulting in solutions that are culturally relevant and forward-thinking.

What Real-World Impact Has Perficient Delivered for Clients?

Our deep industry expertise, technology partnerships, and strategic leadership give clients a distinct edge. From streamlining processes to transferring knowledge and training client teams, we ensure lasting benefits that continue well after project completion. By partnering with Perficient, clients don’t just stay competitive; they redefine what’s possible in their industries.

The bottom line is this: Success goes beyond managing costs. It’s about leveraging top talent precisely where and when it’s needed to drive real transformation. Perficient’s global delivery model enables businesses to innovate, scale, and achieve impactful results with tailored solutions and a team passionate about making a difference.

Ready to see what our global expertise can do for your business? Connect with us to explore how we can empower your journey toward growth and innovation.

]]>
https://blogs.perficient.com/2024/10/31/empowering-transformation-through-global-expertise/feed/ 0 371328
Exploring Next.js Conf by Vercel: New Features in Version 15 and Their Significance https://blogs.perficient.com/2024/10/29/next-js-conf-whats-new-in-version-15-and-what-does-it-mean-for-us/ https://blogs.perficient.com/2024/10/29/next-js-conf-whats-new-in-version-15-and-what-does-it-mean-for-us/#respond Tue, 29 Oct 2024 22:31:42 +0000 https://blogs.perficient.com/?p=371211

I’ve been working with Next.js for quite a while and have been watching its development with interest all this time. Last week I was happy to attend Next.js Conf in San Francisco. Perficient was proud to sponsor the event, allowing us to showcase our Sitecore Practice alongside my colleague David Lewis from the Optimizely Practice.

Vercel released version 15 of the Next.js framework. It was a truly innovative day, and I’d like to share my takeaways from the event about this new version.

Vercel has seriously worked on its mistakes

In the past, the Next.js team made some questionable decisions, such as rushing releases and maintaining rigid opinions despite community feedback. This included changes like rewriting the fetch API, implementing hard caching, and introducing numerous bugs, among other issues, while once again overlooking community requests. It took nearly a year for the team to recognize that these approaches were ineffective and to begin addressing the underlying problems. With the release of version 15, there is finally a sense that the framework is truly meeting the needs of its community, much as it has successfully done in previous iterations.

React 19

We are currently facing an unusual situation. Over six months have passed since the release candidate of React.js was introduced, yet the stable version has not been published. This delay directly impacts Next.js, as the two frameworks are closely intertwined. As a result, Next.js is currently utilizing the release candidate version of React.js, but this is only partially accurate. In reality, Next.js employs two different React.js configurations:

  • React.js 19 Canary for the App Router
  • React.js 18 for the Pages Router

Interestingly, there was an initial plan to integrate the React.js 19 version for the Pages Router as well. However, these changes were later rolled back. Full support for React.js version 19 is expected once the stable release is officially launched.

Form component

This Next.js innovation is in fact the already familiar form from react-dom, but with some improvements. You benefit from the Next.js implementation primarily in cases where a successful form submission involves a transition to another page. In that case, the loading.tsx and layout.tsx abstractions for the target page are preloaded.

import Form from 'next/form'
 
export default function Page() {
  return (
    <Form action="/search">
      {/* On submission, the input value will be appended to 
          the URL, e.g. /search?query=abc */}
      <input name="query" />
      <button type="submit">Submit</button>
    </Form>
  )
}


Developer Experience (DX)

When discussing Next.js, the developer experience (DX) is impossible to overlook. Beyond the typical “Faster, Higher, Stronger” claims, Next.js has introduced several meaningful improvements that significantly enhance DX:

  1. Long-awaited support for ESLint v9. Until now, Next.js did not support ESLint v9, even though both ESLint v8 and some of its own dependencies were already marked as deprecated. Because of that, developers were essentially forced to keep deprecated packages.
  2. The error interface in next.js – which is already clear and convenient – was slightly improved:
    1. Added a button to copy the call stack;
    2. Added the ability to open the source of the error in the editor on a specific line.
  3. Added Static Indicator – an element in the corner of the page showing that the page is built in static mode. The pre-built page indicator has been with us for years so it was slightly updated and adapted for App Router.
  4. Also added a directory with debug information – .next/diagnostics. That’s where one can find information about the build process and all errors that occur (which sometimes helps when troubleshooting problems).

Versioning the documentation

One particularly valuable enhancement is the ability to access different versions of the documentation. But why is this so crucial for the developer experience?

Updating Next.js to accommodate major changes can be a challenging and time-consuming task. As a result, older versions like Next.js 12 and 13 remain widely used, with over 2 million and 4 million monthly downloads respectively. Developers working with these versions need documentation that is specific to their setup, as the latest documentation may include significant changes that are not compatible with their projects. By providing versioned documentation, Next.js ensures that developers have the reliable resources they need to maintain and update their applications.

Turbopack

Probably the biggest news:

  • Turbopack is now fully complete for development mode! “100% of existing tests ran with no errors for Turbopack”
  • Now the Turbo team is working on the production version, progressively going through the tests and covering them all (currently about 96%).

Turbopack introduces a range of new features that enhance its functionality and performance:

  1. Setting a memory limit for a Turbopack build;
  2. Tree shaking (in other words, removal of unused code). Both options are configured via next.config.js:

    const nextConfig = {
      experimental: {
        turbo: {
          treeShaking: true,
          memoryLimit: 1024 * 1024 * 512 // bytes (512MB)
        },
      },
    }

    These Turbopack changes alone reduced memory usage by 25-30% and sped up heavy page assembly by 30-50%.

  3. Fixed significant issues with styles. In version 14, style ordering often broke during navigation, so style A would end up above style B and then below it. This changed their priority and, accordingly, the elements looked different.
  4. The next long-awaited improvement: you can now write the configuration in TypeScript, and the file is correspondingly named next.config.ts:
    import type { NextConfig } from 'next';
     
    const nextConfig: NextConfig = {
      /* here goes you config */
    };
     
    export default nextConfig;

    Same strongly-typed syntax as usual, but very nice to have, finally!

  5. Another interesting innovation is retrying failed static page generation before actually failing the build. If a page fails to build due to connectivity issues, it will be retried:
    const nextConfig = {
      experimental: {
        staticGenerationRetryCount: 3,
      },
    }

Framework API changes

Framework API changes are often among the most challenging parts of updating Next.js, and version 15 is no exception with its critical changes.

One significant change in version 15 is the transition of several framework APIs to asynchronous operations. This shift particularly affects the core framework-centric abstractions, including:

  • cookies,
  • headers,
  • params and
  • searchParams (together, these are known as the Dynamic APIs).
import { cookies } from 'next/headers';
 
export async function AdminPanel() {
  const cookieStore = await cookies();
  const token = cookieStore.get('token');
  // ...
}

The changes are big indeed, but the Next.js team suggests one could update to the new APIs automatically by calling their codemod:

npx @next/codemod@canary next-async-request-api .

Caching

In my opinion, this is where the most important changes have happened. And the most important news is that caching is now disabled by default!

Let’s take a look on what’s changed:

  • Actually, fetch now uses the no-store value by default instead of force-cache;
  • API routes use force-dynamic mode by default (previously it was force-static by default);
  • Caching in the client router has also been disabled. Previously, once a client visited a page within a path, it was cached on the client and remained in that state until a full page reload. Now the current page is loaded fresh each time. This functionality can be altered via next.config.js:
    const nextConfig = {
      experimental: {
        staleTimes: {
          dynamic: 30 // defaults to 0
        },
      },
    }
  • Moreover, even if client caching is enabled, it will most likely be refreshed at the right time, namely when the corresponding page cache on the server expires.
  • Server components are now cached in development mode. Due to this, updates in development are faster.
  • Following the above, one can reset the cache by just reloading a page or can also completely disable the functionality via next.config.js:
    const nextConfig = {
      experimental: {
        serverComponentsHmrCache: false, // defaults to true
      },
    }
  • You can now control the “Cache-Control” header, which was previously always overwritten with the internal values of Next.js. This caused caching artifacts when using a CDN;
  • next/dynamic caches modules for reuse rather than loading them again each time;

Partial Prerendering (PPR)

This could be the main teaser of the release. PPR is a page assembly mode in which Next.js prerenders and caches as much of the route as possible, while some individual elements are built on each request. In this case, the pre-assembled part is sent to the client immediately, and the remaining parts are loaded dynamically.

PPR diagram from the official documentation: a partially prerendered product page with static navigation and product information, and a dynamic cart and recommended products.

The feature already existed six months ago in the release candidate as an experimental API. Previously, PPR was enabled for the entire project, but now one can enable it per segment (layout or page):

export const experimental_ppr = true

Another change is Partial Fallback Prerendering (PFPR). Due to this improvement, the pre-assembled part is immediately sent to the client, and the rest is loaded dynamically. In the meantime, a fallback component is shown in place of the dynamic elements.

import { Suspense } from "react"
import { StaticComponent, DynamicComponent } from "@/app/ui"
 
export const experimental_ppr = true
 
export default function Page() {
  return (
     <>
         <StaticComponent />
         <Suspense fallback={...}>
             <DynamicComponent />
         </Suspense>
     </>
  );
}

Instrumentation

Instrumentation arrives as a stable API. The instrumentation file allows users to hook into the Next.js server lifecycle. It works universally with all Pages Router and App Router segments.

Currently, instrumentation supports hooks:

  • register – called once when the Next.js server is initialized. Can be used for integration with monitoring libraries (OpenTelemetry, Datadog) or for project-specific tasks.
  • onRequestError – a new hook called on all server errors. Can be used for integration with error tracking libraries (Sentry).

Interceptor

Interceptor is route-level middleware. It feels something like the existing full-fledged middleware, but unlike it:

  • Can work in node.js runtime;
  • Works on the server, therefore has access to the environment and a single cache;
  • Can be added multiple times and is inherited through nesting (like middleware worked when it was in beta);
  • Works, among other things, for server functions.

In this case, when creating an interceptor file, all pages underneath the tree become dynamic.

  • If we keep Vercel in mind, now middleware will be effective as a primary simple check at the CDN level (so that it could immediately return redirects if the request is not allowed), and interceptors will work on the server, doing full checks and complex operations.
  • For self-hosting, apparently, such a division will be less effective since both abstractions run on the server. Perhaps it will be enough to use only the interceptor.

Welcome v0 – Vercel’s new Generative UI


Last but not least, Vercel introduces Generative UI (v0), a groundbreaking feature that combines the best practices of frontend development with the full potential of generative AI. It is currently in Beta, and I had the opportunity to experience Generative UI firsthand at the event. I was thrilled to see how powerful and intuitive it is: from the very first prompt, it successfully generated the configuration for Sitecore!


I am thrilled to conclude that our toolbelt has been enriched with new, practical tools that enable us to deliver exceptional solutions effortlessly, eliminating the need to reinvent the wheel.

Well done, Vercel! Thanks to everyone building this wonderful ecosystem.


]]>
https://blogs.perficient.com/2024/10/29/next-js-conf-whats-new-in-version-15-and-what-does-it-mean-for-us/feed/ 0 371211