Development Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/innovation-product-development/development/

Helpful Git Aliases To Maximize Developer Productivity
https://blogs.perficient.com/2025/05/14/helpful-git-aliases-to-maximize-developer-productivity/
Wed, 14 May 2025 16:14:25 +0000
Git is a powerful tool, but it can sometimes be overwhelming with the number of commands required to perform common tasks. If you’ve ever found yourself typing out long, complex Git commands and wondered if there’s a faster way to get things done, you’re not alone. One way to streamline your workflow and reduce repetitive typing is by using Git aliases. These are shorthand commands that allow you to perform lengthy Git operations with just a few characters.
 
In this post, we’ll explore some useful Git aliases that can help you maximize your productivity, speed up common workflows, and maintain a clean Git history.

How To Add Aliases To Your Git Config File

To start using Git aliases, you need to add them to your .gitconfig file. This file is typically located in your home directory, and it contains various configurations for your Git setup, including user details and aliases.
 
Here’s how to add aliases:
    1. Open the .gitconfig file:
      • On Linux/MacOS, the .gitconfig file is typically located in your home directory (~/.gitconfig).
      • On Windows, it is located at C:\Users\<YourUsername>\.gitconfig.
    2. Edit the .gitconfig file: You can manually add aliases to the [alias] section. If this section doesn’t already exist, simply add it at the top or bottom of the file. Below is an example of how your .gitconfig file should look once you add the aliases that we will cover in this post:
      [alias]
        # --- Branching ---
        co = checkout
        cob = checkout -b
        br = branch
      
        # --- Working Directory Status ---
        st = status
        df = diff
      
        # --- Commit & Push ---
        amod = "!f() { git add -u && git commit -m \"$1\" && git push; }; f"
        acp = "!f() { git add -A && git commit -m \"$1\" && git push; }; f"
      
        # --- Stash ---
        ss = stash
        ssd = stash drop
      
        # --- Reset / Cleanup ---
        nuke = reset --hard
        resetremote = !git reset --hard origin/main
      
        # --- Rebase Helpers ---
        rbc = rebase --continue
        rba = rebase --abort
        rbi = rebase -i
      
        # --- Log / History ---
        hist = log --oneline --graph --decorate --all
        ln = log --name-status
      
        # --- Fetch & Sync ---
        fa = fetch --all --prune
        pullr = pull --rebase
        up = !git fetch --prune && git rebase origin/$(git rev-parse --abbrev-ref HEAD)
        cp = cherry-pick
    3. Save and close the file: Once you’ve added your aliases, save the file, and your new aliases will be available the next time you run Git commands in your terminal.
    4. Test the aliases: After saving your .gitconfig file, you can use your new aliases immediately. For example, try using git co to switch branches or git amod "your commit message" to commit your changes.
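If you prefer not to edit .gitconfig by hand, the same entries can be written with git config itself. For example, for a few of the aliases above:

```shell
# Write aliases straight into the global config (~/.gitconfig)
git config --global alias.co  checkout
git config --global alias.st  status
git config --global alias.cob 'checkout -b'

# Verify what was stored
git config --global alias.cob   # prints: checkout -b
```

This is handy in setup scripts, since it is idempotent and avoids hand-editing the file.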

Explanation of the Aliases

I find these to be very helpful in my day-to-day work as a web developer. Here are some explanations of the aliases that I have added:

Branching

co = checkout

When switching between branches, this alias saves you from typing git checkout <branch_name>. With co, switching is as simple as:
git co <branch_name>
 

cob = checkout -b

Creating and switching to a new branch is easier with this alias. Instead of git checkout -b <new_branch_name>, simply use: 
git cob <new_branch_name>
 

br = branch

If you need to quickly list all branches, whether local or remote, this alias is a fast way to do so:
git br
 

Working Directory Status

st = status

One of the most frequently used commands in Git, git status shows the current state of your working directory. By aliasing it as st, you save time while checking what’s been staged or modified:
git st
 

df = diff

If you want to view the changes you’ve made compared to the last commit, use df for a quick comparison:
git df
 

Commit and Push

amod = "!f() { git add -u && git commit -m \"$1\" && git push; }; f"

For quick commits, this alias allows you to add modified and deleted files (but not new untracked files), commit, and push all in one command! It’s perfect for when you want to keep things simple and focus on committing changes:
git amod "Your commit message"
 

acp = "!f() { git add -A && git commit -m \"$1\" && git push; }; f"

Similar to amod, but this version adds all changes, including untracked files, commits them, and pushes to the remote. It’s ideal when you’re working with a full set of changes:
git acp "Your commit message"
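The difference between -u and -A is easiest to see in a scratch repository (the file names below are made up for the demo, and the push step is omitted since there is no remote):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init

echo v1 > tracked.txt
git add tracked.txt
git -c user.email=a@b -c user.name=demo commit -q -m "add tracked.txt"

echo v2 > tracked.txt      # modify a tracked file
echo new > untracked.txt   # create a brand-new, untracked file

git add -u                 # the "amod" add: stages only the tracked change
git status --porcelain     # M  tracked.txt
                           # ?? untracked.txt

git add -A                 # the "acp" add: now the new file is staged too
git status --porcelain     # M  tracked.txt
                           # A  untracked.txt
```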
 

Stash

ss = stash

When you’re in the middle of something but need to quickly save your uncommitted changes to come back to later, git stash comes to the rescue. With this alias, you can stash your changes with ease:
git ss
 

ssd = stash drop

Sometimes, after stashing, you may want to drop the stashed changes. With ssd, you can easily discard a stash:
git ssd
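In a scratch repository, the stash round trip looks like this (note that git stash drop with no argument discards the most recent stash, stash@{0}):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init

echo wip > notes.txt
git add notes.txt

git -c user.email=a@b -c user.name=demo stash   # what "git ss" expands to
git stash list                                  # stash@{0}: WIP on <branch>: ...
git stash drop                                  # what "git ssd" expands to
```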
 

Reset / Cleanup

nuke = reset --hard

This alias will discard all local changes and reset your working directory to the last commit. It’s especially helpful when you want to start fresh or undo your recent changes:
git nuke
 

resetremote = !git reset --hard origin/main

When your local branch has diverged from the remote and you want to match it exactly, this alias will discard local changes and reset to the remote branch. It’s a lifesaver when you need to restore your local branch to match the remote:
git resetremote
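One caveat: the alias above is hardcoded to origin/main. If your branches track same-named branches on origin, a branch-agnostic variant (a suggested tweak, not part of the original set) resolves the current branch name first:

```ini
[alias]
    # Reset the current branch to its same-named counterpart on origin
    resetremote = "!git reset --hard origin/$(git rev-parse --abbrev-ref HEAD)"
```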
 

Rebase Helpers

rbc = rebase --continue

If you’re in the middle of a rebase and have resolved any conflicts, git rebase --continue lets you proceed. The rbc alias lets you continue the rebase without typing the full command: 
git rbc
 

rba = rebase --abort

If something goes wrong during a rebase and you want to abandon the process, git rebase --abort will undo all changes from the rebase. This alias makes it quick and easy to abort a rebase:
git rba
 

rbi = rebase -i

For an interactive rebase, where you can squash or reorder commits, git rebase -i is an essential command. The rbi alias will save you from typing the whole command:
git rbi
 

Log / History

hist = log --oneline --graph --decorate --all

For a good-looking, concise view of your commit history, this alias combines the best of git log. It shows commits in a graph format, with decoration to show branch names and tags, all while keeping the output short:
git hist
 

ln = log --name-status

When you need to see what files were changed in each commit (with their status: added, modified, deleted), git log --name-status is invaluable. The ln alias helps you inspect commit changes more easily:
git ln
 

Fetch and Sync

fa = fetch --all --prune

Fetching updates from all remotes and cleaning up any deleted branches with git fetch --all --prune is essential for keeping your remotes organized. Note that Git silently ignores aliases that hide a built-in command, so this alias needs its own name (an alias literally named fetch would never run); fa makes the task a single command:
git fa
 

pullr = pull --rebase

When pulling changes from the remote, a rebase is often better than a merge: it keeps your history linear and avoids unnecessary merge commits. The pullr alias performs a pull with a rebase:
git pullr

up = !git fetch --prune && git rebase origin/$(git rev-parse --abbrev-ref HEAD)

This alias is a great shortcut if you want to quickly rebase your current branch onto its remote counterpart. It first fetches the latest updates from the remote and prunes any deleted remote-tracking branches, ensuring your local references are clean and up to date. Then it rebases your branch onto the corresponding remote, keeping your history in sync:
git up
 

cp = cherry-pick

Cherry-picking allows you to apply a specific commit from another branch to your current branch. This alias makes it easier to run:
git cp <commit-hash>

Final Thoughts

By setting up these Git aliases, you can reduce repetitive typing, speed up your development process, and make your Git usage more efficient. Once you’ve incorporated a few into your routine, they become second nature. Don’t hesitate to experiment and add your own based on the commands you use most. Put these in your .gitconfig file today and start enjoying the benefits of a more productive workflow!
Good Vibes Only: A Vibe Coding Primer
https://blogs.perficient.com/2025/05/12/good-vibes-only-a-vibe-coding-primer/
Mon, 12 May 2025 17:35:28 +0000

In the ever-evolving landscape of software development, new terms and methodologies constantly emerge, reshaping how we think about and create technology. Recently, a phrase has been buzzing through the tech world, sparking both excitement and debate: “vibe coding.” While the idea of coding based on intuition or a “feel” isn’t entirely new, the term has gained significant traction and a more specific meaning in early 2025, largely thanks to influential figures in the AI space.

This article will delve into what “vibe coding” means today, explore its origins and core tenets, describe a typical workflow in this new paradigm, and discuss its potential benefits and inherent challenges. Prepare to look beyond the strictures of traditional development and into a more fluid, intuitive, and AI-augmented future.

What Exactly Is Vibe Coding? The Modern Definition

The recent popularization of “vibe coding” is strongly associated with Andrej Karpathy, a co-founder of OpenAI and former AI leader at Tesla. In early 2025, Karpathy described “vibe coding” as an approach that heavily leverages Large Language Models (LLMs). In this model, the developer’s role shifts from meticulously writing every line of code to guiding an AI with natural language prompts, descriptions, and desired outcomes—essentially, conveying the “vibe” of what they want to achieve. The AI then generates the corresponding code.

As Karpathy put it (paraphrasing common interpretations from early 2025 discussions), it’s less about traditional coding and more about a conversational dance with the AI:

“You see things, say things, run things, and copy-paste things, and it mostly works.”

This points to a future where the barrier between idea and functional code becomes increasingly permeable, with the developer acting more as a conductor or a curator of AI-generated software components.

So, is this entirely new? Yes and no.

  • The “New”: The specific definition tying “vibe coding” to the direct, extensive use of advanced LLMs like GitHub Copilot’s agent mode or similar tools is a recent development (as of early 2025). It’s about a human-AI symbiosis where the AI handles much of the syntactical heavy lifting.
  • The “Not So New”: The underlying desire for a more intuitive, less rigidly structured coding experience—coding by “feel” or “flow”—has always been a part of developer culture. Programmers have long talked about being “in the zone,” rapidly prototyping, or using their deep-seated intuition to solve problems, especially in creative coding, game development, or initial exploratory phases. This older, more informal notion of “vibe coding” can be seen as a spiritual precursor. Today’s “vibe coding” takes that innate human approach and supercharges it with powerful AI tools.

Therefore, when we talk about “vibe coding” today (in mid-2025), we’re primarily referring to this AI-assisted paradigm. It’s about effectively communicating your intent—the “vibe”—to an AI, which then translates that intent into code. The focus shifts from syntax to semantics, from meticulous construction to intuitive direction.

The Core Tenets of (AI-Augmented) Vibe Coding

Given this AI-centric understanding, the principles of vibe coding look something like this:

  1. Intuition and Intent as the Primary Driver

    The developer’s main input is their understanding of the problem and the desired “feel” or functionality of the solution. They translate this into natural language prompts or high-level descriptions for the AI. The “how” of the code generation is largely delegated.

  2. Prompt Engineering is Key

    Your ability to “vibe” effectively with the AI depends heavily on how well you can articulate your needs. Crafting clear, concise, and effective prompts becomes a critical skill, replacing some traditional coding skills.

  3. Rapid Iteration and AI-Feedback Loop

    The cycle is: prompt -> AI generates code -> test/review -> refine prompt -> repeat. This loop is incredibly fast. You can see your ideas (or the AI’s interpretation of them) come to life almost instantly, allowing for quick validation or correction of the “vibe.”

  4. Focus on the “What” and “Why,” Less on the “How”

    Developers concentrate on defining the problem, the user experience, and the desired outcome. The AI handles much of the underlying implementation details. The “vibe” is about the end result and its characteristics, not necessarily the elegance of every single line of generated code (though that can also be a goal).

  5. Embracing the “Black Box” (to a degree)

    While reviewing AI-generated code is crucial, there’s an implicit trust in the AI’s capability to handle complex boilerplate or even entire functions. The developer might not always delve into the deepest intricacies of every generated snippet, especially if it “just works” and fits the vibe. This is also a point of contention and risk.

  6. Minimal Upfront Specification, Maximum Exploration

    Detailed, exhaustive spec documents become less critical for the initial generation. You can start with a fuzzy idea, prompt the AI, see what it produces, and iteratively refine the “vibe” and the specifics as you go. It’s inherently exploratory.

  7. Orchestration Over Manual Construction

    The developer acts more like an orchestrator, piecing together AI-generated components, guiding the overall architecture through prompts, and ensuring the different parts harmonize to achieve the intended “vibe.”

A Typical AI-Driven Vibe Coding Workflow

Let’s walk through what a vibe coding session in this AI-augmented era might look like:

  1. The Conceptual Spark

    An idea for an application, feature, or fix emerges. The developer has a general “vibe” of what’s needed – “I need a simple web app to track my reading list, and it should feel clean and modern.”

  2. Choosing the Right AI Tool

    The developer selects their preferred LLM-based coding assistant (e.g., an advanced mode of GitHub Copilot, Cursor Composer, or other emerging tools).

  3. The Initial Prompt & Generation

    The developer crafts an initial prompt.

    Developer:

    Generate a Python Flask backend for a reading list app. It needs a PostgreSQL database with a 'books' table (title, author, status, rating). Create API endpoints for adding a book, listing all books, and updating a book's status.

    The AI generates a significant chunk of code.

  4. Review, Test, and “Vibe Check”

    The developer reviews the generated code. Does it look reasonable? Do the core structures align with the intended vibe? They might run it, test the endpoints (perhaps by asking the AI to generate test scripts too).

    Developer (to self): “Okay, this is a good start, but the ‘status’ should be an enum: ‘to-read’, ‘reading’, ‘read’. And I want a ‘date_added’ field.”

  5. Refinement through Iterative Prompting

    The developer provides feedback and further instructions to the AI.

    Developer:

    Refactor the 'books' model. Change 'status' to an enum with values 'to-read', 'reading', 'read'. Add a 'date_added' field that defaults to the current timestamp. Also, generate a simple HTML frontend using Bootstrap for listing and adding books that calls these APIs.

    The AI revises the code and generates the new parts.

  6. Integration and Manual Tweaks (if necessary)

    The developer might still need to do some light manual coding to connect pieces, adjust styles, or fix minor issues the AI missed. The goal is for the AI to do the bulk of the work.

  7. Achieving the “Vibe” or Reaching a Milestone

    This iterative process continues until the application meets the desired “vibe” and functionality, or a significant milestone is reached. The developer has guided the AI to create something that aligns with their initial, perhaps fuzzy, vision.

This workflow is highly dynamic. The developer is in a constant dialogue with the AI, shaping the output by refining their “vibe” into increasingly specific prompts.
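The workflow above leaves the AI's actual output to the imagination. As a rough, framework-free sketch of what the refined 'books' model from step 5 might look like (a plain dataclass and Enum stand in for the Flask/SQLAlchemy model the prompts describe, so all class and field names here are illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Status(str, Enum):
    """Allowed reading states, per the refined prompt in step 5."""
    TO_READ = "to-read"
    READING = "reading"
    READ = "read"

@dataclass
class Book:
    """One entry of the 'books' table described in the prompts."""
    title: str
    author: str
    status: Status = Status.TO_READ
    rating: Optional[int] = None
    # 'date_added' defaults to the current timestamp, as step 5 requested
    date_added: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

book = Book(title="Dune", author="Frank Herbert")
print(book.status.value)  # -> to-read
```

Reviewing a generated model at this level of detail, enum values and defaults included, is exactly the kind of "vibe check" step 4 describes.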

Where AI-Driven Vibe Coding Shines (The Pros)

This new approach to coding offers several compelling advantages:

  • Accelerated Development & Prototyping: Generating boilerplate, standard functions, and even complex algorithms can be drastically faster, allowing for rapid prototyping and quicker MVP releases.
  • Reduced Cognitive Load for Routine Tasks: Developers can offload tedious and repetitive coding tasks to the AI, freeing up mental energy for higher-level architectural thinking, creative problem-solving, and refining the core “vibe.”
  • Lowering Barriers (Potentially): For some, it might lower the barrier to creating software, as deep expertise in a specific syntax might become less critical than the ability to clearly articulate intent.
  • Enhanced Learning and Exploration: Developers can quickly see how different approaches or technologies could be implemented by asking the AI, making it a powerful learning tool.
  • Focus on Creativity and Product Vision: By automating much of the rote coding, developers can spend more time focusing on the user experience, the product’s unique value, and its overall “vibe.”

The Other Side of the Vibe: Challenges and Caveats in the AI Era

Despite its promise, AI-driven vibe coding is not without its significant challenges and concerns:

  • Quality and Reliability of AI-Generated Code: LLMs can still produce code that is subtly flawed, inefficient, insecure, or simply incorrect. Thorough review and testing are paramount.
  • The “Black Box” Problem: Relying heavily on AI-generated code without fully understanding it can lead to maintenance nightmares and difficulty in debugging when things go wrong.
  • Security Vulnerabilities: AI models are trained on vast datasets, which may include insecure code patterns. Generated code could inadvertently introduce vulnerabilities. The “Bad Vibes Only” concern noted in some discussions highlights this risk.
  • Skill Atrophy and the Future of Developer Skills: Over-reliance on AI for core coding tasks could lead to an atrophy of fundamental programming skills. The skill set may shift towards prompt engineering and systems integration.
  • Bias and Homogenization: AI models can perpetuate biases present in their training data, potentially leading to less diverse or innovative solutions if not carefully guided.
  • Intellectual Property and Originality: Questions around the ownership and originality of AI-generated code are still being navigated legally and ethically.
  • Debugging “Vibes”: When the AI consistently misunderstands a complex “vibe” or prompt, debugging the interaction itself can become a new kind of challenge.
  • Not a Silver Bullet: For highly novel, complex, or performance-critical systems, the nuanced understanding and control offered by traditional, human-driven coding remain indispensable. Vibe coding may not be suitable for all types of software development.

Finding the Balance: Integrating Vibes into a Robust Workflow

The rise of AI-driven “vibe coding” doesn’t necessarily mean the end of traditional software development. Instead, it’s more likely to become another powerful tool in the developer’s arsenal. The most effective approaches will likely integrate the strengths of vibe coding—its speed, intuitiveness, and focus on intent—with the rigor, discipline, and deep understanding of established software engineering practices.

Perhaps “vibe coding” will be most potent in the initial phases of development: for brainstorming, rapid prototyping, generating initial structures, and handling common patterns. This AI-generated foundation can then be taken over by developers for refinement, security hardening, performance optimization, and integration into larger, more complex systems, applying critical thinking and deep expertise.

The future isn’t about replacing human developers with AI, but about augmenting them. The “vibe” is the creative human intent, and AI is becoming an increasingly powerful means of translating that vibe into reality. Learning to “vibe” effectively with AI—to communicate intent clearly, critically evaluate AI output, and seamlessly integrate it into robust engineering practices—will likely become a defining skill for the next generation of software creators.

So, as you navigate your coding journey, consider how you can harness this evolving concept. Whether you’re guiding an LLM or simply tapping into your own deep intuition, embracing the “vibe” might just unlock new levels of creativity and productivity. But always remember to pair that vibe with critical thinking and sound engineering judgment.

Three.js: The Future of 3D Web Development
https://blogs.perficient.com/2025/05/12/threejs-the-future-of-3d-web-development/
Mon, 12 May 2025 16:03:48 +0000

Nowadays, clients and users are more demanding. They want reactive, responsive, and user-friendly web pages where they can interact and “feel” an experience like in the real world. Here is where Three.js comes in, taking web development to the next level.  

Three.js

Three.js is a cross-browser JavaScript library and application programming interface (API), agnostic to the framework used, for creating and displaying animated 3D computer graphics in a web browser on top of the WebGL API. But why use it if we already have WebGL? The answer is complexity. WebGL is a low-level API, which means we need to do almost everything from scratch: light calculations, model loading, vector math, and much more. Three.js, on the other hand, does all of this for you.

It can do this because it is a high-level API: it abstracts away those details to make development easier and more developer-friendly, allowing us to focus on what really matters: productivity and quality. It offers a wide range of functionality, including but not limited to lights, materials, 3D models, cameras, and scenes, which are the features we used in this practical concept.

    A Real-Life Use Case

    To demonstrate the capabilities of this API, we looked for a real-life use case. After reviewing different options, we found it could improve the car-buying process. This process is often lengthy and difficult to complete online. Typically, a buyer must go directly to the dealership, which sounds fine until other issues appear: what if there is no dealership in my city? Or worse, what if the model is not available in my country yet? Does it make sense to wait for months just to find out whether I like the car, the color, or the interior? Absolutely not.

    How the Process was Improved

    After identifying areas for improvement, our objective became finding a way to add value both to the final user and to our customers/clients. We researched each role and its needs in this context. What the final user really needs is to see the car in its final state before purchasing, so they can make an informed decision; on the client side, they want to ensure the final user is satisfied with the product by providing a preview of what they will get once the purchase is complete.

    In the second iteration, the focus shifted to a customization experience where the user can change car features in real time: color, materials, car kits, interior color, and so on. This brought unique design needs, such as a new UI focused on customization and a new narrative so the user always knows where they are in the process.

    A Technical View of Three.js

    Materials

    Materials are mapped onto the 3D model itself, and different parts of the model can carry different materials, which we can then replace or modify with our own. Here is an example of how to do that.

    First, we load the model:

    const { nodes, materials } = useGLTF('/model/2015_bugatti_atlantic_-_concept_car.glb') as GLTFResult;

    Then we map the model materials with our own names, which allows us to identify easily which material refers to what in the model:

    const mappedMaterials = {
        carpet: materials.Bugatti_AtlanticConcept_2015BadgeA_Material,
        upholstery: materials.Bugatti_AtlanticConcept_2015Carbon1M_Material,
        grill: materials.Bugatti_AtlanticConcept_2015Grille1A_Material,
        zippers: materials.Bugatti_AtlanticConcept_2015Grille2A_Material,
        doorPanel: materials.Bugatti_AtlanticConcept_2015Grille4A_Material,
        carPaint: carPaint,
        grillDoor: materials.Bugatti_AtlanticConcept_2015Grille5A_Material,
        trunk: materials.Bugatti_AtlanticConcept_2015InteriorColourZoneA_Material,
        interior: materials.Bugatti_AtlanticConcept_2015InteriorA_Material,
        lights: materials.Bugatti_AtlanticConcept_2015LightA_Material,
        plate: materials.Bugatti_AtlanticConcept_2015ManufacturerPlateA_Material,
        belt: materials.PaletteMaterial003,
        frontGrill: materials.Bugatti_AtlanticConcept_2015TexturedA_Material,
        rims: materials.Bugatti_AtlanticConcept_2015_Wheel1A_3D_3DWheel1A_Material,
        brakes: materials.Bugatti_AtlanticConcept_2015_CallipersCalliperA_Zone_Material,
        borderWindows: materials.PaletteMaterial006,
        jointsChasis: materials.PaletteMaterial007,
        gloveHandle: materials.Bugatti_AtlanticConcept_2015Grille3A_Material,
    };

    Finally, we can edit each material as we wish:

    mappedMaterials.brakes.color = new THREE.Color(paint.color);
    mappedMaterials.rims.color = new THREE.Color(rimsPaint.color);
    mappedMaterials.jointsChasis.color = new THREE.Color(rimsPaint.color);

    Cameras and Scene

    Here we load the place where our 3D models are going to be displayed, which is what we call the Scene.

    <Environment
        files="/environment/rooftop_day_2k.hdr"
        ground={{ height, radius, scale }}
        environmentIntensity={0.7}
    />

    One thing to clarify: we said that Three.js is framework-agnostic, so you may wonder why the scene/environment looks like a React component. That is because we use a library that simplifies the integration with React, since our application is built with that framework. The library is called React Three Fiber, and we will have the opportunity to cover it in another blog post. For now, all you need to know is that it exposes Three.js functionality as React components, making the code easier to write and read.

    Then we load the 3D model and our cameras; in this case, we have two. In short, a camera defines the point of view from which the user looks into the scene.

    <BugattiCarOptimized scale={carScale} rotation-y={rotationY} />
    <PerspectiveCamera
        makeDefault={isExternalCamera}
        position={[
           externalPosition.cameraPositionX,
           externalPosition.cameraPositionY,
           externalPosition.cameraPositionZ,
        ]}
        near={cameraNear}
        far={cameraFar}
        rotation={[externalPosition.cameraRotateX, externalPosition.cameraRotateY, externalPosition.cameraRotateZ]}
    />

    Finally, we add our Lights. It is important to know that adding lights is a must-have; without them, all we will see is an empty black screen.

    <directionalLight position={[5, 10, 12]} intensity={1} castShadow shadow-mapSize={[1024, 1024]} />

    We pack all these items into the scene component just like in React, then everything should look like this:

    return (
        <>
          <Environment
            files="/environment/rooftop_day_2k.hdr"
            ground={{ height, radius, scale }}
            environmentIntensity={0.7}
          />
          <BugattiCarOptimized scale={carScale} rotation-y={rotationY} />
          <ContactShadows
            renderOrder={2}
            frames={1}
            resolution={1024}
            scale={shadowScale}
            blur={1}
            opacity={0.7}
            near={shadowNear}
            far={shadowFar}
            position={[0.2, 0, -0.05]}
          />
          <PerspectiveCamera
            makeDefault={isExternalCamera}
            position={[
              externalPosition.cameraPositionX,
              externalPosition.cameraPositionY,
              externalPosition.cameraPositionZ,
            ]}
            near={cameraNear}
            far={cameraFar}
            rotation={[externalPosition.cameraRotateX, externalPosition.cameraRotateY, externalPosition.cameraRotateZ]}
          />
          <PerspectiveCamera
            makeDefault={!isExternalCamera}
            position={[
              internalPosition.cameraPositionX,
              internalPosition.cameraPositionY,
              internalPosition.cameraPositionZ,
            ]}
            near={cameraNear}
            far={cameraFar}
            rotation={[internalPosition.cameraRotateX, internalPosition.cameraRotateY, internalPosition.cameraRotateZ]}
          />
    
          {showEffects && <BaseEffect />}
          {import.meta.env.DEV && (
            <>
              {/*<OrbitControls />*/}
              <Perf position="top-left" />
            </>
          )}
          <directionalLight position={[5, 10, 12]} intensity={1} castShadow shadow-mapSize={[1024, 1024]} />
        </>
      );

    That’s the big picture of the implementation. We skipped some details to keep the post from running too long, but the essentials are covered. Now comes the most exciting part.

    The result

    Next Steps

    The user needs to feel in control and not be overwhelmed by the number of options available. To that end, we want to include an AI agent that, based on the user’s answers in a short form, will create and customize a car for them, simplifying part of the process by providing either a base car to start from or a finished car that fits the user. Afterwards, the user will be able to download the customized vehicle or share the final 3D model with the sales team, along with the corresponding quote and purchase summary, so the sales team can get in touch as soon as possible.

    Written in collaboration with Miguel Naranjo, Sebastian Corrales, and Sebastian Castillo.

     

     

    How Agile Helps You Improve Your Agility
    https://blogs.perficient.com/2025/05/12/how-agile-helps-you-improve-your-agility/
    Mon, 12 May 2025 10:35:57 +0000

    The objective of this post is to explore how the Agile methodology enhances an individual’s agility. It highlights how understanding and implementing Agile principles, practices, and frameworks fosters adaptability, responsiveness, and continuous improvement.

    The goal is to demonstrate how adopting Agile practices enables teams and individuals to:

    • Effectively manage change
    • Increase collaboration
    • Streamline decision-making
    • Improve overall performance and flexibility in dynamic environments

    This study showcases the transformative power of Agile in driving greater efficiency and faster response times in both project management and personal development.

    Let’s Get Started

    In both professional and personal development, asking structured “WH” questions helps in gaining clarity and understanding. Let’s apply that approach to explore the connection between Agile and agility.

    What is Agile?

Agile is a mindset and a way of thinking, grounded in the Agile Manifesto and its core principles. It emphasizes:

• Flexibility
• Collaboration
• Customer feedback

over rigid planning and control.

    Initially popularized in project management and software development, Agile supports iterative progress and continuous value delivery.

    What is Agility?

    Agility in individuals refers to the ability to adapt and respond to change effectively and efficiently. It means adjusting quickly to:

    • Market conditions
    • Customer needs
    • Emerging technologies

    Agility involves:

    • Flexible processes
    • Quick decision-making
    • Embracing change and innovation

    Key Principles of Agile

    • Iterative Process – Work delivered in small, manageable cycles
    • Collaboration – Strong communication across teams
    • Flexibility & Adaptability – Open to change
    • Customer Feedback – Frequent input from stakeholders
    • Continuous Improvement – Learn and evolve continuously

    Why Agile?

    Every project brings daily challenges: scope changes, last-minute deliveries, unexpected blockers. Agile helps in mitigating these through:

    • Faster Delivery – Short iterations mean quicker output and release cycles
    • Improved Quality – Continuous testing, feedback, and refinements
    • Customer-Centric Approach – Ongoing engagement ensures relevance
    • Greater Flexibility – Agile teams quickly adapt to shifting priorities

    When & Where to Apply Agile?

The answer is simple: now and everywhere.
    Agile isn’t limited to a specific moment or industry. Whenever you experience challenges in:

    • Project delivery
    • Communication gaps
    • Changing requirements

    You can incorporate the Agile principles. Agile is valuable in both reactive and proactive problem-solving.

    How to Implement Agile?

    Applying Agile principles can be a game-changer for both individuals and teams. Here are practical steps that have shown proven results:

• Divide and Do – Break down large features into smaller, manageable tasks. Each task should result in a complete, functional piece of work.
    • Deliver Incrementally – Ensure that you deliver a working product or feature by the end of each iteration.
    • Foster Communication – Encourage frequent collaboration within the team. Regular interactions build trust and increase transparency.
    • Embrace Change – Be open to changing requirements. Agile values responsiveness to feedback, enabling better decision-making.
    • Engage with Customers – Establish feedback loops with stakeholders to stay aligned with customer needs.

    Agile Beyond Software

    While Agile originated in software development, its principles can be applied across a range of industries:

    • Marketing – Running campaigns with short feedback cycles
    • Human Resources – Managing performance and recruitment adaptively
    • Operations – Streamlining processes and boosting team responsiveness

    Agile is more than a methodology; it’s a culture of continuous improvement that extends across all areas of work and life.

    Conclusion

    Adopting Agile is not just about following a process but embracing a mindset. When effectively implemented, Agile can significantly elevate an individual’s and team’s ability to:

    • Respond to change
    • Improve performance
    • Enhance collaboration

    Whether in software, marketing, HR, or personal development, Agile has the power to transform how we work and grow.

    Common Errors When Using GraphQL with Optimizely https://blogs.perficient.com/2025/05/05/common-errors-when-using-graphql-with-optimizely/ https://blogs.perficient.com/2025/05/05/common-errors-when-using-graphql-with-optimizely/#respond Mon, 05 May 2025 17:00:55 +0000 https://blogs.perficient.com/?p=380453

    What is GraphQL?

GraphQL is a powerful query language for APIs that allows clients to request only the data they need. Optimizely leverages GraphQL to serve content to your platform-agnostic presentation layer. This headless approach to Optimizely CMS architecture is gaining traction, and developers often encounter new challenges when transitioning from the more common MVC approach.

    In this blog post, we will explore some common errors you’ll encounter and how to troubleshoot them effectively.

     

    Common Errors

    1. Schema Mismatches

    Description

    Some of the most frequent issues arise from mismatches between the GraphQL schema and the content models in Optimizely. This can occur from mistyping fields in your queries, not synchronizing content in the CMS, or using the wrong authentication key.

    Example Error

{
  "errors": [
    {
      "message": "Field \"Author\" is not defined by type \"BlogPage\".",
      "locations": [
        {
          "line": 2,
          "column": 28
        }
      ]
    }
  ]
}

    Solution

    • Double check your query for any mismatches between field and type names
      • Case-sensitivity is enforced on Types/Properties
    • Validate that the API Key in your GraphQL Query matches the API Key in the CMS environment you’ve updated
    • Ensure that your GraphQL schema is up-to-date with the latest data model changes in Optimizely.
      • If you are running the CMS with the same Graph API Keys, check the GraphQL Explorer tab and validate that your type shows in the listing
    • Run the ‘Optimizely Graph content synchronization job’ from the CMS Scheduled Jobs page.
      • After you see the Job Status change from ‘Starting execution of ContentTypeIndexingJob’ to ‘Starting execution of ContentIndexingJob’ you can stop the job and re-run your query.
    • Reset the Account
      • If all else fails you may want to try to reset your GraphQL account to clear the indices. (/EPiServer/ContentGraph/GraphQLAdmin)
      • If you are sharing the key with other developers the schema can become mismatched when making local changes and synchronizing your changes to the same index.
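Because field and type names are case-sensitive, it can help to assemble queries programmatically from the exact names defined in the CMS rather than typing them inline each time. A minimal sketch (the `BlogPage`, `Author`, and `Heading` names below are hypothetical):

```javascript
// Hypothetical sketch: assembling a Graph query from exact type/field names.
// GraphQL field names are case-sensitive, so "author" and "Author" differ.
function buildGraphQuery(typeName, fields) {
  return `query { ${typeName} { items { ${fields.join(" ")} } } }`;
}

const query = buildGraphQuery("BlogPage", ["Author", "Heading"]);
console.log(query);
// query { BlogPage { items { Author Heading } } }
```

Centralizing the names this way means a schema mismatch only has to be fixed in one place.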

    2. Maximum Depth

    Description

    When querying nested content, you may see an empty Content object in the response rather than your typed content.

    Example Error

    In this scenario, we are trying to query an accordion set block which has multiple levels of nested content areas.

    Query

query MyQuery {
  Accordion {
    items {
      PanelArea {
        ContentLink {
          Expanded {
            ... on AccordionPanel {
              PanelContent {
                ContentLink {
                  Expanded {
                    ... on CardGrid {
                      CardArea {
                        ContentLink {
                          Expanded {
                            ... on Card {
                              CtaArea {
                                ContentLink {
                                  Expanded {
                                    __typename
                                    ... on Button {
                                      __typename
                                    }
                                  }
                                }
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

    Response

    {
      "data": {
        "Accordion": {
          "items": [
            {
              "PanelArea": [
                {
                  "ContentLink": {
                    "Expanded": {
                      "PanelContent": [
                        {
                          "ContentLink": {
                            "Expanded": {
                              "CardGrid": [
                                {
                                  "ContentLink": {
                                    "Expanded": {
                                      "CtaArea": [
                                        {
                                          "ContentLink": {
                                            "Expanded": {
                                              "__typename": "Content"
                                            }
           ...
    }
    

    Solution

• Configure GraphQL to use a higher maximum depth
  • The default level of nested content expansion is 3, but that can be modified in Startup.cs:
    services.AddContentGraph(options => { options.ExpandLevel.Default = options.ExpandLevel.ContentArea = 5; });
  • Note that increasing this will increase the document size and make the synchronization job much slower, depending on the amount of content and level of nesting in your site.
    • Break-up requests into multiple queries.
      • Instead of expanding the inline fragment (… on Block) instead get the GuidValue of the ContentModelReference and use subsequent queries to get deeply nested content.
      • Consider making this extra request asynchronously on the client-side to minimize performance impact.
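The two-pass approach above can be sketched as follows. Here `runQuery` is a placeholder for whatever Graph client you use, and the type names, field names, and `where` filter shape are illustrative assumptions rather than a verified API:

```javascript
// Hedged sketch: fetch a shallow result first, then follow the GuidValue
// reference with a second query instead of expanding everything inline.
async function loadCardWithCtas(runQuery) {
  // Pass 1: a shallow query returns only the nested block's GuidValue.
  const card = await runQuery(
    `query { Card { items { CtaArea { ContentLink { GuidValue } } } } }`
  );
  const guid = card.Card.items[0].CtaArea[0].ContentLink.GuidValue;

  // Pass 2: fetch the deeply nested content directly by its GUID.
  const cta = await runQuery(
    `query { Button(where: { ContentLink: { GuidValue: { eq: "${guid}" } } }) { items { __typename } } }`
  );
  return cta;
}
```

Running the second query asynchronously on the client keeps the first render fast.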

    3. Authentication Errors

    Description

    There are a few different scenarios where you can get a 401 Authentication Error response on your GraphQL query.

    {
      "code": "AUTHENTICATION_ERROR",
      "status": 401,
      "details": {
        "correlationId": "1234657890"
      }
    }
    

    Solution

    • Check your authentication tokens and ensure they are valid.
    • If you are querying draft content you need to configure and enable preview tokens Documentation

    4. Unsynchronized Content

    Description

    When making updates to content in the CMS, you will occasionally run into issues where you don’t see the updated content on the page or in the graph response.

    Solution

    • Confirm that Content has been synchronized
      • In the CMS you can determine whether or not Content has been synchronized by the checkmark icon in the Publish Options ‘Synchronize with Optimizely Graph’ button
        Optimizely Graph Publish Options
  • If the ‘Synchronize with Optimizely Graph’ button is not triggering the content to be synced, check whether either of the Optimizely Graph synchronization jobs is in progress. While they are running, manually synced content is delayed until the job completes.
    • Validate that your CMS Graph API Key matches the API Key in your front-end/graph query
    Lit.js: Building Fast, Lightweight, and Scalable Web Components https://blogs.perficient.com/2025/05/05/lit-js-building-fast-lightweight-and-scalable-web-components/ https://blogs.perficient.com/2025/05/05/lit-js-building-fast-lightweight-and-scalable-web-components/#comments Mon, 05 May 2025 12:42:30 +0000 https://blogs.perficient.com/?p=380951

    Introduction

In today’s era of web development, creating reusable and efficient components is a must. Lit.js is a powerful library that simplifies building reusable, fast, and lightweight components on top of Web Component standards.

    What is Lit.js?

    Lit.js is a modern JavaScript library designed to create Web Components effortlessly. It is built on top of standard Web Components APIs, making it a lightweight yet powerful solution for component-based development.

    Benefits:

    • Lightweight – Small bundle size and fast execution.
    • Simple Syntax – Uses declarative templates with JavaScript/TypeScript.
    • Reactive Properties – Built-in reactivity for state management.
    • Scoped Styles – CSS encapsulation for components.
    • Interoperability – Works with any frontend framework or plain JavaScript.

    Why Choose Lit.js?

    With multiple frontend frameworks available, why should you choose Lit.js? Here are some compelling reasons:

    1. Minimal Learning Curve – If you know JavaScript and HTML, you can quickly get started with Lit.js.
    2. Performance-Optimized – Faster rendering due to a virtual DOM-free approach.
    3. Web Standards-Based – Future-proof and framework-agnostic.
    4. Scalability – Ideal for both small projects and large-scale applications.
    5. Framework Agnostic – Can be used with React, Vue, Angular, or standalone

    Key Features of Lit.js

    1. Declarative Rendering

UI components are defined in a concise and readable way with the help of template literals.

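As a rough plain-JavaScript illustration of the idea (this is not Lit’s real `html` tag, just the tagged-template mechanism it builds on):

```javascript
// Minimal tagged template literal, sketching how Lit's html`` tag
// interleaves static strings with dynamic values.
const html = (strings, ...values) =>
  strings.reduce((out, s, i) => out + s + (values[i] ?? ""), "");

const name = "Lit";
console.log(html`<h1>Hello, ${name}!</h1>`);
// <h1>Hello, Lit!</h1>
```

In real Lit, `html` returns a template object that is diffed efficiently against the DOM rather than a plain string.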

2. Reactive Properties

    Lit.js tracks property changes and automatically updates the DOM when the state changes.

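A property-setter sketch in plain JavaScript can illustrate the pattern (this is not Lit’s actual implementation; LitElement batches updates asynchronously):

```javascript
// Minimal sketch of reactive properties: setting the property
// triggers a re-render, as LitElement does for declared properties.
class TinyCounter {
  constructor() { this._count = 0; this.renderCount = 0; this.update(); }
  get count() { return this._count; }
  set count(value) { this._count = value; this.update(); }
  update() { this.renderCount++; this.output = `<p>Count: ${this.count}</p>`; }
}

const el = new TinyCounter();
el.count = 5;
console.log(el.output); // <p>Count: 5</p>
```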

3. Scoped CSS Styles

Lit.js ensures styles are encapsulated within the component, preventing global conflicts.

4. Lifecycle Methods

    Lit.js components have lifecycle methods similar to React, such as:

    • connectedCallback() – Runs when the component is added to the DOM.
    • disconnectedCallback() – Runs when the component is removed.
    • updated() – Runs when reactive properties change.

     

5. Event Handling

Lit.js makes handling events simple using declarative event bindings such as @click.

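Under the hood, a binding like @click simply attaches a listener to the rendered element. A plain-JavaScript sketch using the standard EventTarget API (available in modern Node and browsers):

```javascript
// Sketch of the listener pattern behind Lit's @click bindings.
const button = new EventTarget(); // stand-in for a rendered DOM element
let clicks = 0;
const handleClick = () => { clicks += 1; };

button.addEventListener("click", handleClick); // what @click=${...} wires up
button.dispatchEvent(new Event("click"));
console.log(clicks); // 1
```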

    How Lit.js Works

    Lit.js leverages shadow DOM, declarative templates, and reactive properties to create fast and efficient components.

    1. Define a Web Component using LitElement.
    2. Use properties to manage state and reactivity.
    3. Apply scoped styles for component isolation.
    4. Handle events and update the DOM efficiently.
    5. Reuse components across multiple projects.

    Best Practices for Using Lit.js

    1. Use Lightweight Components – Keep components small and focused.
    2. Avoid Global Styles – Use scoped styles to prevent conflicts.
    3. Optimize Rendering – Minimize unnecessary DOM updates.
    4. Leverage TypeScript – Use TypeScript for better maintainability.
    5. Follow Web Standards – Ensure compatibility with modern browsers.

     

    Performance Optimization Tips

    • Lazy Loading – Load components only when needed.
    • Efficient Event Handling – Avoid excessive re-renders.
    • Use Static Templates – Avoid regenerating templates on every render.
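The first tip, lazy loading, boils down to deferring expensive work until first use. A small lazy-initialization helper sketches the idea; in a real app, a dynamic import() of the component module would play the same role:

```javascript
// Lazy initialization: the factory runs only on first access.
function lazy(factory) {
  let cached, initialized = false;
  return () => {
    if (!initialized) { cached = factory(); initialized = true; }
    return cached;
  };
}

let loads = 0;
const getWidget = lazy(() => { loads += 1; return { name: "widget" }; });
getWidget();
getWidget();
console.log(loads); // 1 -- the factory ran once despite two accesses
```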

    When to Use Lit.js

    Lit.js is ideal for:

    • Building UI Libraries – Create reusable, standalone components.
    • Progressive Enhancement – Enhance existing websites without rewriting them.
    • Micro Frontends – Build independent frontend modules.
    • Interoperable Apps – Use with multiple frameworks.
    • Enterprise Applications – Scalable and lightweight UI components.

    Limitations of Lit.js

    • Smaller Community – Compared to React and Vue.
    • Learning Curve for Advanced Concepts – Like decorators and reactivity.
    • Limited Ecosystem – Fewer third-party libraries than React.

     

    Conclusion

    Lit.js gives a lightweight, fast and scalable way to create Web Components while following web standards. Give it a try and experience the future of Web Components!

    Ready to start with Lit.js? 🚀 Try building your first Web Component today!

Strapi: Unleash the Power to Build Modern, Highly Customizable Websites with the Ultimate Headless CMS https://blogs.perficient.com/2025/04/30/strapiunleash-the-power-to-build-modernhighly-customizable-websites-with-the-ultimate-headless-cms/ https://blogs.perficient.com/2025/04/30/strapiunleash-the-power-to-build-modernhighly-customizable-websites-with-the-ultimate-headless-cms/#respond Wed, 30 Apr 2025 06:43:10 +0000 https://blogs.perficient.com/?p=380491

    Strapi is the leading open-source headless CMS. It’s 100% JavaScript/TypeScript, fully customizable, and developer-first. Its flexibility and scalability make it an ideal choice for businesses and organizations seeking to create unique digital experiences.

• Self-hosted or Cloud: You can host and scale Strapi projects however you want. Save time by deploying to Strapi Cloud, or deploy to the hosting platform of your choice, such as AWS, Azure, Google Cloud, or DigitalOcean.
    • Modern Admin Panel: An elegant, completely customized, and fully expandable admin panel.
    • Multi-database support: We can choose the database: PostgreSQL, MySQL, MariaDB, and SQLite.
    • Customizable: We can easily construct logic by completely changing APIs, routes, or plugins to meet our exact requirements.
• Fast and Robust: Strapi’s Node.js and TypeScript-based architecture ensures fast and robust performance.
• Front-end Framework: Use any front-end framework (e.g., React, Next.js, Vue, Angular), mobile applications, or even IoT.
• Security: Default security features include reusable rules, CORS, CSP, P3P, Xframe, and XSS.
• Strapi CLI: Use the powerful CLI to scaffold applications and APIs on the fly.
    • Headless Architecture: Strapi’s headless architecture allows developers to build custom front-end applications using their preferred frameworks and libraries, while Strapi handles content management.
    • Customizable Content Models: Create custom content models to suit your specific needs, including text, images, videos, and more.
    • API-First Approach: Strapi’s API-first strategy enables seamless integration with various front-end frameworks and libraries.

    Strapi Headless CMS Installation

    Strapi projects can be installed either locally on a computer or on a remote server.

    Installation Using CLI

    Create a project on your local machine using the command-line interface (CLI). Before installing Strapi, the following requirements must be installed on your computer:

• Node.js: Only Active LTS or Maintenance LTS versions are supported (currently v18, v20, and v22). Odd-numbered releases (e.g., v19, v21), known as “current” versions of Node.js, are not supported.
    • Your preferred Node.js package manager:
    • Python (if using an SQLite database)
• A supported database is also required for any Strapi project:

  Database     Recommended   Minimum
  MySQL        8.0           8.0
  MariaDB      10.6          10.5
  PostgreSQL   14.0          12.0
  SQLite       3             3

    Installation Using Docker

    Create a custom Docker container from a local project.

    Getting Started with Strapi

    To start using Strapi, perform these steps:

    yarn create strapi # using yarn
    
    npx create-strapi@latest # using npx
    
    pnpm create strapi # using Pnpm (caution: Strapi Cloud does not support pnpm yet)

    Strapi Installation
The terminal will ask whether you want to log in or sign up to Strapi Cloud (and start your free 14-day trial) or skip this step. Use the arrow keys and press Enter to make your choice. If you skip this step, you will need to host the project yourself. The terminal will then prompt you to answer a few questions; for each one, pressing Enter instead of typing uses the default answer (Yes):

    Strapi Installation Options

    Running Strapi on the CMD / VSCode terminal

    npm run develop # using npm
    yarn develop # using Yarn

    Strapi Installation Welcome

    Content Modeling

    Strapi’s content modeling system allows you to create custom content types, including text, images, videos, and more. You can also create relationships between content types, enabling complex data structures.

    • Content Types: Create custom content types, including articles, products, and users.
    • Fields: Add fields to your content types, including text, number, date, and more.
    • Relationships: Establish relationships between content types to enable complex data structures.

    Creating Content Models

    1. Create a New Content Model: Create a new content model by clicking the “Create a new content model” button in the admin panel.
    2. Add Fields: Add fields to your content model, including text, numbers, dates, and more.
    3. Establish Relationships: Establish relationships between content models, enabling complex data structures.

    Strapi Content Type Administration Panel

    Strapi headless CMS – From the admin panel, you will be able to manage content types and write their actual content, but also manage users, both administrators and end users of your Strapi application.

     

    Strapi headless CMS - Content Type About

    Content-Type Builder – From the Content-Type Builder, accessible via the main navigation of the admin panel, users can create and edit their content types.

     

    Strapi headless CMS - Content Type Create Field About

    Configuring content-types fields – Content-types are composed of one or several fields. Each field is designed to contain a specific kind of data, filled in the Content Manager.

     

    Strapi headless CMS - Content Type Create A Single Type

    Creating content types: The Content Type Builder allows you to create new content types, including single and collection types, as well as components.

     

    Strapi headless CMS - Content Type About Blog

    Strapi headless CMS – Added Content on About Us that will reflect on API

    API and Integration

    Strapi’s API-first approach enables seamless integration with various front-end frameworks and libraries.

    • RESTful API: Strapi provides a RESTful API for interacting with your content.
    • GraphQL API: Strapi also supports GraphQL, enabling more flexible and efficient data querying.
    • Webhooks: Use webhooks to notify external services of changes to your content.

    API Endpoints

    1. Create API Endpoints: Add API endpoints to your content models to enable CRUD (Create, Read, Update, and Delete) activities.
    2. Use API Endpoints: Use the API endpoints to interact with your content, either through the Strapi API or through your front-end application.

    Authentication and Authorization

    Strapi provides built-in support for authentication and authorization, enabling you to control access to your content.

    • User Management: User management includes the creation and administration of users, roles, and permissions.
    • Authentication: Use JWT or session-based authentication to secure your API.
    • Authorization: Use roles and permissions to manage who can access your material.

    Benefits and Advantages of Strapi Headless CMS

    • Flexibility: Strapi offers scalability and flexibility because of its headless architecture and configurable content modeling.
    • Scalability: Since Strapi is built on Node.js, it can scale and perform efficiently.
    • Ease of Use: Strapi’s intuitive interface and API make it easy to use and integrate with your front-end application.

    Front-end Integration

    • Choose a Front-end Framework: To create your application, choose a front-end framework such as React, Angular, or Vue.js.

    Strapi Supports all Front-end Integration

    1. React CMS
    2. Next.js CMS
    3. Tanstack CMS
    4. Vue.js CMS
    5. Nuxt.js CMS
    6. Astro CMS
    7. Flutter CMS
    8. Svelte CMS
9. React Native CMS

• Utilize the Strapi API: Leverage the Strapi headless CMS API to interact with your content and deliver a seamless user experience.

    Example Use Cases

    • Headless Blog: Utilize Strapi as a headless CMS for your blog, enabling the creation of custom content models and API endpoints.
    • Ecommerce Platform: Use Strapi to manage product data, orders, and customers for your e-commerce platform.
    • Portfolio Website: Utilize Strapi to create a portfolio website that showcases your work and projects.

    If you need a comparison of different technologies with Strapi’s headless CMS, please visit this link: Headless CMS Comparison.

    Example Use Case: Building a Simple Ecommerce Platform on Strapi

    Let’s create a simple ecommerce platform using Strapi.

    Content Models

    • Product
    • Name (text)
    • Description (text)
    • Price (number)
    • Image (media)
    • Order
      • Customer Name (text)
      • Order Date (date)
      • Total (number)
      • Products (relation to Product)

    API Endpoints

    • GET /products: Retrieve a list of all products
    • GET /products/:id: Retrieve a single product by ID
    • POST /products: Create a new product
    • PUT /products/:id: Update an existing product
    • DELETE /products/:id: Delete a product
    • GET /orders: Retrieve a list of all orders
    • GET /orders/:id: Retrieve a single order by ID
    • POST /orders: Create a new order
    • PUT /orders/:id: Update an existing order
    • DELETE /orders/:id: Delete an order
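A thin client for the product endpoints above might look like this. The fetch function is injected so the sketch stays testable, and the base URL is the local default shown later; in a browser or recent Node, you would pass the global fetch:

```javascript
// Hypothetical client for the GET /products endpoint sketched above.
// Strapi's REST API wraps results in a { data } envelope.
async function getProducts(fetchFn, baseUrl = "http://localhost:1337") {
  const res = await fetchFn(`${baseUrl}/api/products`);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  const { data } = await res.json();
  return data;
}
```

Usage: `getProducts(fetch).then(products => console.log(products));`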

    Example Response

    JSON – for accessing the local environment URL:

    localhost:1337/api/products

    Code Examples

    Product JSON Response

    When you fetch a product from the API (GET /products/:id), you might get a response like this:

    {
      "data": {
        "id": 1,
        "attributes": {
          "name": "Wireless Headphones",
          "description": "High-quality wireless headphones with noise cancellation.",
          "price": 129.99,
          "image": {
            "url": "/uploads/headphones_image.jpg",
            "alternativeText": "Wireless Headphones"
          },
          "category": {
            "data": {
              "id": 2,
              "attributes": {
                "name": "Electronics",
                "description": "Electronic gadgets and accessories."
              }
            }
          }
        }
      }
    }
    

    Category JSON Response

    When you fetch a category (GET /categories/:id), the response might look like this:

    {
      "data": {
        "id": 2,
        "attributes": {
          "name": "Electronics",
          "description": "Electronic gadgets and accessories."
        }
      }
    }
    

    Order JSON Response (Example after user places an order)

    When a user places an order (POST /orders), your response could look like this:

    {
      "data": {
        "id": 1,
        "attributes": {
          "customer_name": "John Doe",
          "items": [
            {
              "id": 1,
              "quantity": 2,
              "product": {
                "data": {
                  "id": 1,
                  "attributes": {
                    "name": "Wireless Headphones",
                    "price": 129.99
                  }
                }
              }
            }
          ],
          "total_amount": 259.98,
          "status": "pending"
        }
      }
    }
    

    Additional Resources

    • Strapi Documentation: The official Strapi headless CMS documentation provides a comprehensive guide to getting started with Strapi.
    • Strapi Community: The Strapi community is an excellent resource for getting help and support with your Strapi project.
    • Strapi Tutorials: The Strapi tutorials provide step-by-step guides to building specific applications and integrations with Strapi.

     

    Conclusion

Strapi is a powerful and flexible headless CMS that enables developers to build modern, customizable websites and applications. Its API-first approach, custom content modeling, and authentication and authorization features make it an ideal choice for a wide range of use cases.

    Promises Made Simple: Understanding Async/Await in JavaScript https://blogs.perficient.com/2025/04/22/promises-made-simple-understanding-async-await-in-javascript/ https://blogs.perficient.com/2025/04/22/promises-made-simple-understanding-async-await-in-javascript/#respond Tue, 22 Apr 2025 09:42:05 +0000 https://blogs.perficient.com/?p=380376

    JavaScript is single-threaded. That means it runs one task at a time, on one core. But then how does it handle things like API calls, file reads, or user interactions without freezing up?

    That’s where Promises and async/await come into play. They help us handle asynchronous operations without blocking the main thread.

    Let’s break down these concepts in the simplest way possible so whether you’re a beginner or a seasoned dev, it just clicks.

    JavaScript has something called an event loop. It’s always running, checking if there’s work to do—like handling user clicks, network responses, or timers. In the browser, the browser runs it. In Node.js, Node takes care of it.

    When an async function runs and hits an await, it pauses that function. It doesn’t block everything—other code keeps running. When the awaited Promise settles, that async function picks up where it left off.

     

What is a Promise?

A Promise is an object representing the eventual completion (or failure) of an asynchronous operation. It can be in one of three states:

• ⏳ Pending – Still waiting for the result.
• ✅ Fulfilled – The operation completed successfully.
• ❌ Rejected – Something went wrong.

    Instead of using nested callbacks (aka “callback hell”), Promises allow cleaner, more manageable code using chaining.

     Example:

    fetchData()
      .then(data => process(data))
      .then(result => console.log(result))
      .catch(error => console.error(error));
    

     

    Common Promise Methods

    Let’s look at the essential Promise utility methods:

    1. Promise.all()

    Waits for all promises to resolve. If any promise fails, the whole thing fails.

    Promise.all([p1, p2, p3])
      .then(results => console.log(results))
      .catch(error => console.error(error));
    
    • ✅ Resolves when all succeed.
    • ❌ Rejects fast if any fail.
2. Promise.allSettled()

    Waits for all promises, regardless of success or failure.

    Promise.allSettled([p1, p2, p3])
      .then(results => console.log(results));
    
    • Each result shows { status: “fulfilled”, value } or { status: “rejected”, reason }.
    • Great when you want all results, even the failed ones.
3. Promise.race()

    Returns as soon as one promise settles (either resolves or rejects).

    Promise.race([p1, p2, p3])
      .then(result => console.log('Fastest:', result))
      .catch(error => console.error('First to fail:', error));
    
4. Promise.any()

    Returns the first fulfilled promise. Ignores rejections unless all fail.

    Promise.any([p1, p2, p3])
      .then(result => console.log('First success:', result))
      .catch(error => console.error('All failed:', error));
    

5. Promise.resolve() / Promise.reject()

• Promise.resolve(value) creates a resolved promise.
• Promise.reject(reason) creates a rejected promise.

    Used for quick returns or mocking async behavior.
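For example, a fake API for tests can return pre-settled promises (the method names here are made up for illustration):

```javascript
// Pre-settled promises make async behavior easy to mock.
const fakeApi = {
  getUser: () => Promise.resolve({ id: 1, name: "Ada" }),
  getAdmin: () => Promise.reject(new Error("forbidden")),
};

fakeApi.getUser().then(user => console.log(user.name));     // Ada
fakeApi.getAdmin().catch(err => console.log(err.message));  // forbidden
```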

     

    Why Not Just Use Callbacks?

    Before Promises, developers relied on callbacks:

    getData(function(response) {
      process(response, function(result) {
        finalize(result);
      });
    });
    

This worked, but nesting quickly became messy, a pattern known as “callback hell.”

     

     What is async/await Really Doing?

    Under the hood, async/await is just syntactic sugar over Promises. It makes asynchronous code look synchronous, improving readability and debuggability.

    How it works:

    • When you declare a function with async, it always returns a Promise.
    • When you use await inside an async function, the execution of that function pauses at that point.
    • It waits until the Promise is either resolved or rejected.
    • Once resolved, it returns the value.
    • If rejected, it throws the error, which you can catch using try…catch.
    async function greet() {
      return 'Hello';
    }
    greet().then(msg => console.log(msg)); // Hello
    

    Even though you didn’t explicitly return a Promise, greet() returns one.

     

    Execution Flow: Synchronous vs Async/Await

    Let’s understand how await interacts with the JavaScript event loop.

    console.log("1");
    
    setTimeout(() => console.log("2"), 0);
    
    (async function() {
      console.log("3");
      await Promise.resolve();
      console.log("4");
    })();
    
    console.log("5");
    

    Output:

    1
    3
    5
    4
    2
    

    Explanation:

    • The await doesn’t block the main thread.
    • It puts the rest of the async function in the microtask queue, which runs after the current stack and before setTimeout (macrotask).
    • That’s why “4” comes after “5”.

     

     Best Practices with async/await

    1. Use try/catch for Error Handling

    Avoid unhandled promise rejections by always wrapping await logic inside a try/catch.

    async function getUser() {
      try {
        const res = await fetch('/api/user');
        if (!res.ok) throw new Error('User not found');
        const data = await res.json();
        return data;
      } catch (error) {
        console.error('Error fetching user:', error.message);
        throw error; // rethrow if needed
      }
    }
    
    2. Run Parallel Requests with Promise.all

    Don’t await sequentially unless there’s a dependency between the calls.

    ❌ Bad:

    const user = await getUser();
    const posts = await getPosts(); // waits for user even if not needed
    

    ✅ Better:

    const [user, posts] = await Promise.all([getUser(), getPosts()]);
    3. Avoid await in Loops (when possible)

    ❌ Bad:

    //Each iteration waits for the previous one to complete
    for (let user of users) {
      await sendEmail(user);
    }
    

    ✅ Better:

    //Run in parallel
    await Promise.all(users.map(user => sendEmail(user)));
    

    Common Mistakes

    1. Using await outside an async function
    const data = await fetch(url); // ❌ SyntaxError (unless at the top level of an ES module)
    2. Forgetting to handle rejections
      If your async function throws and you don’t .catch() it (or use try/catch), your app may crash in Node or log warnings in the browser.
    3. Blocking unnecessary operations: Don’t await things that don’t need to be awaited. Only await when the next step depends on the result.
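The first mistake is commonly fixed by moving the await into an async function (or an async IIFE). A minimal sketch, with Promise.resolve('payload') standing in for a real fetch(url) call:

```javascript
// Wrapping top-level await logic in an async function fixes the
// SyntaxError; the try/catch also covers mistake #2 (unhandled rejections).
const main = async () => {
  try {
    const data = await Promise.resolve('payload'); // stand-in for fetch(url)
    console.log(data); // payload
    return data;
  } catch (err) {
    console.error('Request failed:', err);
  }
};

main();
```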

     

    Real-World Example: Chained Async Workflow

    Imagine a system where:

    • You authenticate a user,
    • Then fetch their profile,
    • Then load related dashboard data.

    Using async/await:

    async function initDashboard() {
      try {
        const token = await login(username, password);
        const profile = await fetchProfile(token);
        const dashboard = await fetchDashboard(profile.id);
        renderDashboard(dashboard);
      } catch (err) {
        console.error('Error loading dashboard:', err);
        showErrorScreen();
      }
    }
    

    Much easier to follow than chained .then() calls, right?

     

    Converting Promise Chains to Async/Await

    Old way:

    login()
      .then(token => fetchUser(token))
      .then(user => showProfile(user))
      .catch(error => showError(error));
    

    With async/await:

    async function start() {
      try {
        const token = await login();
        const user = await fetchUser(token);
        showProfile(user);
      } catch (error) {
        showError(error);
      }
    }
    

    Cleaner. Clearer. Less nested. Easier to debug.

     

    Bonus utility wrapper for Error Handling

    If you hate repeating try/catch, use a helper:

    const to = promise => promise.then(res => [null, res]).catch(err => [err]);
    
    async function loadData() {
      const [err, data] = await to(fetchData());
      if (err) return console.error(err);
      console.log(data);
    }
    

     

    Final Thoughts

    Both Promises and async/await are powerful tools for handling asynchronous code. Promises came first and are still widely used, especially in libraries. async/await is now the preferred style in most modern JavaScript apps because it makes the code cleaner and easier to understand.

     

    Tip: You don’t have to choose one forever — they work together! In fact, async/await is built on top of Promises.
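For example (names illustrative), an async function always returns a Promise, so code written with await can still be consumed promise-style with .then():

```javascript
// async/await and .then() interoperate freely, because an async
// function is just a function that returns a Promise.
async function fetchGreeting() {
  const name = await Promise.resolve('world'); // stand-in for a real async call
  return `Hello, ${name}`;
}

fetchGreeting().then(msg => console.log(msg)); // Hello, world
```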

     

    ]]>
    https://blogs.perficient.com/2025/04/22/promises-made-simple-understanding-async-await-in-javascript/feed/ 0 380376
    Exact Match Search with Sitecore Search https://blogs.perficient.com/2025/04/17/exact-match-search-with-sitecore-search/ https://blogs.perficient.com/2025/04/17/exact-match-search-with-sitecore-search/#respond Thu, 17 Apr 2025 12:10:46 +0000 https://blogs.perficient.com/?p=380205

    Searching for content on the web has evolved from basic string-based matching to a sophisticated array of approaches including keywords, stemming, synonyms, word order, regular expressions, weights and relevance.  Users expect the highest-ranking results to be the most relevant, and 75% of users don’t go past the first page of results.  All of these advanced techniques are great for finding relevant content.  But sometimes you need to find an exact phrase with specific words in a specific order.  Many search engines support this by wrapping quote marks around the “search term” to indicate an exact match search. Sitecore Search defaults to relevance-based searches, but you can achieve exact match search with some configuration.

    Understanding Sitecore Search

    Let’s take a moment to remember a few concepts in Sitecore Search to understand the configuration better.

    • Index Document – A single piece of indexed content such as a webpage, a word document, a pdf, etc.
    • Attributes – The fields of an indexed document such as title, subtitle, url, content type, etc.
    • Textual Relevance – Defines the attributes used to locate potential results.
    • Weight – Defines a relative value for how important an attribute is within the textual relevance.
    • Analyzers – Convert the original search query into a format that is optimized for search.
    • Token – A chunk of the original search query, usually a single word or phrase that is often modified by the analyzer to include synonyms, remove stop words and reformat to the root word.

    Sitecore Search has a number of predefined analyzers built in.  Each analyzer processes the search query in different ways.

    The default analyzer is the multi local standard analyzer.  This analyzer modifies the search query by making it lower case, splitting the search query into single word tokens, finding the root of each word, applying synonyms, and removing punctuation.  For this reason, it will not find an exact match.  For that we need the keyword analyzer which leaves the search query in a single token without applying any modifications.
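As a rough illustration only (this is not Sitecore's implementation, and it ignores stemming and synonyms), the difference between the two analyzers can be sketched in a few lines:

```javascript
// Sketch: the standard analyzer lowercases, strips punctuation and splits
// into word tokens; the keyword analyzer keeps one unmodified token.
function standardAnalyzer(query) {
  return query
    .toLowerCase()
    .replace(/[^\w\s]/g, '') // drop punctuation, including quote marks
    .split(/\s+/)
    .filter(Boolean);
}

function keywordAnalyzer(query) {
  return [query]; // a single token, left exactly as typed
}

console.log(standardAnalyzer('"Proxy Statements"')); // [ 'proxy', 'statements' ]
console.log(keywordAnalyzer('"Proxy Statements"'));  // [ '"Proxy Statements"' ]
```

Because the keyword analyzer preserves the query as one token, only documents containing that exact phrase can match it.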

    Configure Exact Match Search – Textual Relevance

    In order to configure exact match search, we need to add the keyword analyzer to the textual relevance settings for the desired attribute, in this case the description.

    Navigate to Admin/Domain Settings then click the feature configuration tab.

    Sc Textual Reference A

    Domain Settings

    Edit the Textual Relevance section.

    Sc Textual Referenceb

    Textual Relevance Settings

    Add the keyword analyzer to the description attribute.

    Sc Textual Reference C

    Add Analyzer

    Sc Textual Reference D

    Select the keyword analyzer

    Make sure to save your changes then publish your domain settings for your changes to take effect.

    Configure Exact Match Search – Widget Settings

    Next we need to configure our search widget to use our textual relevance settings.

    Navigate to a widget variation and click add rule.

    Sc Textual Reference 1

    Add rule to a widget Variation

     

    Click the top icon on the left to set the site context.  Add a context rule for Keyword and select the contains option.  In the input box, type a single quote mark.

    Sc Textual Reference 2

    Add keyword rule to the site context

    Click the bottom icon on the left to configure the settings.  Click the tab for Textual Relevance and click the toggle to enable the configuration.  Notice that the description field is listed twice, once for each analyzer. From here you can enable/disable each attribute/analyzer and set its relative weight.  In this example, I’ve set the description-keyword to 3 and the name-multilocal to 1.  This will do the exact match search only on the description attribute.  You could include name-keyword analyzer to do an exact match on the name as well if that is desired.

    Sc Textual Reference 3

    Description keyword rule

    Repeat the process to add or modify a second rule that uses the description-multilocal analyzer.

    Sc Textual Reference 4 Rule2

    Description multilocal rule

    This rule will be the fallback if the search term does not include a quote.
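Conceptually, the two rules together behave like the sketch below. Only the description-keyword weight of 3 and the name-multilocal weight of 1 come from this example; the fallback weights are assumptions for illustration:

```javascript
// Sketch of the widget rule pair: a quote mark in the keyword triggers
// the exact-match weighting, otherwise the relevance fallback applies.
// The fallback weights here are assumed, not taken from the article.
function pickTextualRelevance(searchTerm) {
  if (searchTerm.includes('"')) {
    return { 'description-keyword': 3, 'name-multilocal': 1 }; // exact match rule
  }
  return { 'description-multilocal': 1, 'name-multilocal': 1 }; // fallback rule
}

console.log(pickTextualRelevance('"proxy statements"')); // exact-match weights
console.log(pickTextualRelevance('proxy statements'));   // fallback weights
```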

    Sc Textual Reference 5

    Rule order and fallback

    Exact Match Search in Action

    With this configuration in place, you can see the difference in the search results.  In this example, I’ve searched for “proxy statements”.

    When you include a quote mark in the search term, you only get results that have the exact phrase “proxy statements”.  This search returns 12 results.

    Sc Textual Reference B1

    Exact match search with 12 results

    When you do not include the quote mark in the search term, you get results that include proxy, statements and statement.  This search returns 68 results.

    Sc Textual Reference A0

    Relevance search with 68 results

    ]]>
    https://blogs.perficient.com/2025/04/17/exact-match-search-with-sitecore-search/feed/ 0 380205
    Scoping, Hoisting and Temporal Dead Zone in JavaScript https://blogs.perficient.com/2025/04/17/scoping-hoisting-and-temporal-dead-zone-in-javascript/ https://blogs.perficient.com/2025/04/17/scoping-hoisting-and-temporal-dead-zone-in-javascript/#respond Thu, 17 Apr 2025 11:44:38 +0000 https://blogs.perficient.com/?p=380251

    Before mastering JavaScript, it’s crucial to understand how it thinks behind the scenes. Concepts like scope, hoisting, and the temporal dead zone might sound tricky at first, but they form the backbone of how your code behaves.
    In this blog, we’ll break down these key ideas in the simplest way possible—so you can write cleaner code, avoid unexpected bugs, and truly understand what’s happening when your script runs.

    What is Scope in JavaScript?

    Think of scope like a boundary or container that controls where you can use a variable in your code.

    In JavaScript, a variable is only available in the part of the code where it was created. If you try to use it outside that area, it won’t work—that’s because of scope.

    This helps in two big ways:

    • Keeps your code safe – Only the right parts of the code can access the variable.
    • Avoids name clashes – You can use the same variable name in different places without them interfering with each other.

    JavaScript mainly uses two types of scope:

    1. Global Scope – Available everywhere in your code.

    2. Local Scope – Available only inside a specific function or block.

     

    Global Scope

    When you start writing JavaScript code, you’re already in the global scope—this is like the outermost area of your code where variables can live.

    If you create a variable outside of any function or block, it’s considered global, which means it can be used anywhere in your code.

    var a = 5; // Global variable
    function add() {
      return a + 10; // Using the global variable inside a function
    }
    console.log(window.a); // 5
    

    In this example, a is declared outside of any function, so it’s globally available—even inside add().

    A quick note:

    • If you declare a variable with var, it becomes a property of the window object in browsers.
    • But if you use let or const, the variable is still global, but not attached to window.
    let name = "xyz";
    function changeName() {
      name = "abc";  // Changing the value of the global variable
    }
    changeName();
    console.log(name); // abc
    

    In this example, we didn’t create a new variable—we just changed the value of the existing one.

    👉 Important:
    If you redeclare a global variable inside a function (using let, const, or var again), JavaScript treats it as a new variable in a new scope—not the same one. We’ll cover that in more detail later.

     

     Local Scope

    In JavaScript, local scope means a variable is only accessible in a certain part of the code—usually inside a function or a block.

    There are two types of local scope:

    1. Functional Scope

    Whenever you create a function, it creates its own private area for variables. If you declare a variable inside a function, it only exists inside that function.

    let firstName = "Shilpa"; // Global
    function changeName() {
      let lastName = "Syal"; // Local to this function
      console.log(`${firstName} ${lastName}`);
    }
    changeName();
    console.log(lastName); // ❌ Error! Not available outside the function
    

    You can even use the same variable name in different functions without any issue:

    function mathMarks() {
      let marks = 80;
      console.log (marks);
    }
    function englishMarks() {
      let marks = 85;
      console.log (marks);
    }
    

    Here, both marks variables are separate because they live in different function scopes.

     

    2. Block Scope

    Thanks to let and const, you can now create variables that only exist inside a block (like an if, for, or {}).

     

    function getMarks() {
      let marks = 60;
      if (marks > 50) {
        const points = 10;
        console.log (marks + points); // ✅ Works here
      }
      console.log (points); // ❌ Uncaught Reference Error: points is not defined
    }
    

     Because the points variable is declared inside the if block with the let keyword, it is not accessible outside that block, as shown above. Now try the example again using the var keyword, i.e. declare points with var, and spot the difference.
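Here is that variation sketched out: because var is function-scoped rather than block-scoped, points escapes the if block.

```javascript
function getMarks() {
  let marks = 60;
  if (marks > 50) {
    var points = 10; // var ignores block boundaries
    console.log(marks + points); // 70
  }
  console.log(points); // 10 -- still accessible, since var is function-scoped
}
getMarks();
```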

    LEXICAL SCOPING & NESTED SCOPE:

    When you create a function (outer function) that contains another function (inner function), then the inner function has access to the outer function’s variables and methods. This is known as Lexical Scoping.

    function outerFunction() {
      let outerVar = "I’m outside";
      function innerFunction() {
          console.log (outerVar); // ✅ Can access outerVar
      }
      innerFunction();
    }
    

    In other terms, variables & methods defined in parent function are automatically available to its child functions. But it doesn’t work the other way around—the outer function can’t access the inner function’s variables.
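A quick sketch of the reverse direction, wrapped in try/catch so it runs to completion:

```javascript
function outerFunction() {
  function innerFunction() {
    let innerVar = "I'm inside"; // local to innerFunction only
  }
  innerFunction();
  try {
    console.log(innerVar); // not visible here
  } catch (e) {
    console.log(e.name); // ReferenceError
  }
}
outerFunction();
```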

     

    VARIABLE SCOPE OR VARIABLE SHADOWING:

    You can declare variables with the same name in different scopes. If there’s a variable in the global scope and you create a variable with the same name inside a function, you will not get an error. In this case, the local variable takes priority over the global one. This is known as variable shadowing, because the inner-scope variable temporarily shadows the outer-scope variable with the same name.

    If the local variable and global variable have the same name then changing the value of one variable does not affect the value of another variable.

    let name = "xyz"
    function getName() {
      let name = "abc"            // Redeclaring the name variable
          console.log (name)  ;        //abc
    }
    getName();
    console.log (name) ;          //xyz
    

    To access a variable, the JS engine first looks in the scope that is currently executing. If it doesn’t find the variable there, it looks in the closest parent scope, and that lookup continues up the chain until the engine reaches the global scope. If the global scope doesn’t have the variable either, a reference error is thrown, because the variable doesn’t exist anywhere in the scope chain.

    let bonus = 500;
    function getSalary() {
     if(true) {
         return 10000 + bonus;  // Looks up and finds bonus in the outer scope
      }
    }
       console.log (getSalary()); // 10500
    

     

    Key Takeaways: Scoping Made Simple

    Global Scope: Variables declared outside any function are global and can be used anywhere in your code.

    Local Scope: Variables declared inside a function exist only inside that function and disappear once the function finishes.

    Global Variables Last Longer: They stay alive as long as your program is running.

    Local Variables Are Temporary: They’re created when the function runs and removed once it ends.

    Lexical Scope: Inner functions can access variables from outer functions, but not the other way around.

    Block Scope with let and const: You can create variables that exist only inside {} blocks like if, for, etc.

    Same Name, No Clash: Variables with the same name in different scopes won’t affect each other—they live in separate “worlds.” 

    Hoisting

    To understand Hoisting in JS, it’s essential to know how execution context works. Execution context is an environment where JavaScript code is executed.

    It has two main phases:

    1. Creation Phase: During this phase, the JS engine allocates memory for variables, functions, and objects. This is where hoisting happens.

    2. Execution Phase: During this phase, the code is executed line by line.

    When JS code runs, JavaScript hoists all the variables and functions, i.e. it reserves memory for them before execution; variables declared with var are initialized with the special value undefined.

     

    Key Takeaways from Hoisting and let’s explore some examples to illustrate how hoisting works in different scenarios:

    1. Functions – Function declarations are fully hoisted, so they can be invoked before their declaration in code.
    foo(); // Output: "Hello, world!"
    function foo() {
      console.log("Hello, world!");
    }
    
    2. var – Variables declared with var are hoisted to the top of their scope but initialized with undefined, so they are accessible before the declaration (with the value undefined).
    console.log (x); // Output: undefined
     var x = 5;
    

    This code seems straightforward, but it’s interpreted as:

    var x;
    console.log (x); // Output: undefined
     x = 5;
    

    3. let, const – Variables declared with let and const are also hoisted (into their block or script scope) but are left uninitialized. They remain in the Temporal Dead Zone (TDZ) until their declaration is encountered; accessing them inside the TDZ results in a ReferenceError.

    console.log(x); // Throws ReferenceError: Cannot access 'x' before initialization
    let x = 5;
    

    What is Temporal Dead Zone (TDZ)?

    In JavaScript, all variable declarations—whether made using var, let, or const—are hoisted, meaning the memory for them is set aside during the compilation phase, before the code actually runs. However, the behaviour of hoisting differs based on how the variable is declared.

    For variables declared with let and const, although they are hoisted, they are not initialized immediately like var variables. Instead, they remain in an uninitialized state and are placed in a separate memory space. During this phase, any attempt to access them will result in a Reference Error.

    This period—from the start of the block until the variable is initialized—is known as the Temporal Dead Zone (TDZ). It’s called a “dead zone” because the variable exists in memory but cannot be accessed until it has been explicitly declared and assigned a value in the code.

    console.log(b); // undefined -- b is hoisted and initialized with undefined
    console.log(a); // ReferenceError: Cannot access 'a' before initialization (TDZ)
    let a = 10;
    var b = 100;
    

    👉 Important: The Temporal Dead Zone helps prevent the use of variables before they are properly declared and initialized, making code more predictable and reducing bugs.
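A small runnable sketch that observes the TDZ with try/catch:

```javascript
// Inside the function, `inner` is hoisted but uninitialized until its
// declaration line, so touching it earlier throws a ReferenceError.
function demoTDZ() {
  const observed = [];
  try {
    console.log(inner); // in the TDZ: throws before anything is logged
  } catch (e) {
    observed.push(e instanceof ReferenceError); // true
  }
  let inner = 'now initialized'; // the TDZ ends at this line
  observed.push(inner);
  return observed;
}

console.log(demoTDZ()); // [ true, 'now initialized' ]
```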

     

    🧾 Conclusion

    JavaScript hoisting and scoping are foundational concepts that can feel tricky at first, but once you understand them, they make your code more structured and predictable. Hoisting helps explain why some variables and functions work even before they’re declared, while scoping defines where your variables live and how accessible they are. By keeping these concepts in mind and practicing regularly, you’ll be able to write cleaner, more reliable JavaScript. The more you experiment with them, the more confident you’ll become as a developer. Keep learning, keep building, and everything will start to click. Happy coding! 🙌

     

     

    ]]>
    https://blogs.perficient.com/2025/04/17/scoping-hoisting-and-temporal-dead-zone-in-javascript/feed/ 0 380251
    ⚡ PERFATHON 2025 – Hackathon at Perficient 👩‍💻 https://blogs.perficient.com/2025/04/15/perfathon-2025-the-hackathon-at-perficient/ https://blogs.perficient.com/2025/04/15/perfathon-2025-the-hackathon-at-perficient/#respond Tue, 15 Apr 2025 20:30:48 +0000 https://blogs.perficient.com/?p=380047

    April 10–11, 2025, marked an exciting milestone for Perficient India as we hosted our hackathon – Perfathon 2025. Held at our Bangalore office, this thrilling, high-energy event ran non-stop from 12 PM on April 10 to 4 PM on April 11, bringing together 6 enthusiastic teams, creative minds, and some truly impactful ideas.

    Perf7 Perf8

    Setting the Stage

    The excitement wasn’t limited to the two days; the buzz began a week in advance, with teasers and prep that got everyone curious and pumped. The organizing team went all out to set the vibe right from the moment we stepped in, from vibrant decorations and music to cool Perfathon hoodies and high spirits all around.

    Perf5 Perf6 Perf11 Perf25

    Our General Manager, Sumantra Nandi, kicked off the event with inspiring words and warm introductions to the teams, setting the tone for what would be a fierce, friendly, and collaborative code fest.

    Meet the Gladiators

    Six teams, each with 3–5 members, jumped into this coding battleground:

    • Bro Code

    • Code Red

    • Ctrl Alt Defeat

    • Code Wizards

    • The Tech Titans

    • Black Pearl

    Each team was given the freedom to either pick from a curated list of internal problem statements or come up with their own. Some of the challenge themes included Internal Idea & Innovation Hub, Skills & Project Matchmaker, and Ready-to-Integrate AI Package. The open-ended format allowed teams to think outside the box, pick what resonated with them, and own the solution-building process.

    Perf12 Perf13  Perf16 Perf21Perf14 Perf15

     Let the Hacking Begin!

    Using a chit system, teams were randomly assigned dedicated spaces to work from, and the presentation order was decided — adding an element of surprise and fun!

    Day 1 saw intense brainstorming, constant collaboration, design sprints, and non-stop coding. Teams powered through challenges, pivoted when needed, and showcased problem-solving spirit.

    Evaluation with Impact

    Everyone presented their solutions to our esteemed judges, who evaluated them across several crucial dimensions: tech stack used, task distribution among team members, solution complexity, optimization and relevance, future scope and real-world impact, scalability and deployment plans, UI designs, AI component etc.

    The judging wasn’t just about scoring — it was about constructive insights. Judges offered thought-provoking feedback and suggestions, pushing teams to reflect more deeply on their solutions and discover new layers of improvement. A heartfelt thank you to each judge for their valuable time and perspectives.

    This marked the official beginning of the code battle — from here on, it was about execution, collaboration, and pushing through to build something meaningful.

    Perf1 Perf2 Perf3 Perf24 Perf27 Perf28 Perf29

    Time to Shine (Day 2)

    As Day 2 commenced, the teams picked up right where they left off — crushing it with creativity and clean code. The GitHub repository was set up by the organizing team, allowing all code commits and pushes to be tracked live right from the start of the event. The Final Showdown kicked off around 4 PM on April 11, with the spotlight on each team to demo their working prototypes.

    A team representative collected chits to decide the final presentation order. In the audience this time were not just internal leaders, but also a special client guest, Sravan Vashista (IT CX Director and IT Country GM, Keysight Technologies), and our GM Sumantra Nandi, adding more weight to the final judgment.

    Each team presented with full energy, integrated judge and audience feedback, and answered queries with clarity and confidence. The tension was real, and the performances were exceptional.

     And the Winners Are…

    Before the grand prize distribution, our guest speaker, Sravan Vashista, delivered an insightful and encouraging address. He applauded the energy in the room, appreciated the quality of the solutions, and emphasized the importance of owning challenges and solving from within. The prize distribution was a celebration in itself: beaming faces, loud cheers, proud smiles, and a sense of fulfillment that only comes from doing something truly impactful.

    After two action-packed days of code, creativity, and collaboration, it was finally time to crown our champions.

    🥇 Code Red emerged victorious as the Perfathon 2025 Champions, thanks to their standout performance, technical depth, clear problem-solving approach, and powerful teamwork.

    🥈 Code Wizards claimed the First Runners-Up spot with their solution and thoughtful execution.

    🥉 Black Pearl took home the Second Runners-Up title, impressing everyone with their strong team synergy.

    Each team received trophies and appreciation, but more importantly, they took home the experience of being real solution creators.

    Perf10  Perf19 Perf23 Perf18 Perf30

    🙌 Thank You, Team Perfathon!

    A massive shoutout to our organizers, volunteers, and judges who made Perfathon a reality. Huge thanks to our leadership and HR team for their continuous support and encouragement, and to every participant who made the event what it was — memorable, meaningful, and magical.

    Perf17 Perf33

    Perf32 Perf9  Perf31

    We’re already looking forward to Perfathon 2026. Until then, let’s keep the hacker spirit alive and continue being the solution-makers our organization needs.

    ]]>
    https://blogs.perficient.com/2025/04/15/perfathon-2025-the-hackathon-at-perficient/feed/ 0 380047
    Convert a Text File from UTF-8 Encoding to ANSI using Python in AWS Glue https://blogs.perficient.com/2025/04/14/convert-a-text-file-from-utf-8-encoding-to-ansi-using-python-in-aws-glue/ https://blogs.perficient.com/2025/04/14/convert-a-text-file-from-utf-8-encoding-to-ansi-using-python-in-aws-glue/#respond Mon, 14 Apr 2025 19:35:22 +0000 https://blogs.perficient.com/?p=379867

    To convert a text file from UTF-8 encoded data to ANSI using AWS Glue, you will typically work with Python or PySpark. However, it’s important to understand that ANSI is not a specific encoding but often refers to Windows-1252 (or similar 8-bit encodings) in a Windows context.

    AWS Glue, running on Apache Spark, uses UTF-8 as the default encoding. Converting to ANSI requires handling the character encoding during the writing phase, because Spark itself doesn’t support writing files in encodings other than UTF-8 natively. But there are a few workarounds.

    Here’s a step-by-step guide to converting a text file from UTF-8 to ANSI using Python in AWS Glue. Assume you’re working with a plain text file and want to output a similarly formatted file in ANSI encoding.

    General Process Flow

    Technical Approach Step-By-Step Guide

    Step 1: Add the import statements to the code

    import boto3
    import codecs
    

    Step 2: Specify the source/target file paths & S3 bucket details

    # Initialize S3 client
    s3_client = boto3.client('s3')
    s3_key_utf8 = 'utf8_file_path/filename.txt'
    s3_key_ansi = 'ansi_file_path/filename.txt'
    
    # Specify S3 bucket and file paths
    bucket_name = 'your-s3-bucket-name'  # S3 bucket containing both files
    input_key = s3_key_utf8   # S3 path/name of the input UTF-8 encoded file in S3
    output_key = s3_key_ansi  # S3 path/name to save the ANSI encoded file
    

    Step 3: Write a function to convert the text file from UTF-8 to ANSI, based on the parameters supplied (S3 bucket name, source-file, target-file)

    # Function to convert UTF-8 file to ANSI (Windows-1252) and upload back to S3
    def convert_utf8_to_ansi(bucket_name, input_key, output_key):
        # Download the UTF-8 encoded file from S3
        response = s3_client.get_object(Bucket=bucket_name, Key=input_key)
        # Read the file content from the response body (UTF-8 encoded)
        utf8_content = response['Body'].read().decode('utf-8')
        # Convert the content to ANSI encoding (Windows-1252)
        ansi_content = utf8_content.encode('windows-1252', 'ignore')  # 'ignore' to handle invalid characters
        # Upload the converted file to S3 (in ANSI encoding)
        s3_client.put_object(Bucket=bucket_name, Key=output_key, Body=ansi_content) 
    

    Step 4: Call the function that converts the text file from UTF-8 to ANSI

    # Call the function to convert the file 
    convert_utf8_to_ansi(bucket_name, input_key, output_key) 
    

     

    ]]>
    https://blogs.perficient.com/2025/04/14/convert-a-text-file-from-utf-8-encoding-to-ansi-using-python-in-aws-glue/feed/ 0 379867