Back-End Development Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/innovation-product-development/development/back-end-development/

YAML files in DBT
https://blogs.perficient.com/2025/06/12/yaml-files-in-dbt/
Thu, 12 Jun 2025 05:18:13 +0000

In any programming language, streamlined project development and maintenance depend on metadata, configuration, and documentation. Project configuration is handled through configuration files, which are easy to use and make the project friendlier for developers to work with. In DBT, one such type of configuration file is the YAML file.
In this blog, we will go through the YAML files required in DBT.
First, let's understand what DBT and YAML are.

DBT (Data Build Tool) :
Data transformation is an important process in modern analytics. DBT is a tool for transforming, cleaning, and aggregating data within a data warehouse. Much of DBT's power lies in its use of YAML files for both configuration and transformation.
Note:
Please refer to the official DBT documentation for more background on DBT.
What is a YAML file?
YAML originally stood for "Yet Another Markup Language" and is now a recursive acronym for "YAML Ain't Markup Language." It is easy to read and understand, and it is a superset of JSON.
Common uses of YAML files:
– Configuration Management:
Used to define configuration such as roles and environments.
– CI/CD Pipelines:
CI/CD tools depend on YAML files to describe their pipelines.
– Data Serialization:
YAML can represent complex data structures such as lists, mappings, and nested combinations of the two.
– APIs:
YAML can be used to define API contracts and specifications.

Sample Example of YAML file:
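The original post shows the sample as a screenshot; a minimal illustrative YAML document (all names hypothetical) looks like this:

```yaml
# A small YAML document: a scalar, a list, and a nested mapping
project: sales_analytics
environments:
  - dev
  - prod
owner:
  team: data_engineering
  email: data@example.com
```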
YAML files are the core of defining configuration and transformation in DBT. YAML files have “.yml” extension.

The most important YAML file is
profiles.yml:
This file lives locally, outside the project repository. It contains sensitive credentials used to connect to the target data warehouse.
Purpose:
It holds the main connection details used to connect to the target data warehouse (Snowflake, Postgres, etc.).
A profile configuration looks like this:
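The screenshot is not reproduced here; an illustrative profiles.yml for a Snowflake target (all names and credentials are placeholders) might look like this:

```yaml
dbt_demo:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: your_account_id
      user: your_username
      password: your_password   # keep secrets out of version control
      role: transformer
      database: analytics
      warehouse: compute_wh
      schema: dbt_dev
      threads: 4
```

The top-level key (dbt_demo here) must match the profile name referenced in dbt_project.yml.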
Note:
We should not share the profiles.yml file with anyone because it contains target data warehouse credentials. This file is used by DBT Core, not by DBT Cloud.
YAML file classification according to DBT component:
Let us go through different components of DBT with corresponding YAML files:

1. dbt_project.yml:
This is the most important configuration file in DBT. It tells DBT which configurations to use for the project. By default, DBT looks for dbt_project.yml in the current working directory.

For Example:

name: string

config-version: 2
version: version

profile: profilename

model-paths: [directorypath]
seed-paths: [directorypath]
test-paths: [directorypath]
analysis-paths: [directorypath]
macro-paths: [directorypath]
snapshot-paths: [directorypath]
docs-paths: [directorypath]
asset-paths: [directorypath]

packages-install-path: directorypath

clean-targets: [directorypath]

query-comment: string

require-dbt-version: version-range | [version-range]

flags:
  <global-configs>

dbt-cloud:
  project-id: project_id # Required
  defer-env-id: environment # Optional

exposures:
  +enabled: true | false

quoting:
  database: true | false
  schema: true | false
  identifier: true | false

metrics:
  <metric-configs>

models:
  <model-configs>

seeds:
  <seed-configs>

semantic-models:
  <semantic-model-configs>

saved-queries:
  <saved-queries-configs>

snapshots:
  <snapshot-configs>

sources:
  <source-configs>
  
tests:
  <test-configs>

vars:
  <variables>

on-run-start: sql-statement | [sql-statement]
on-run-end: sql-statement | [sql-statement]

dispatch:
  - macro_namespace: packagename
    search_order: [packagename]

restrict-access: true | false

 

Model:
Models are SQL files that define how your data is transformed. In a model configuration file, you define the sources, the target tables, and their properties. It lives under the models directory of the DBT project, and we can name it as we like.
Below is the example:
This is the YAML file in the models directory, here given the name "schema.yml".
Purpose of the model YML file:
It configures model-level metadata, such as tags, materialization, names, and columns, which is used when transforming the data.
It looks like this:

version: 2

models:
  - name: my_first_dbt_model
    description: "A starter dbt model"
    columns:
      - name: id
        description: "The primary key for this table"
        data_tests:
          - unique
          - not_null

  - name: my_second_dbt_model
    description: "A starter dbt model"
    columns:
      - name: id
        description: "The primary key for this table"
        data_tests:
          - unique
          - not_null
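With the models and their schema.yml in place, the typical workflow is to build the models and then run the data tests declared in the YAML:

```shell
dbt run     # build the models
dbt test    # run the data tests (unique, not_null) declared above
```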


2. Seed:
Seeds are used to load CSV files into the data warehouse. This is useful for staging reference data before applying any transformations.
Below is the example:

Purpose of the seeds YAML file:
To define properties for the CSV files under the seed directory, such as which columns to type-cast when loading them into data warehouse tables.

The configuration file looks like this (note that seeds are plain CSV loads; transformations happen later in models):

version: 2
seeds:
  - name: <name>
    description: Raw data from a source
    config:
      database: <database name>
      schema: <database schema>
      column_types:
        id: integer
        name: varchar(100)
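Once the CSV file and its properties are in place, loading it into the warehouse is a single command (assuming profiles.yml is configured):

```shell
dbt seed
```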

Testing:
Testing is a key step in any project. Similarly, DBT provides a test folder for testing constraints such as uniqueness and non-null values.

Create a dbtTest.yml file under the test folder of the DBT project.

Purpose of the test YML file:
It helps check data integrity and quality, and it keeps those checks separate from the business logic.
It looks like this:

version: 2
models:
  - name: <model name>
    columns:
      - name: order_id
        data_tests:
          - not_null
          - unique

With that, we have walked through the different YAML files in DBT and the purpose of each.

Conclusion:
DBT and its YAML files provide a human-readable way to manage data transformations. With DBT, we can easily create, transform, and test data models, which makes it a valuable tool for data professionals. Together, DBT and YAML empower data analysts, data engineers, and business analysts to work more efficiently.

Thanks for reading.
Developing a Serverless Blogging Platform with AWS Lambda and Python
https://blogs.perficient.com/2025/06/11/developing-a-serverless-blogging-platform-with-aws-lambda-and-python/
Thu, 12 Jun 2025 04:55:52 +0000

Introduction

Serverless is changing the game—no need to manage servers anymore. In this blog, we’ll see how to build a serverless blogging platform using AWS Lambda and Python. It’s scalable, efficient, and saves cost—perfect for modern apps.

How It Works

 

[Image: serverless blogging platform architecture diagram]

Prerequisites

Before starting the demo, make sure you have: an AWS account, basic Python knowledge, AWS CLI and Boto3 installed.

Demonstration: Step-by-Step Guide

Step 1: Create a Lambda Function

Open the Lambda service and click “Create function.” Choose “Author from scratch,” name it something like BlogPostHandler, select Python 3.x, and give it a role with access to DynamoDB and S3. Then write your code using Boto3 to handle CRUD operations for blog posts stored in DynamoDB.

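The original post attaches the function code as a file; below is an illustrative sketch of what such a handler might look like. The table name `BlogPosts` and the event shapes are assumptions, not the post's exact code, and the `table` parameter is injected so the routing logic can be exercised without AWS:

```python
import json

def lambda_handler(event, context, table=None):
    """Route API Gateway proxy events to CRUD operations on blog posts.

    `table` is injected for local testing; inside AWS it would default to
    boto3.resource("dynamodb").Table("BlogPosts").
    """
    if table is None:
        import boto3  # imported lazily so the module loads without AWS deps
        table = boto3.resource("dynamodb").Table("BlogPosts")

    # CORS headers so the S3/CloudFront-hosted frontend can call the API
    headers = {
        "Access-Control-Allow-Origin": "*",
        "Content-Type": "application/json",
    }

    method = event.get("httpMethod", "GET")
    if method == "GET":
        items = table.scan().get("Items", [])
        return {"statusCode": 200, "headers": headers, "body": json.dumps(items)}
    if method == "POST":
        post = json.loads(event.get("body") or "{}")
        table.put_item(Item=post)
        return {"statusCode": 201, "headers": headers, "body": json.dumps(post)}
    if method == "DELETE":
        post_id = (event.get("pathParameters") or {}).get("postId")
        table.delete_item(Key={"postId": post_id})
        return {"statusCode": 204, "headers": headers, "body": ""}
    return {"statusCode": 405, "headers": headers, "body": "Method Not Allowed"}
```

The dependency injection on `table` is purely a testing convenience; in production the default boto3 branch runs.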

Step 2: Set Up API Gateway

First, go to REST API and click “Build.” Choose “New API,” name it something like BlogAPI, and select “Edge optimized” for global access. Then create a resource like /posts, add methods like GET or POST, and link them to your Lambda function (e.g. BlogPostHandler) using Lambda Proxy integration. After setting up all methods, deploy it by creating a stage like prod. You’ll get an Invoke URL which you can test using Postman or curl.
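Once the stage is deployed, a quick smoke test from the terminal looks like this (the host below is a placeholder; substitute your actual Invoke URL):

```shell
curl https://<api-id>.execute-api.<region>.amazonaws.com/prod/posts
```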


 

Step 3: Configure DynamoDB

Open DynamoDB and click “Create table.” Name it something like BlogPosts, set postId as the partition key. If needed, add a sort key like category for filtering. Default on-demand capacity is fine—it scales automatically. You can also add extra attributes like timestamp or tags for sorting and categorizing. Once done, hit “Create.”
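If you prefer the CLI, a minimal equivalent of the console steps above (same table and key names) might look like this:

```shell
aws dynamodb create-table \
  --table-name BlogPosts \
  --attribute-definitions AttributeName=postId,AttributeType=S \
  --key-schema AttributeName=postId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```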


 


Step 4: Deploy Static Content on S3

First, make your front-end files—HTML, CSS, maybe some JavaScript. Then go to AWS S3, create a new bucket with a unique name, and upload your files like index.html. This will host your static website.


After uploading, set the bucket policy to allow public read access so anyone can view your site. That’s it—your static website will now be live from S3.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
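Rather than pasting the policy in the console, you can also attach it with the AWS CLI (here policy.json is the file above, with your bucket name filled in):

```shell
aws s3api put-bucket-policy --bucket your-bucket-name --policy file://policy.json
```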

After uploading, don’t forget to replace your-bucket-name in the bucket policy with your actual S3 bucket name. This makes sure the permissions work properly. Now your static site is live—S3 will serve your HTML, CSS, and JS smoothly and reliably.

Step 5: Distribute via CloudFront

Go to CloudFront and create a new Web distribution. Set the origin to your S3 website URL (like your-bucket-name.s3-website.region.amazonaws.com, not the ARN). For Viewer Protocol Policy, choose “Redirect HTTP to HTTPS” for secure access. Leave other settings as-is unless you want to tweak cache settings. Then click “Create Distribution”—your site will now load faster worldwide.


To let your frontend talk to the backend, you need to enable CORS in API Gateway. Just open the console, go to each method (like GET, POST, DELETE), click “Actions,” and select “Enable CORS.” That’s it—your frontend and backend can now communicate properly.


Additionally, make sure your Lambda function responses include the appropriate CORS headers (we already added them in our Lambda function).
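For reference, a typical CORS header set for a Lambda proxy response looks like the following (values are illustrative; in production, prefer your CloudFront domain over `*`):

```python
# Headers to merge into every Lambda proxy response (illustrative values)
cors_headers = {
    "Access-Control-Allow-Origin": "*",          # or your CloudFront domain
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "GET,POST,DELETE,OPTIONS",
}
```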

 

Results

That’s it—your serverless blogging platform is ready! API Gateway gives you the endpoints, Lambda handles the logic, DynamoDB stores your blog data, and S3 + CloudFront serve your frontend fast and globally. Fully functional, scalable, and no server headaches!

 


Conclusion

Building a serverless blog with AWS Lambda and Python shows how powerful and flexible serverless really is. It’s low-maintenance, cost-effective, and scales easily perfect for anything from a personal blog to a full content site. A solid setup for modern web apps!

How to Add Product Badges in Optimizely Configured Commerce Spire
https://blogs.perficient.com/2025/06/06/how-to-add-product-badges-in-optimizely-configured-commerce-spire/
Fri, 06 Jun 2025 14:33:54 +0000

This blog is written for developers, merchandisers, or client teams looking to display visual indicators (e.g., “New”, “Sale”, “Non-Returnable”, “Best Seller”) on products within the storefront. In Ecommerce, badges are small visual cues that communicate important product information to customers, such as “New Arrival”, “Sale”, or “Limited Stock”. In Optimizely Configured Commerce (Spire), product badges can be a powerful way to highlight key promotions or product statuses, thereby improving the user experience.

This blog post walks through how to enable and customize badges within Spire-based sites.

What Are Product Badges?

Badges are visual elements displayed over product images or titles to indicate special status or promotions. Common use cases include:

  • New – recently added products
  • Sale – discounted items
  • Non-Returnable – items that cannot be returned (innerwear, digital downloads, razors or blades, etc.)
  • Best Seller – top-performing SKUs
  • Limited Stock – low inventory

Step 1: Enable and Configure Badges in the Admin Console

  • Log in to the Admin Console
  • Go to Admin Console > Marketing > Product Badges


  • Click Add Badge


  • Fill in the fields:
    • Name: e.g., “Sale”
    • Activated On: Start date for showing the "Sale" product badge on the product. Defaults to the current date.
    • Deactivated On: End date for showing the "Sale" product badge on the product.
    • Sort Order: Determines the badge display order when multiple badges are displayed. Items with the same sort order are displayed alphabetically by badge name.
    • Display Locations:
      1. Overlay – On/Off: Shows badge as an overlay on main product images
      2. Badge Widget – On/Off: Shows badge wherever the badge widget is displayed. You can assign badges manually or automate them based on rules via custom logic.
      3. Badge Styling:

        Badge Type: Text or Image

        Text

              • Display Text: The text displayed on the product, e.g., "Sale". This text can be translated into other languages.
              • Text Color Hex Code: Enter the hex code without #, e.g., 000000
              • Badge Color Hex Code: Enter the hex code without #, e.g., ffffff
              • Badge Style: Select "Round" or "Rectangle"

        Image

              • Large Image Badge Path: Browse the image path and preview it.
              • Image Alt Text: Add image alternate text so that text will be shown if the image is not available.
          • Badge Positioning:
            1. Large Image Placement: Default "None". Other options: Top Center, Top Left, Top Right, Bottom Center, Bottom Right, and Bottom Left
            2. Large Image Text Size: Select Large, Medium, or Small

        For example, create a new badge named "Sale" with the Text badge styling.


Step 2: Assign Badges to Products, Product Rules, and Product Attributes

Products

Click on the “Assign Products” button


  • Open the pop-up and search for products.


  • Select products. Click on the “Assign” and “Done” buttons.


  • The badge is now assigned to the selected products.

Product Rules

You can create product rules based on “Product Custom Properties” and “Product fields”.


Product Attributes

You can assign multiple product attributes for this badge.


Step 3: Enable Product Badges in CMS

  1. Go to Content Admin
  2. Go to any product list page
  3. Click on the Edit icon


4. Click on the Edit icon on the "ProductList/ProductListCardList" widget

    • Show Image Badges and set the maximum number of image badges
    • Show Text Badges and set the maximum number of text badges


 

Step 4: Display Product Badges in Spire Frontend

Text Product Badges example:

Product List Page:

[Image: badges displayed on the product list page]

Product Detail Page:

[Image: badges displayed on the product detail page]

Conclusion

Badges in Optimizely Configured Commerce are a simple yet effective way to elevate merchandising on your storefront. By combining back-office configuration with simple frontend customizations, you can create a more engaging and informative shopping experience.

Simplify Cloud-Native Development with Quarkus Extensions
https://blogs.perficient.com/2025/06/06/simplify-cloud-native-development-with-quarkus-extensions/
Fri, 06 Jun 2025 07:31:46 +0000

One of the challenges developers face when building cloud-native applications is figuring out the right set of libraries and integrations to use. Quarkus alleviates this pain point and makes development faster and more seamless thanks to the rich set of extensions built into the Quarkus ecosystem. Extensions are pre-integrated capabilities that maximize both developer delight and runtime performance. In my previous blog, I discussed how Quarkus live coding enhances the dev experience. Today, let's dive deeper into Quarkus extensions.

Why Extensions Matter

A traditional Java stack often requires manual configuration and glue code to piece together the various libraries and interceptors that need to be integrated. Quarkus changes the game by providing extensions that are:

  • Optimized for build time and runtime performance

  • Preconfigured to reduce boilerplate

  • Integrated seamlessly with Quarkus dev services

  • Compatible with native compilation via GraalVM

This means you have less setup, faster feedback loops, and more time to write business logic.

Top Extensions to Explore

 RESTEasy Reactive

Create RESTful APIs with minimal configuration and blazing-fast performance. Quarkus supports both classic RESTEasy and the newer RESTEasy Reactive, which is designed for reactive programming models.

@Path("/hello")
public class HelloResource {
    @GET
    public String hello() {
        return "Hello from Quarkus!";
    }
}

Hibernate ORM with Panache

Panache simplifies JPA by reducing boilerplate code and making your data layer expressive and concise.

@Entity
public class Person extends PanacheEntity {
    public String name;
    public static Person findByName(String name) {
        return find("name", name).firstResult();
    }
}

Kubernetes & OpenShift

Quarkus offers native support to generate deployment YAMLs, making it cloud-native out of the box.

./mvnw clean package -Dquarkus.kubernetes.deploy=true

You can also configure deployment details using application properties like:

quarkus.kubernetes.name=my-app
quarkus.kubernetes.replicas=3

SmallRye (MicroProfile)

Need configuration, health checks, metrics, or OpenAPI? Just add the right SmallRye extension.

./mvnw quarkus:add-extension -Dextensions="smallrye-health"

Then add a health endpoint:

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

@Liveness
@ApplicationScoped
public class LivenessCheck implements HealthCheck {
    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.up("I'm alive!");
    }
}

Getting Started

Adding extensions is a breeze using the Quarkus CLI or Maven plugin:

quarkus ext add 'hibernate-orm-panache'

Or:

./mvnw quarkus:add-extension -Dextensions="resteasy-reactive, kubernetes"

Conclusion

Quarkus extensions are a great way to include common features in your application without worrying about how all the complicated pieces fit together. Whether you're building REST APIs, integrating with databases, or deploying applications to Kubernetes, there is likely an extension that can help. It is an exciting time to upgrade your Java technology stack for the cloud.

Fastify (Node.Js Framework): The Secret to Creating Scalable and Secure Business Applications
https://blogs.perficient.com/2025/05/28/fastify-node-js-framework-the-secret-to-creating-scalable-and-secure-business-applications/
Wed, 28 May 2025 06:16:22 +0000

Introduction to Fastify (Node.Js Framework)

Fastify is a fast, low-overhead web framework for Node.js that has gained popularity among developers in recent years. With its lightweight architecture and rich feature set, Fastify is an excellent platform for developing high-performance online apps. Just as everything is an object in JavaScript, everything is a plugin in Fastify. In this guide, we'll explore the features, benefits, and use cases of Fastify and provide examples to help you get started.


Key Features of Fastify

Fastify offers several key features that make it an attractive choice for building web applications:

  1. Fast and Lightweight: Fastify was designed to be quick and lightweight, making it ideal for developing high-performance online applications.
  2. Async/Await Support: Fastify supports async/await syntax, making it easier to write asynchronous code that’s easier to read and maintain.
  3. Robust Error Handling: Fastify has an error management system that enables developers to handle errors in a centralized manner.
  4. Extensive Plugin Ecosystem: Fastify boasts a growing ecosystem of plugins that offer additional functionality, including support for WebSockets, GraphQL, and more.
  5. Support for HTTPS: Fastify’s built-in support for HTTPS ensures that user data is secure and protected.

[Image: Fastify benchmark results]

Getting Started with Fastify

Installation

npm install fastify


C:\projects\fastify-demo>npm init -y
Wrote to C:\projects\fastify-demo\package.json:

{
  "name": "fastify-demo",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": ""
}

C:\projects\fastify-demo>npm i fastify
added 47 packages, and audited 48 packages in 2s
14 packages are looking for funding

Once installed, you can create a simple Fastify application:

[Image: a minimal "Hello World" Fastify application]

Output:

[Image: the server's "Hello World" response]

Building RESTful APIs with Fastify

Fastify offers a straightforward and intuitive API for developing RESTful APIs. Here’s an example:


import fastify from 'fastify';

const app = fastify();

app.get('/users', async () => {
  return [{ id: 1, name: 'John Doe' }];
});

app.post('/users', async (request, reply) => {
  const { name } = request.body;
  // Create a new user
  return { id: 2, name };
});

app.listen({ port: 3000 }, () => {
  console.log(`Server listening on port 3000`);
});


This example creates a simple RESTful API that responds to GET and POST requests at the /users endpoint.

Error-Handling in Fastify

Fastify provides a built-in error-handling mechanism that allows developers to handle errors in a centralized manner. Here’s an example:


import fastify from 'fastify';

const app = fastify();

app.get('/users', async () => {
  throw new Error('Something went wrong');
});

app.setErrorHandler((error, request, reply) => {
  // Handle error
  reply.code(500).send({ message: 'Internal Server Error' });
});

app.listen({ port: 3000 }, () => {
  console.log(`Server listening on port 3000`);
});

This example creates a simple error-handling mechanism that catches and handles errors in a centralized manner.

Real-World Use Cases for Fastify

Fastify is perfect for developing high-performance web-based applications with real-time updates, such as:

  1. Real-time chat applications: Fastify’s WebSocket support makes it suitable for developing real-time chat applications
  2. Live updates: Fastify’s support for WebSockets enables real-time updates, making it ideal for applications that require live updates.
  3. High-traffic web applications: Fastify’s low-overhead design enables web applications to handle high traffic and large datasets.
  4. Microservices architecture: Fastify’s lightweight architecture makes it excellent for developing microservices.

Best Practices for Using Fastify

  1. Use async/await syntax: Fastify supports the async/await syntax, which makes it easier to write asynchronous code.
  2. Use plugins: Fastify’s plugin ecosystem offers additional functionality, including support for WebSockets and GraphQL.
  3. Use error handling: Fastify’s built-in error handling mechanism allows developers to handle errors in a centralized manner.
  4. Optimize performance: Fastify’s low-overhead design enables web applications to handle high traffic and large datasets.
  5. Use HTTPS: Fastify’s built-in support for HTTPS ensures that user data is secure and protected.

Benefits of Using Fastify

Fastify offers several benefits that make it an attractive choice for building web applications:

  1. Improved Performance: Fastify’s low-overhead design enables web applications to handle high traffic and large datasets without significant performance degradation.
  2. Faster Development: Fastify’s lightweight architecture and minimalistic approach enable developers to build web applications quickly.
  3. Enhanced Security: Fastify’s built-in support for HTTPS and robust error handling ensures that user data is secure and protected.
  4. Real-time Updates: Fastify’s support for WebSockets enables real-time updates, making it an ideal choice for applications that require live data.

The Disadvantages of Using Fastify (Node.js Framework)

  • Fastify has a unique architecture and plugin system that developers who are familiar with other frameworks, such as Express.js, may find challenging to learn.
  • Although Fastify’s ecosystem is expanding, it remains smaller than Express.js. This implies that fewer plugins and integrations are available.

Conclusion

Fastify is a powerful and efficient web framework that offers various advantages for developing high-performance web apps. Fastify’s lightweight architecture, extensive feature set, and low-overhead design make it perfect for creating online applications that require real-time updates and great performance.

By following best practices and using Fastify’s built-in features, developers can build fast, secure, and scalable web applications.

October CMS: A Modern CMS for Developers and Creators
https://blogs.perficient.com/2025/05/27/october-cms-a-modern-cms-for-developers-and-creators/
Tue, 27 May 2025 12:45:00 +0000

In the ever-evolving world of content management systems (CMS), there are many options to choose from—WordPress, Joomla, Drupal, and others. But for developers who love clean code, flexibility, and control, October CMS stands out as a modern, elegant solution built on the popular Laravel PHP framework.

What is October CMS?

October CMS is an open-source, free CMS that makes web building easier without compromising power or flexibility. It provides a developer-friendly platform for creating anything from basic websites to intricate web applications by utilizing Laravel, one of the most popular PHP frameworks.

Key Features of October CMS

1. The Laravel Framework

  • Since October CMS is based on Laravel, you can take advantage of Laravel's extensive ecosystem, including Artisan commands, routing, and Eloquent ORM.

2. Twig for Flat-File Templating

  • Templates are saved as files rather than in the database, which makes version control easy. Its templating engine, Twig, is simple and beginner-friendly.

3. Strong Admin Interface

  • Clients and content editors will adore October’s clear, user-friendly backend interface. It’s simple to manage pages, media, blog entries, and plugins.

4. Theme and Plugin System

  • October’s architecture is modular, so you may add or create plugins to increase functionality. From eCommerce to SEO to static pages, the marketplace offers a vast array of options.

5. Headless and API-Friendly Features

  • Do you want to use Vue, React, or another JS framework to create a headless CMS with a frontend? Laravel’s API tools enable October CMS to achieve that as well.

Use Cases for October CMS

1. Business Websites

Ideal For: Small to large businesses looking for a professional online presence.

  • October CMS allows agencies and developers to build custom, brand-aligned websites with features tailored to the client’s needs.
  • Business owners benefit from a clean, easy-to-use admin interface to manage their content, announcements, services, or blog posts.
  • Built-in SEO and plugin integrations help with online visibility.
  • Can include forms, analytics, and customer portals as needed.

2. Personal Websites and Blogs

Ideal For: Writers, bloggers, or professionals maintaining a personal web presence.

  • Using the Blog plugin, individuals can publish articles, share insights, and engage with readers.
  • The backend is simple enough for non-technical users to create and manage posts.
  • It’s easy to style the site to match personal branding or a unique aesthetic.

3. Portfolios

Ideal For: Designers, developers, photographers, and creatives.

  • You can showcase projects, images, and case studies in a visually appealing format.
  • The CMS allows easy updates to content and media without relying on a developer.
  • Supports galleries, animations, and custom templates to help creatives stand out.
  • Extendable with contact forms and user interactions (e.g., testimonials or client inquiries).

4. eCommerce Sites

Ideal For: Online stores of any size.

  • October CMS doesn’t have built-in eCommerce but works beautifully with plugins like:
    • Mall – a full-featured, customizable eCommerce solution.
    • Shopaholic – another powerful plugin with extensible architecture and marketing features.
  • Both plugins allow product management, order processing, payment integration, and customer management.
  • Ideal for developers building tailored eCommerce sites with full control over layout and functionality.

5. Headless CMS Configurations

Ideal For: Projects using frontend frameworks like Vue.js, React, or mobile apps needing backend content.

  • October CMS can act as a headless CMS, serving content via APIs to external frontends.
  • Developers can use the backend for content management and expose data through custom endpoints or REST APIs.
  • Allows you to decouple the frontend from the backend for more dynamic and modern interfaces.

6. Custom Online Applications with a Backend CMS

Ideal For: Any web application that needs custom logic and a robust content management backend.

  • October CMS, built on Laravel, is perfect for custom web apps—like booking systems, dashboards, client portals, learning platforms, etc.
  • You can create plugins to define business logic and use October’s admin panel to manage users, data, or workflows.
  • Gives you full access to Laravel’s features—middleware, service providers, queues, etc.—while still offering a CMS interface for content.

Who Is It For?

October CMS is ideal for:

  • Developers: October CMS gives you complete control over the architecture and code. It's perfect for developers who want a simple, adaptable framework that works well with Laravel and don't want to deal with bulky systems.
  • Agencies: October CMS's adaptability and modular design make it ideal for digital agencies creating custom websites or web apps for clients. It facilitates clean code management, quick development, and the production of unique plugins.
  • Freelancers: Freelancers searching for a CMS built on Laravel will appreciate October's familiar development environment. It enables them to build client projects efficiently without sacrificing performance or flexibility.
  • Content Creators: Non-technical users can easily manage content thanks to the backend's ease of use and intuitiveness. Editors and marketers can swiftly publish and update content without developer assistance.

If you’re already comfortable with PHP or Laravel and want more control over your projects than standard drag-and-drop CMSs offer, this is a wonderful option.

Getting Started with October CMS

You can install October CMS using Composer:

composer create-project october/october my-project

Then point your local server (e.g., Laravel Valet, XAMPP, Homestead) to the /public directory and follow the installation wizard in your browser.

Plugins Worth Exploring

1. Blog

Purpose: Add a full-featured blogging system to your website.

  • This plugin allows you to publish and manage blog posts easily.
  • Supports categories, tags, featured images, and author management.
  • Integrates with the WYSIWYG editor, making content creation intuitive.
  • Includes components for displaying recent posts, post lists, and single posts.
  • Ideal for websites that need a news section, articles, or traditional blog features.

2. Static Pages

Purpose: Build and manage pages using a visual, drag-and-drop interface.

  • Part of the Pages & Menu plugin by RainLab, it enables content editors to create pages without coding.
  • Includes a visual layout builder and menu editor for quick site structure management.
  • Pages can be organized hierarchically, making it perfect for brochure websites and landing pages.
  • Supports reusable content blocks and layout templates to speed up development.

3. Mall

Purpose: Add eCommerce functionality to your site.

  • A comprehensive online store plugin built specifically for October CMS.
  • Features include product management, categories, inventory tracking, orders, payments, shipping, and tax rules.
  • Supports digital and physical products, custom attributes, and variants (like sizes or colors).
  • Offers extensibility through additional plugins and API hooks.
  • Works well for small businesses or developers building custom eCommerce solutions.

4. User

Purpose: Enable frontend user registration, login, and profiles.

  • Also developed by RainLab, this plugin adds user account functionality to your site.
  • Includes login, registration, password reset, account management, and permissions.
  • You can extend it to allow users to interact with other parts of your site (e.g., comment on blog posts or purchase products).
  • Useful for community-based websites, membership sites, and client dashboards.

5. SEO Extension

Purpose: Improve your site’s visibility in search engines.

  • Allows you to manage SEO metadata such as titles, descriptions, canonical URLs, and Open Graph tags.
  • Generates XML sitemaps automatically.
  • Works well with Static Pages, Blog, and other content plugins.
  • Gives you control over how your content appears in search engine results and on social media.

Conclusion

October CMS is a breath of fresh air in the world of CMS platforms. It doesn’t try to be everything for everyone—but what it does, it does exceptionally well. If you value clean architecture, developer freedom, and modern PHP practices, October CMS might just be your new favorite tool.

Whether you’re building a website for a client or crafting your own web application, October CMS gives you the power to do it efficiently and elegantly.

]]>
https://blogs.perficient.com/2025/05/27/october-cms-a-modern-cms-for-developers-and-creators/feed/ 4 381988
Promises Made Simple: Understanding Async/Await in JavaScript https://blogs.perficient.com/2025/04/22/promises-made-simple-understanding-async-await-in-javascript/ https://blogs.perficient.com/2025/04/22/promises-made-simple-understanding-async-await-in-javascript/#respond Tue, 22 Apr 2025 09:42:05 +0000 https://blogs.perficient.com/?p=380376

JavaScript is single-threaded. That means it runs one task at a time, on one core. But then how does it handle things like API calls, file reads, or user interactions without freezing up?

That’s where Promises and async/await come into play. They help us handle asynchronous operations without blocking the main thread.

Let’s break down these concepts in the simplest way possible so whether you’re a beginner or a seasoned dev, it just clicks.

JavaScript has something called an event loop. It’s always running, checking if there’s work to do—like handling user clicks, network responses, or timers. In the browser, the browser runs it. In Node.js, Node takes care of it.

When an async function runs and hits an await, it pauses that function. It doesn’t block everything—other code keeps running. When the awaited Promise settles, that async function picks up where it left off.
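A tiny sketch makes that pause-without-blocking behaviour visible (delay is our own helper here, not a built-in):

```javascript
// delay() wraps setTimeout in a Promise so it can be awaited.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function task() {
  console.log('task started');
  await delay(100);                 // pauses task(), not the whole program
  console.log('task resumed');
}

task();
console.log('main keeps running');  // printed before "task resumed"
```

Only task() is suspended at the await; the rest of the script carries on, and the function resumes once the timer's promise settles.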

 

What is a Promise?

A Promise is an object that represents the eventual result of an asynchronous operation. A Promise is always in one of three states:

  • ⏳ Pending – Still waiting for the result.
  • ✅ Fulfilled – The operation completed successfully.
  • ❌ Rejected – Something went wrong.

Instead of using nested callbacks (aka “callback hell”), Promises allow cleaner, more manageable code using chaining.

 Example:

fetchData()
  .then(data => process(data))
  .then(result => console.log(result))
  .catch(error => console.error(error));

 

Common Promise Methods

Let’s look at the essential Promise utility methods:

  1. Promise.all()

Waits for all promises to resolve. If any promise fails, the whole thing fails.

Promise.all([p1, p2, p3])
  .then(results => console.log(results))
  .catch(error => console.error(error));
  • ✅ Resolves when all succeed.
  • ❌ Rejects fast if any fail.
  2. Promise.allSettled()

Waits for all promises, regardless of success or failure.

Promise.allSettled([p1, p2, p3])
  .then(results => console.log(results));
  • Each result shows { status: “fulfilled”, value } or { status: “rejected”, reason }.
  • Great when you want all results, even the failed ones.
  3. Promise.race()

Returns as soon as one promise settles (either resolves or rejects).

Promise.race([p1, p2, p3])
  .then(result => console.log('Fastest:', result))
  .catch(error => console.error('First to fail:', error));
  4. Promise.any()

Returns the first fulfilled promise. Ignores rejections unless all fail.

Promise.any([p1, p2, p3])
  .then(result => console.log('First success:', result))
  .catch(error => console.error('All failed:', error));

  5. Promise.resolve() / Promise.reject()

  • Promise.resolve(value) creates an already-resolved promise.
  • Promise.reject(reason) creates an already-rejected promise.

Used for quick returns or mocking async behavior.
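For instance, they make it easy to stub an async dependency without real I/O (a sketch; fakeApi and its methods are invented for illustration):

```javascript
// Promise.resolve / Promise.reject give you already-settled promises,
// handy for mocking an async API in tests. fakeApi is hypothetical.
const fakeApi = {
  getUser: () => Promise.resolve({ id: 1, name: 'Ada' }),
  deleteUser: () => Promise.reject(new Error('forbidden')),
};

fakeApi.getUser().then(user => console.log(user.name));      // Ada
fakeApi.deleteUser().catch(err => console.log(err.message)); // forbidden
```

Code that consumes fakeApi behaves exactly as it would against a real async service, which is what makes these helpers useful for testing.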

 

Why Not Just Use Callbacks?

Before Promises, developers relied on callbacks:

getData(function(response) {
  process(response, function(result) {
    finalize(result);
  });
});

This worked, but it quickly became messy and deeply nested: the infamous "callback hell."

 

 What is async/await Really Doing?

Under the hood, async/await is just syntactic sugar over Promises. It makes asynchronous code look synchronous, improving readability and debuggability.

How it works:

  • When you declare a function with async, it always returns a Promise.
  • When you use await inside an async function, the execution of that function pauses at that point.
  • It waits until the Promise is either resolved or rejected.
  • Once resolved, it returns the value.
  • If rejected, it throws the error, which you can catch using try…catch.
async function greet() {
  return 'Hello';
}
greet().then(msg => console.log(msg)); // Hello

Even though you didn’t explicitly return a Promise, greet() returns one.

 

Execution Flow: Synchronous vs Async/Await

Let’s understand how await interacts with the JavaScript event loop.

console.log("1");

setTimeout(() => console.log("2"), 0);

(async function() {
  console.log("3");
  await Promise.resolve();
  console.log("4");
})();

console.log("5");

Output:


1
3
5
4
2

Explanation:

  • The await doesn’t block the main thread.
  • It puts the rest of the async function in the microtask queue, which runs after the current stack and before setTimeout (macrotask).
  • That’s why “4” comes after “5”.

 

 Best Practices with async/await

  1. Use try/catch for Error Handling

Avoid unhandled promise rejections by always wrapping await logic inside a try/catch.

async function getUser() {
  try {
    const res = await fetch('/api/user');
    if (!res.ok) throw new Error('User not found');
    const data = await res.json();
    return data;
  } catch (error) {
    console.error('Error fetching user:', error.message);
    throw error; // rethrow if needed
  }
}
  2. Run Parallel Requests with Promise.all

Don’t await sequentially unless there’s a dependency between the calls.

❌ Bad:

const user = await getUser();
const posts = await getPosts(); // waits for user even if not needed

✅ Better:

const [user, posts] = await Promise.all([getUser(), getPosts()]);
  3. Avoid await in Loops (when possible)

❌ Bad:

//Each iteration waits for the previous one to complete
for (let user of users) {
  await sendEmail(user);
}

✅ Better:

//Run in parallel
await Promise.all(users.map(user => sendEmail(user)));

Common Mistakes

  1. Using await outside async
const data = await fetch(url); // ❌ SyntaxError
  2. Forgetting to handle rejections
    If your async function throws and you don’t .catch() it (or use try/catch), your app may crash in Node or log warnings in the browser.
  3. Blocking unnecessary operations
    Don’t await things that don’t need to be awaited. Only await when the next step depends on the result.
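A common workaround for the first mistake is to wrap top-level logic in an async IIFE (modern ES modules also support top-level await directly):

```javascript
// An async IIFE lets you use await at the "top level" of a script,
// even in environments without top-level await (e.g. CommonJS):
(async () => {
  try {
    // Promise.resolve stands in for a real call like fetch(url):
    const data = await Promise.resolve({ status: 'ok' });
    console.log(data.status); // ok
  } catch (err) {
    console.error('request failed:', err); // rejections land here, not unhandled
  }
})();
```

Because the whole body sits inside try/catch, this pattern also covers the second mistake: no rejection escapes unhandled.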

 

Real-World Example: Chained Async Workflow

Imagine a system where:

  • You authenticate a user,
  • Then fetch their profile,
  • Then load related dashboard data.

Using async/await:

async function initDashboard() {
  try {
    const token = await login(username, password);
    const profile = await fetchProfile(token);
    const dashboard = await fetchDashboard(profile.id);
    renderDashboard(dashboard);
  } catch (err) {
    console.error('Error loading dashboard:', err);
    showErrorScreen();
  }
}

Much easier to follow than chained .then() calls, right?

 

Converting Promise Chains to Async/Await

Old way:

login()
  .then(token => fetchUser(token))
  .then(user => showProfile(user))
  .catch(error => showError(error));

With async/await:

async function start() {
  try {
    const token = await login();
    const user = await fetchUser(token);
    showProfile(user);
  } catch (error) {
    showError(error);
  }
}

Cleaner. Clearer. Less nested. Easier to debug.

 

Bonus utility wrapper for Error Handling

If you hate repeating try/catch, use a helper:

const to = promise => promise.then(res => [null, res]).catch(err => [err]);

async function loadData() {
  const [err, data] = await to(fetchData());
  if (err) return console.error(err);
  console.log(data);
}

 

Final Thoughts

Both Promises and async/await are powerful tools for handling asynchronous code. Promises came first and are still widely used, especially in libraries. async/await is now the preferred style in most modern JavaScript apps because it makes the code cleaner and easier to understand.

 

Tip: You don’t have to choose one forever — they work together! In fact, async/await is built on top of Promises.
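Because await accepts any promise, the two styles compose freely (fetchNumbers below is a stand-in for a real async data source):

```javascript
// A stand-in for a real async call such as fetch():
const fetchNumbers = () => Promise.resolve([1, 2, 3]);

async function total() {
  // a .then() transformation, consumed with await:
  return await fetchNumbers().then(nums => nums.reduce((a, b) => a + b, 0));
}

total().then(sum => console.log(sum)); // 6
```

Here a .then() chain does the transformation while await collects the final value, showing that the styles are interchangeable at any point in the chain.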

 

]]>
https://blogs.perficient.com/2025/04/22/promises-made-simple-understanding-async-await-in-javascript/feed/ 0 380376
Scoping, Hoisting and Temporal Dead Zone in JavaScript https://blogs.perficient.com/2025/04/17/scoping-hoisting-and-temporal-dead-zone-in-javascript/ https://blogs.perficient.com/2025/04/17/scoping-hoisting-and-temporal-dead-zone-in-javascript/#respond Thu, 17 Apr 2025 11:44:38 +0000 https://blogs.perficient.com/?p=380251

Before mastering JavaScript, it’s crucial to understand how it thinks behind the scenes. Concepts like scope, hoisting, and the temporal dead zone might sound tricky at first, but they form the backbone of how your code behaves.
In this blog, we’ll break down these key ideas in the simplest way possible—so you can write cleaner code, avoid unexpected bugs, and truly understand what’s happening when your script runs.

What is Scope in JavaScript?

Think of scope like a boundary or container that controls where you can use a variable in your code.

In JavaScript, a variable is only available in the part of the code where it was created. If you try to use it outside that area, it won’t work—that’s because of scope.

This helps in two big ways:

  • Keeps your code safe – Only the right parts of the code can access the variable.
  • Avoids name clashes – You can use the same variable name in different places without them interfering with each other.

JavaScript mainly uses two types of scope:

1. Global Scope – Available everywhere in your code.

2. Local Scope – Available only inside a specific function or block.

 

Global Scope

When you start writing JavaScript code, you’re already in the global scope—this is like the outermost area of your code where variables can live.

If you create a variable outside of any function or block, it’s considered global, which means it can be used anywhere in your code.

var a = 5; // Global variable
function add() {
  return a + 10; // Using the global variable inside a function
}
console.log(window.a); // 5

In this example, a is declared outside of any function, so it’s globally available—even inside add().

A quick note:

  • If you declare a variable with var, it becomes a property of the window object in browsers.
  • But if you use let or const, the variable is still global, but not attached to window.
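You can see the difference in a browser console (a small sketch; the variable names are ours):

```javascript
var withVar = 1; // in a browser script, also reachable as window.withVar
let withLet = 2; // global, but never attached to window

// In browser DevTools:
//   window.withVar -> 1
//   window.withLet -> undefined
console.log(withVar, withLet); // 1 2
```

Both variables are global in scope; only the var one becomes a property of the global object.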
let name = "xyz";
function changeName() {
  name = "abc";  // Changing the value of the global variable
}
changeName();
console.log(name); // abc

In this example, we didn’t create a new variable—we just changed the value of the existing one.

👉 Important:
If you redeclare a global variable inside a function (using let, const, or var again), JavaScript treats it as a new variable in a new scope—not the same one. We’ll cover that in more detail later.

 

 Local Scope

In JavaScript, local scope means a variable is only accessible in a certain part of the code—usually inside a function or a block.

There are two types of local scope:

1. Functional Scope

Whenever you create a function, it creates its own private area for variables. If you declare a variable inside a function, it only exists inside that function.

let firstName = "Shilpa"; // Global
function changeName() {
  let lastName = "Syal"; // Local to this function
console.log (`${firstName} ${lastName}`);
}
changeName();
console.log (lastName); // ❌ Error! Not available outside the function

You can even use the same variable name in different functions without any issue:

function mathMarks() {
  let marks = 80;
  console.log (marks);
}
function englishMarks() {
  let marks = 85;
  console.log (marks);
}

Here, both marks variables are separate because they live in different function scopes.

 

2. Block Scope

Thanks to let and const, you can now create variables that only exist inside a block (like an if, for, or {}).

 

function getMarks() {
  let marks = 60;
  if (marks > 50) {
    const points = 10;
    console.log (marks + points); // ✅ Works here
  }
  console.log (points); // ❌ Uncaught Reference Error: points is not defined
}

 As the points variable is declared inside the if block using the let keyword, it is not accessible outside that block, as shown above. Now try the same example with the var keyword, i.e. declare the points variable with var, and spot the difference.
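If you do try it with var, it behaves like this, because var is function-scoped rather than block-scoped:

```javascript
function getMarksVar() {
  var marks = 60;
  if (marks > 50) {
    var points = 10; // var ignores the block boundary
    console.log(marks + points); // 70
  }
  console.log(points); // 10 (points is still visible outside the if block)
  return points;
}
getMarksVar();
```

The second console.log succeeds instead of throwing, which is exactly the leak that let and const were introduced to prevent.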

LEXICAL SCOPING & NESTED SCOPE:

When you create a function (outer function) that contains another function (inner function), then the inner function has access to the outer function’s variables and methods. This is known as Lexical Scoping.

function outerFunction() {
  let outerVar = "I’m outside";
  function innerFunction() {
      console.log (outerVar); // ✅ Can access outerVar
  }
  innerFunction();
}

In other terms, variables & methods defined in parent function are automatically available to its child functions. But it doesn’t work the other way around—the outer function can’t access the inner function’s variables.

 

VARIABLE SCOPE OR VARIABLE SHADOWING:

You can declare variables with the same name at different scopes. If there’s a variable in the global scope and you create variable with the same name in a function, then you will not get any error. In this case, local variables take priority over global variables. This is known as Variable shadowing, as inner scope variables temporary shadows the outer scope variable with the same name.

If the local variable and global variable have the same name then changing the value of one variable does not affect the value of another variable.

let name = "xyz"
function getName() {
  let name = "abc"            // Redeclaring the name variable
      console.log (name)  ;        //abc
}
getName();
console.log (name) ;          //xyz

To access a variable, the JS engine first looks in the scope that is currently executing. If it doesn’t find the variable there, it looks in the closest parent scope, and that lookup continues up the chain until the engine reaches the global scope. If the global scope doesn’t have the variable either, a ReferenceError is thrown, since the variable doesn’t exist anywhere in the scope chain.

let bonus = 500;
function getSalary() {
 if(true) {
     return 10000 + bonus;  // Looks up and finds bonus in the outer scope
  }
}
   console.log (getSalary()); // 10500

 

Key Takeaways: Scoping Made Simple

Global Scope: Variables declared outside any function are global and can be used anywhere in your code.

Local Scope: Variables declared inside a function exist only inside that function and disappear once the function finishes.

Global Variables Last Longer: They stay alive as long as your program is running.

Local Variables Are Temporary: They’re created when the function runs and removed once it ends.

Lexical Scope: Inner functions can access variables from outer functions, but not the other way around.

Block Scope with let and const: You can create variables that exist only inside {} blocks like if, for, etc.

Same Name, No Clash: Variables with the same name in different scopes won’t affect each other—they live in separate “worlds.” 

Hoisting

To understand Hoisting in JS, it’s essential to know how execution context works. Execution context is an environment where JavaScript code is executed.

It has two main phases:

1. Creation Phase: During this phase, JS allocates memory for variables, functions, and objects. This is where hoisting happens.

2. Execution Phase: During this phase, code is executed line by line.

When JavaScript code runs, the engine hoists all variables and functions, i.e. it allocates memory for them; variables declared with var are initialized with the special value undefined.

 

Key takeaways from hoisting. Let’s explore some examples to illustrate how hoisting works in different scenarios:

  1. Functions – Functions are fully hoisted. They can be invoked before their declaration in code.
foo(); // Output: "Hello, world!"
function foo() {
    console.log("Hello, world!");
}
  2. var – Variables declared with var are hoisted in global scope but initialized with undefined, so they are accessible before their declaration with the value undefined.
console.log(x); // Output: undefined
var x = 5;

This code seems straightforward, but it’s interpreted as:

var x;
console.log (x); // Output: undefined
 x = 5;

3. let, const – Variables declared with let and const are hoisted (in block or script scope) but are not initialized; they stay in the Temporal Dead Zone (TDZ) until their declaration is encountered. Accessing them in the TDZ results in a ReferenceError.

console.log(x); // Throws ReferenceError: Cannot access 'x' before initialization
let x = 5;


What is Temporal Dead Zone (TDZ)?

In JavaScript, all variable declarations—whether made using var, let, or const—are hoisted, meaning the memory for them is set aside during the compilation phase, before the code actually runs. However, the behaviour of hoisting differs based on how the variable is declared.

For variables declared with let and const, although they are hoisted, they are not initialized immediately like var variables. Instead, they remain in an uninitialized state and are placed in a separate memory space. During this phase, any attempt to access them will result in a Reference Error.

This period—from the start of the block until the variable is initialized—is known as the Temporal Dead Zone (TDZ). It’s called a “dead zone” because the variable exists in memory but cannot be accessed until it has been explicitly declared and assigned a value in the code.

console.log (b); // undefined – b is hoisted and initialized (var)
console.log (a); // ReferenceError: Cannot access 'a' before initialization – a is in the TDZ
let a = 10;
var b = 100;

👉 Important: The Temporal Dead Zone helps prevent the use of variables before they are properly declared and initialized, making code more predictable and reducing bugs.

 

🧾 Conclusion

JavaScript hoisting and scoping are foundational concepts that can feel tricky at first, but once you understand them, they make your code more structured and predictable. Hoisting helps explain why some variables and functions work even before they’re declared, while scoping defines where your variables live and how accessible they are. By keeping these concepts in mind and practicing regularly, you’ll be able to write cleaner, more reliable JavaScript. The more you experiment with them, the more confident you’ll become as a developer. Keep learning, keep building, and everything will start to click. Happy coding! 🙌

 

 

]]>
https://blogs.perficient.com/2025/04/17/scoping-hoisting-and-temporal-dead-zone-in-javascript/feed/ 0 380251
Convert a Text File from UTF-8 Encoding to ANSI using Python in AWS Glue https://blogs.perficient.com/2025/04/14/convert-a-text-file-from-utf-8-encoding-to-ansi-using-python-in-aws-glue/ https://blogs.perficient.com/2025/04/14/convert-a-text-file-from-utf-8-encoding-to-ansi-using-python-in-aws-glue/#respond Mon, 14 Apr 2025 19:35:22 +0000 https://blogs.perficient.com/?p=379867

To convert a text file from UTF-8 encoded data to ANSI using AWS Glue, you will typically work with Python or PySpark. However, it’s important to understand that ANSI is not a specific encoding but often refers to Windows-1252 (or similar 8-bit encodings) in a Windows context.

AWS Glue, running on Apache Spark, uses UTF-8 as the default encoding. Converting to ANSI requires handling the character encoding during the writing phase, because Spark itself doesn’t support writing files in encodings other than UTF-8 natively. But there are a few workarounds.

Here’s a step-by-step guide to converting a text file from UTF-8 to ANSI using Python in AWS Glue. Assume you’re working with a plain text file and want to output a similarly formatted file in ANSI encoding.

General Process Flow

Technical Approach Step-By-Step Guide

Step 1: Add the import statements to the code

import boto3
import codecs

Step 2: Specify the source/target file paths & S3 bucket details

# Initialize S3 client
s3_client = boto3.client('s3')
s3_key_utf8 = 'utf8_file_path/filename.txt'
s3_key_ansi = 'ansi_file_path/filename.txt'

# Specify S3 bucket and file paths
bucket_name = 'your-s3-bucket-name'
input_key = s3_key_utf8   #S3Path/name of input UTF-8 encoded file in S3
output_key = s3_key_ansi  #S3 Path/name to save the ANSI encoded file

Step 3: Write a function to convert the text file from UTF-8 to ANSI, based on the parameters supplied (S3 bucket name, source-file, target-file)

# Function to convert UTF-8 file to ANSI (Windows-1252) and upload back to S3
def convert_utf8_to_ansi(bucket_name, input_key, output_key):
    # Download the UTF-8 encoded file from S3
    response = s3_client.get_object(Bucket=bucket_name, Key=input_key)
    # Read the file content from the response body (UTF-8 encoded)
    utf8_content = response['Body'].read().decode('utf-8')
    # Convert the content to ANSI encoding (Windows-1252)
    ansi_content = utf8_content.encode('windows-1252', 'ignore')  # 'ignore' to handle invalid characters
    # Upload the converted file to S3 (in ANSI encoding)
    s3_client.put_object(Bucket=bucket_name, Key=output_key, Body=ansi_content) 

Step 4: Call the function that converts the text file from UTF-8 to ANSI

# Call the function to convert the file 
convert_utf8_to_ansi(bucket_name, input_key, output_key) 

Summary:

The above steps are useful for anyone (developer, tester, analyst, etc.) who needs to convert a UTF-8 encoded text file to ANSI format, making file validation easier.

 

]]>
https://blogs.perficient.com/2025/04/14/convert-a-text-file-from-utf-8-encoding-to-ansi-using-python-in-aws-glue/feed/ 0 379867
Boost Developer Productivity with Quarkus Live Coding https://blogs.perficient.com/2025/03/14/boost-developer-productivity-with-quarkus-live-coding/ https://blogs.perficient.com/2025/03/14/boost-developer-productivity-with-quarkus-live-coding/#comments Fri, 14 Mar 2025 21:22:57 +0000 https://blogs.perficient.com/?p=378687

Quarkus has gained traction as a modern Java framework designed for cloud-native development. In my previous blog, I discussed why learning Quarkus is a great choice. Today, let’s dive deeper into one of its standout features: Live Coding.

What is Quarkus Live Coding?

Live Coding in Quarkus provides an instant development experience where changes to your application’s code, configuration, and even dependencies are reflected in real time without restarting the application. This eliminates the need for slow rebuild-restart cycles, significantly improving productivity.

How Does Live Coding Work?

Quarkus automatically watches for file changes and reloads the necessary components without restarting the entire application. This feature is enabled by default in dev mode and can be triggered using:

mvn quarkus:dev

or if you are using Gradle:

gradle quarkusDev

Once the development server is running, any modifications to your application will be instantly reflected when you refresh the browser or make an API request.

Benefits of Live Coding

  1. Faster Development: Eliminates long wait times associated with traditional Java application restarts.
  2. Enhanced Feedback Loop: See the impact of code changes immediately, improving debugging and fine-tuning.
  3. Seamless Config and Dependency Updates: Application configurations and dependencies can be modified dynamically.
  4. Works with REST APIs, UI, and Persistence Layer: Whether you’re building RESTful services, working with frontend code, or handling database transactions, changes are instantly visible.

Live Coding in Action

Imagine you are developing a REST API with Quarkus and need to update an endpoint. With Live Coding enabled, you simply modify the resource class:

@Path("/hello")
public class GreetingResource {

    @GET
    public String hello() {
        return "Hello, Quarkus!";
    }
}

Change the return message to:

    return "Hello, Live Coding!";

Without restarting the application, refresh the browser or send an API request, and the change is immediately visible. No waiting, no downtime.

Enabling Live Coding in Remote Environments

While Live Coding is enabled by default in dev mode, you can also enable it in remote environments using:

mvn quarkus:remote-dev -Dquarkus.live-reload.url=<remote-server>

This allows developers working in distributed teams or cloud environments to take advantage of fast feedback cycles.

Conclusion

Quarkus Live Coding is a game-changer for Java development, reducing turnaround time and enhancing the overall developer experience. If you’re transitioning to Quarkus, leveraging this feature can significantly improve your workflow.

Have you tried Quarkus Live Coding? Share your experience in the comments!
Stay tuned for more posts on security and reactive programming with Quarkus.

]]>
https://blogs.perficient.com/2025/03/14/boost-developer-productivity-with-quarkus-live-coding/feed/ 1 378687
Optimizely Configured Commerce and Spire CMS – Figuring out Handlers https://blogs.perficient.com/2025/03/10/optimizely-b2b-commerce-and-spire-cms-figuring-out-handlers/ https://blogs.perficient.com/2025/03/10/optimizely-b2b-commerce-and-spire-cms-figuring-out-handlers/#comments Mon, 10 Mar 2025 17:16:11 +0000 https://blogs.perficient.com/?p=378314

I’m now a couple months into exploring Optimizely Configured Commerce and Spire CMS.  As much as I’m up to speed with the Configured Commerce side of things (having past experience with Customized commerce), the Spire CMS side is a bit daunting, having worked with traditional Optimizely CMS for a while. We face challenges in figuring out handlers, a key concept in both Customized Commerce and Spire CMS.

And yes there is documentation, but its more high level and not enough to understand the inner functioning of the code (or maybe I just haven’t had the patience to go through it all yet :)).

Needless to say, I took a rather “figure it out by myself” approach here. I find that this is a much better way to learn and remember stuff :).

Here’s to figuring out handlers

In a commerce site, there is Order History for every customer, with a “Reorder” capability. I will tweak the behavior of this Reorder action and prevent adding a specific SKU to cart again when user clicks “Reorder”.

Challenge #1 – Where does the code tied to reorder live?

Depending on what you are looking for and what you need to change, this can be different files in the Frontend source code.

Challenge #2 – How do I find the right file?

I start by searching on keywords like “reorder” which do lead me to some files but they are mostly .tsx files aka React components that had the Reorder button on them. What I’m looking for instead is the actual method that passes the current order lines to add to cart, in order to intercept and tweak.

Challenge #3 – How do I find the file which takes in Order Lines and adds to cart?

I decided it was time to put my browser skills to good use. I launch the site, open Dev tools, and hit Reorder to monitor all the Network calls that occur. And bravo.. I see the api call to Cart API for bulk load, which is what this action does. Here’s what that looks like :

api/v1/carts/current/cartlines/batch

with a Payload of cartlines sent to add to Cart.

Reverse engineering in action

Step #1 – I traced this back in code. Looked for “cartlines/batch” and found 1 file – CartService.ts

Its OOTB code, but for people new to this like me, we don’t know which folder has what. So, I’ll make this one step easier for you by telling you exactly where this file lives. You will find it at

FrontEnd\modules\client-framework\src\Services\CartService.ts

The method that makes the api call is addLineCollection(parameter: AddCartLinesApiParameter).

Step #2 – I now search for files that called this method. I found quite a few files that call this, but for my specific scenario, I stuck to the ones that said “reorder” specifically. These are the Frontend Handlers in Spire CMS.

Here’s the list and paths of the files that are relevant to the context here :

  • FrontEnd\modules\client-framework\src\{blueprintName}\Pages\OrderDetails\Handlers\Reorder.ts
  • FrontEnd\modules\client-framework\src\{blueprintName}\Pages\OrderHistory\Handlers\Reorder.ts
  • FrontEnd\modules\client-framework\src\{blueprintName}\Pages\OrderStatus\Handlers\Reorder.ts

Once I see the line that makes the call to addLineCollection() method, I check how the parameter is being set.

Step #3 – All that’s left now is to update the code that sets the AddCartLinesApiParameter for this call, from the existing Order’s order lines. I add a filter to exclude the one specific SKU that I don’t want re-added to cart on reorder, on the OrderLines collection. Looks something like this :

cartLines: order.value.orderLines!.filter(o => o.productErpNumber != '{my specific SKU}')

And that was it. I save the files. Webpack rebuilds, I refresh my site, hit Reorder on the order that had this SKU and it no longer gets added to cart.

Conclusion

In theory, it sounds pretty straightforward: you should know the API that gets called, where the calls live in code, which handlers make those calls for each action, and so on.
But for beginners like me, it really isn’t. You don’t always know the structure of the Spire CMS codebase, the concept of blueprints or handlers, the API calls made per action, or how to work with React/TypeScript code. So in my opinion this is a helpful little exercise, and the lessons from it now stick in memory for other similar use cases.
Hope you find it helpful too!
Optimizing Experiences with Optimizely: Custom Audience Criteria for Mobile Visitors https://blogs.perficient.com/2025/03/05/optimizing-experiences-with-optimizely-custom-audience-criteria-for-mobile-visitors/ https://blogs.perficient.com/2025/03/05/optimizing-experiences-with-optimizely-custom-audience-criteria-for-mobile-visitors/#comments Wed, 05 Mar 2025 22:06:56 +0000 https://blogs.perficient.com/?p=378170

In today’s mobile-first world, delivering personalized experiences to visitors using mobile devices is crucial for maximizing engagement and conversions. Optimizely’s powerful experimentation and personalization platform allows you to define custom audience criteria to target mobile users effectively.

By leveraging Optimizely’s audience segmentation, you can create tailored experiences based on factors such as device type, operating system, screen size, and user behavior. Whether you want to optimize mobile UX, test different layouts, or personalize content for Android vs. iOS users, understanding how to define mobile-specific audience criteria can help you drive better results.

In this blog, we’ll explore how to set up simple custom audience criteria for mobile visitors in Optimizely, the key benefits of mobile targeting, and the best practices to enhance user experiences across devices. Let’s dive in!

This solution is based on Example – Create audience criteria, which you can find in the Optimizely documentation.

Create the settings and criterion classes

First, we need to create two classes in our solution:

The VisitorDeviceTypeCriterionSettings class needs to inherit the CriterionModelBase class, and we need only one property to determine whether the visitor is using a desktop or a mobile device.

public bool IsMobile { get; set; }

The abstract CriterionModelBase class requires you to implement the Copy() method. Because you are not using complex reference types, you can implement it by returning a shallow copy as shown (see Create custom audience criteria):

public override ICriterionModel Copy()
{
    return base.ShallowCopy();
}

The entire class will look something like this:

using EPiServer.Data.Dynamic;
using EPiServer.Personalization.VisitorGroups;

namespace AlloyTest.Personalization.Criteria
{
    [EPiServerDataStore(AutomaticallyRemapStore = true)]
    public class VisitorDeviceTypeCriterionSettings : CriterionModelBase
    {
        public bool IsMobile { get; set; }

        public override ICriterionModel Copy()
        {
            // if this class has reference types that require deep copying, then
            // that implementation belongs here. Otherwise, you can just rely on
            // shallow copy from the base class
            return base.ShallowCopy();
        }
    }
}

Now, we need to implement the criterion class VisitorDeviceTypeCriterion and inherit the abstract CriterionBase class with the settings class as the type parameter:

public class VisitorDeviceTypeCriterion : CriterionBase<VisitorDeviceTypeCriterionSettings>

Add a VisitorGroupCriterion attribute to set the category, name, and description of the criterion (for more available VisitorGroupCriterion properties, see Create custom audience criteria):

[VisitorGroupCriterion(
    Category = "MyCustom",
    DisplayName = "Device Type",
    Description = "Criterion that matches type of the user's device"
)]

The abstract CriterionBase class requires you to implement an IsMatch() method that determines whether the current user matches this audience criterion. In this case, we need to determine which device the visitor is using to access our site. Because Optimizely doesn’t provide this out of the box, we need to figure that part out ourselves.

One solution is to read the User-Agent field from the request headers and analyze it to determine the OS and device type. We can do that by writing our own match method:

public virtual bool MatchBrowserType(string userAgent)
{
    var os =
        new Regex(
            @"(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od|ad)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino",
            RegexOptions.IgnoreCase | RegexOptions.Multiline);
    var device =
        new Regex(
            @"1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-",
            RegexOptions.IgnoreCase | RegexOptions.Multiline);
    var deviceInfo = string.Empty;

    if (os.IsMatch(userAgent))
    {
        deviceInfo = os.Match(userAgent).Groups[0].Value;
    }

    // Guard against short User-Agent strings before taking the 4-char prefix.
    if (userAgent.Length >= 4 && device.IsMatch(userAgent.Substring(0, 4)))
    {
        deviceInfo += device.Match(userAgent).Groups[0].Value;
    }

    return !string.IsNullOrEmpty(deviceInfo);
}
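The detection approach itself is language-agnostic: test the User-Agent string against known mobile tokens. Here is a simplified TypeScript sketch of the same idea, with a deliberately shortened pattern — the C# method above carries the full pattern lists, so this regex is for illustration only:

```typescript
// Simplified mobile User-Agent check. The production version above uses
// much longer token lists; this shortened regex is for illustration only.
const mobileOsPattern =
  /(android|bb\d+|meego).+mobile|iphone|ipod|ipad|blackberry|iemobile|opera m(ob|in)i/i;

function isMobileUserAgent(userAgent: string): boolean {
  return mobileOsPattern.test(userAgent);
}

const iphoneUa =
  "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15";
const desktopUa =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/120.0.0.0";

console.log(isMobileUserAgent(iphoneUa));  // true
console.log(isMobileUserAgent(desktopUa)); // false
```

In the browser, `navigator.userAgent` exposes the same string, so the same check could run client-side if you ever needed it for analytics rather than server-side personalization.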

Now, we can go back and implement the IsMatch() method that is required by CriterionBase abstract class.

public override bool IsMatch(IPrincipal principal, HttpContext httpContext)
{
    return MatchBrowserType(httpContext.Request.Headers["User-Agent"].ToString());
}


Test the criterion

In the CMS, we need to create a new audience criterion. When you click the ‘Add Criteria’ button, you will see a ‘MyCustom’ criteria group containing our criterion:

When you select the ‘Device Type’ criterion, you will see something like this:

We can easily add a label for the checkbox by using Optimizely’s translation functionality. Create a new XML file named VisitorGroupCriterion.xml and place it in the folder that holds your translation files, like this:

Put this into the file that you created:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<languages>
  <language name="English" id="en-us">
    <visitorgroups>
      <criteria>
        <ismobile>
          <key>Is Mobile Device (Use this setting to show content only on Mobile)</key>
        </ismobile>
      </criteria>
    </visitorgroups>
  </language>
</languages>


There is one more thing to do. In VisitorDeviceTypeCriterionSettings.cs, decorate the IsMobile property with the translation definition. Add this attribute:

[CriterionPropertyEditor(LabelTranslationKey = "/visitorgroups/criteria/ismobile/key")]

It should look like this:

Now, in the editor view, we have a label for the checkbox.


Now personalize the page by setting content for this visitor group.

Desktop view:


Mobile view:

You can see that there is content that is only visible if you access the site with a mobile device.


And that’s it!
