To streamline project development and maintenance in any programming language, we need the support of metadata, configuration, and documentation. Project configuration is usually handled through configuration files, which are easy to use and give developers a friendly way to interact with a tool. One such type of configuration file used in DBT is the YAML file.
In this blog, we will go through the YAML files required in DBT.
Let's first understand what YAML and DBT are.
DBT (Data Build Tool):
Data transformation is an important process in modern analytics. DBT is a tool to transform, clean, and aggregate data within a data warehouse. Much of DBT's power lies in its use of YAML files for both configuration and transformation.
Note:
Please go through the linked DBT documentation for an introduction to DBT.
What is a YAML file:
YAML originally stood for "Yet Another Markup Language" (now "YAML Ain't Markup Language"). It is easy to read and understand, and YAML is a superset of JSON.
Common use of YAML file:
– Configuration Management:
Used to define configuration such as roles and environments.
– CI/CD Pipelines:
CI/CD tools depend on YAML files to describe their pipelines.
– Data Serialization:
YAML can represent complex data structures such as lists, maps, and nested objects.
– APIs:
YAML can be used to define API contracts and specifications.
Sample Example of YAML file:
YAML files are at the core of defining configuration and transformation in DBT. YAML files use the ".yml" extension.
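Before looking at the DBT-specific files, here is a small, generic YAML snippet (the project, owner, and environment values are purely illustrative) showing the basic key-value, nested-map, and list syntax:

# A simple, illustrative YAML document
project: demo
owner:
  name: data-team
  email: data-team@example.com
environments:
  - dev
  - prod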
The most important YAML file is
profiles.yml:
This file needs to be stored locally (by default under ~/.dbt/). It contains sensitive information that is used to connect to the target data warehouse.
Purpose:
It contains the main connection details used to connect to the data warehouse (Snowflake, Postgres, etc.).
A profile configuration looks like this:
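As a minimal sketch, a profiles.yml for a Snowflake target might look like the following (all account, user, database, and warehouse values below are placeholders to be replaced with your own):

my_dbt_project:          # must match the "profile" name in dbt_project.yml
  target: dev
  outputs:
    dev:
      type: snowflake
      account: <your_account_identifier>
      user: <your_username>
      password: <your_password>
      role: TRANSFORMER
      database: ANALYTICS
      warehouse: TRANSFORMING
      schema: DBT_DEV
      threads: 4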
Note:
We should not share the profiles.yml file with anyone, because it contains target data warehouse credentials. This file is used in DBT Core, not in DBT Cloud (where connections are configured in the UI).
YAML file classification according to DBT component:
Let us go through different components of DBT with corresponding YAML files:
1. dbt_project.yml:
This is the most important configuration file in DBT. It tells DBT which configuration to use for the project. By default, DBT looks for dbt_project.yml in the current working directory.
For Example:
name: string
config-version: 2
version: version
profile: profilename
model-paths: [directorypath]
seed-paths: [directorypath]
test-paths: [directorypath]
analysis-paths: [directorypath]
macro-paths: [directorypath]
snapshot-paths: [directorypath]
docs-paths: [directorypath]
asset-paths: [directorypath]
packages-install-path: directorypath
clean-targets: [directorypath]
query-comment: string
require-dbt-version: version-range | [version-range]
flags: <global-configs>
dbt-cloud:
  project-id: project_id # Required
  defer-env-id: environment # Optional
exposures:
  +enabled: true | false
quoting:
  database: true | false
  schema: true | false
  identifier: true | false
metrics: <metric-configs>
models: <model-configs>
seeds: <seed-configs>
semantic-models: <semantic-model-configs>
saved-queries: <saved-queries-configs>
snapshots: <snapshot-configs>
sources: <source-configs>
tests: <test-configs>
vars: <variables>
on-run-start: sql-statement | [sql-statement]
on-run-end: sql-statement | [sql-statement]
dispatch:
  - macro_namespace: packagename
    search_order: [packagename]
restrict-access: true | false
Model:
Models are SQL files that define how your data is transformed. In a model configuration file, you define the sources, the target tables, and their transformations. It lives under the models directory of the DBT project, and we can name it as per our convenience.
Below is the example:
This is the YAML file under the models directory; here it is named "schema.yml".
Purpose of model YML file:
It configures model-level metadata such as tags, materialization, name, and columns, which are used while transforming the data.
It looks like as below:
version: 2

models:
  - name: my_first_dbt_model
    description: "A starter dbt model"
    columns:
      - name: id
        description: "The primary key for this table"
        data_tests:
          - unique
          - not_null

  - name: my_second_dbt_model
    description: "A starter dbt model"
    columns:
      - name: id
        description: "The primary key for this table"
        data_tests:
          - unique
          - not_null
2. Seed:
Seeds are used to load CSV files into the data warehouse as tables. This is useful for staging reference data before applying any transformation.
Below is the example:
Purpose of Seeds YAML file:
It defines the CSV files under the seeds directory and how they should be loaded into data warehouse tables (database, schema, materialization, and so on).
Configuration file looks like as below:
version: 2

seeds:
  - name: <name>
    description: Raw data from a source
    database: <database name>
    schema: <database schema>
    materialized: table
    sql: |-
      SELECT id, name
      FROM <source_table>
Testing:
Testing is a key step in any project. Similarly, DBT provides tests for things such as unique constraints and not-null values.
Create a dbtTest.yml file under the tests folder of the DBT project.
And it looks like as below:
Purpose of the test YML file:
It helps check data integrity and quality, and keeps these checks separate from the business logic.
It looks like as below:
columns:
  - name: order_id
    tests:
      - not_null
      - unique
With that, we have gone through the different YAML files in DBT and their purposes.
Conclusion:
DBT and its YAML files provide a human-readable way to manage data transformation. With DBT, we can easily create, transform, and test data models, which makes it a valuable tool for data professionals. Together, DBT and YAML empower you to work more efficiently, whether you are a data analyst, data engineer, or business analyst.
Thanks for reading.
Serverless is changing the game—no need to manage servers anymore. In this blog, we’ll see how to build a serverless blogging platform using AWS Lambda and Python. It’s scalable, efficient, and saves cost—perfect for modern apps.
Before starting the demo, make sure you have: an AWS account, basic Python knowledge, AWS CLI and Boto3 installed.
Open the Lambda service and click “Create function.” Choose “Author from scratch,” name it something like BlogPostHandler, select Python 3.x, and give it a role with access to DynamoDB and S3. Then write your code using Boto3 to handle CRUD operations for blog posts stored in DynamoDB.
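As a rough sketch of that handler (the BlogPosts table name matches the DynamoDB step below; the event shape assumes Lambda proxy integration, and the helper is illustrative), it might look something like this:

import json
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('BlogPosts')  # assumed table name, created in the DynamoDB step

def lambda_handler(event, context):
    # API Gateway (Lambda proxy integration) passes the HTTP method and body in the event
    method = event.get('httpMethod')

    if method == 'GET':
        items = table.scan().get('Items', [])
        return _response(200, items)

    if method == 'POST':
        post = json.loads(event.get('body') or '{}')
        table.put_item(Item=post)  # expects a 'postId' partition key in the payload
        return _response(201, post)

    return _response(405, {'message': 'Method not allowed'})

def _response(status, body):
    return {
        'statusCode': status,
        'headers': {'Access-Control-Allow-Origin': '*'},  # CORS header (see the CORS section below)
        'body': json.dumps(body, default=str)
    }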
First, go to REST API and click “Build.” Choose “New API,” name it something like BlogAPI, and select “Edge optimized” for global access. Then create a resource like /posts, add methods like GET or POST, and link them to your Lambda function (e.g. BlogPostHandler) using Lambda Proxy integration. After setting up all methods, deploy it by creating a stage like prod. You’ll get an Invoke URL which you can test using Postman or curl.
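For example, assuming a prod stage and the /posts resource, you could test the deployed API with curl like this (the Invoke URL below is a placeholder; use the one API Gateway gives you):

# List posts
curl https://abc123.execute-api.us-east-1.amazonaws.com/prod/posts

# Create a post
curl -X POST https://abc123.execute-api.us-east-1.amazonaws.com/prod/posts \
  -H "Content-Type: application/json" \
  -d '{"postId": "1", "title": "Hello Serverless", "content": "First post!"}'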
Open DynamoDB and click “Create table.” Name it something like BlogPosts, set postId as the partition key. If needed, add a sort key like category for filtering. Default on-demand capacity is fine—it scales automatically. You can also add extra attributes like timestamp or tags for sorting and categorizing. Once done, hit “Create.”
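If you prefer the CLI, a minimal equivalent of those console steps (same table and key names as above) would be:

aws dynamodb create-table \
  --table-name BlogPosts \
  --attribute-definitions AttributeName=postId,AttributeType=S \
  --key-schema AttributeName=postId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST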
First, make your front-end files—HTML, CSS, maybe some JavaScript. Then go to AWS S3, create a new bucket with a unique name, and upload your files like index.html. This will host your static website.
After uploading, set the bucket policy to allow public read access so anyone can view your site. That’s it—your static website will now be live from S3.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "PublicReadGetObject", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::your-bucket-name/*" } ] }
After uploading, don’t forget to replace your-bucket-name in the bucket policy with your actual S3 bucket name. This makes sure the permissions work properly. Now your static site is live—S3 will serve your HTML, CSS, and JS smoothly and reliably.
Go to CloudFront and create a new Web distribution. Set the origin to your S3 website URL (like your-bucket-name.s3-website.region.amazonaws.com, not the ARN). For Viewer Protocol Policy, choose “Redirect HTTP to HTTPS” for secure access. Leave other settings as-is unless you want to tweak cache settings. Then click “Create Distribution”—your site will now load faster worldwide.
To let your frontend talk to the backend, you need to enable CORS in API Gateway. Just open the console, go to each method (like GET, POST, DELETE), click “Actions,” and select “Enable CORS.” That’s it—your frontend and backend can now communicate properly.
Additionally, make sure the responses returned by your Lambda function include the following CORS headers (we already added them in our Lambda function above).
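A typical set of response headers looks roughly like this (tighten the allowed origin and methods to match your site):

headers = {
    "Access-Control-Allow-Origin": "*",  # or your CloudFront/S3 site URL
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE,OPTIONS"
}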
That’s it—your serverless blogging platform is ready! API Gateway gives you the endpoints, Lambda handles the logic, DynamoDB stores your blog data, and S3 + CloudFront serve your frontend fast and globally. Fully functional, scalable, and no server headaches!
Building a serverless blog with AWS Lambda and Python shows how powerful and flexible serverless really is. It’s low-maintenance, cost-effective, and scales easily, making it perfect for anything from a personal blog to a full content site. A solid setup for modern web apps!
This blog is written for developers, merchandisers, or client teams looking to display visual indicators (e.g., “New”, “Sale”, “Non-Returnable”, “Best Seller”) on products within the storefront. In Ecommerce, badges are small visual cues that communicate important product information to customers, such as “New Arrival”, “Sale”, or “Limited Stock”. In Optimizely Configured Commerce (Spire), product badges can be a powerful way to highlight key promotions or product statuses, thereby improving the user experience.
This blog post walks through how to enable and customize badges within Spire-based sites.
Badges are visual elements displayed over product images or titles to indicate special status or promotions. Common use cases include “New Arrival”, “Sale”, “Limited Stock”, and “Best Seller” badges.
Badge Type: a badge can be styled as either Text or Image.
Create a new badge named “Sale,” with the badge styling set to Text.
Click on the “Assign Products” button
You can create product rules based on “Product Custom Properties” and “Product fields”.
You can assign multiple product attributes for this badge.
Badges in Optimizely Configured Commerce are a simple yet effective way to elevate merchandising on your storefront. By combining back-office configuration with simple frontend customizations, you can create a more engaging and informative shopping experience.
The growing pains that developers experience when building cloud-native applications often include the challenge of figuring out the right set of libraries and integrations to use. Quarkus alleviates this pain point and makes development a faster, more seamless experience thanks to the rich set of extensions built into the Quarkus ecosystem. Extensions are pre-integrated capabilities that help to maximize developer delight and runtime performance. In my previous blog, I discussed how Quarkus live coding enhances the dev experience. Today, let’s dive deeper into Quarkus Extensions.
The traditional layers of a Java stack often require manual configuration and glue code to piece together the various libraries and interceptors that need to be integrated. Quarkus changes the game by providing extensions that are:
Optimized for build time and runtime performance
Preconfigured to reduce boilerplate
Integrated seamlessly with Quarkus dev services
Compatible with native compilation via GraalVM
This means you have less setup, faster feedback loops, and more time to write business logic.
Create RESTful APIs with minimal configuration and blazing-fast performance. Quarkus supports both classic RESTEasy and the newer RESTEasy Reactive, which is designed for reactive programming models.
@Path("/hello") public class HelloResource { @GET public String hello() { return "Hello from Quarkus!"; } }
Panache simplifies JPA by reducing boilerplate code and making your data layer expressive and concise.
@Entity
public class Person extends PanacheEntity {

    public String name;

    public static Person findByName(String name) {
        return find("name", name).firstResult();
    }
}
Quarkus offers native support to generate deployment YAMLs, making it cloud-native out of the box.
./mvnw clean package -Dquarkus.kubernetes.deploy=true
You can also configure deployment details using application properties like:
quarkus.kubernetes.name=my-app
quarkus.kubernetes.replicas=3
Need configuration, health checks, metrics, or OpenAPI? Just add the right SmallRye extension.
./mvnw quarkus:add-extension -Dextensions="smallrye-health"
Then add a health endpoint:
@Health
@ApplicationScoped
public class LivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.up("I'm alive!");
    }
}
Adding extensions is a breeze using the Quarkus CLI or Maven plugin:
quarkus ext add 'hibernate-orm-panache'
Or:
./mvnw quarkus:add-extension -Dextensions="resteasy-reactive, kubernetes"
Quarkus Extensions are a great way to include common features in your application without worrying about how all the complicated pieces fit together. Whether you’re building REST APIs, integrating with databases, or deploying applications to Kubernetes, there is likely an extension that can help. It is a very exciting time if you’re trying to upgrade your Java technology stack for the cloud.
Fastify is a fast and low-overhead web framework for Node.js that has gained popularity among developers in recent years. With its lightweight architecture and rich feature set, Fastify is an excellent platform for developing high-performance online apps. As with JavaScript, where everything is an object, with Fastify, everything is a plugin. In this guide, we’ll explore the features, benefits, and use cases of Fastify and provide examples to help you get started.
Fastify offers several key features that make it an attractive choice for building web applications, including a plugin-based architecture, JSON schema-based validation, and built-in logging. To get started, install Fastify:
npm install fastify
C:\projects\fastify-demo>npm init -y
Wrote to C:\projects\fastify-demo\package.json:
{
"name": "fastify-demo",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": ""
}
C:\projects\fastify-demo>npm i fastify
added 47 packages, and audited 48 packages in 2s
14 packages are looking for funding
Once installed, you can create a simple Fastify application:
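A minimal example could look like this (the file name index.js and port 3000 are arbitrary choices; the import syntax assumes an ES-module setup, i.e., "type": "module" in package.json):

import fastify from 'fastify';

const app = fastify({ logger: true });

// A simple route returning JSON
app.get('/', async () => {
  return { hello: 'world' };
});

// Start the server
app.listen({ port: 3000 }, (err, address) => {
  if (err) {
    app.log.error(err);
    process.exit(1);
  }
  console.log(`Server listening at ${address}`);
});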
When you run it with node index.js, the console output shows the server listening on port 3000, and visiting http://localhost:3000 returns the JSON response.
Fastify offers a straightforward and intuitive API for developing RESTful APIs. Here’s an example:
import fastify from 'fastify';
const app = fastify();
app.get('/users', async () => {
return [{ id: 1, name: 'John Doe' }];
});
app.post('/users', async (request, reply) => {
const { name } = request.body;
// Create a new user
return { id: 2, name };
});
app.listen({ port: 3000 }, () => {
console.log(`Server listening on port 3000`);
});
This example creates a simple RESTful API that responds to GET and POST requests to the /users endpoint.
Fastify provides a built-in error-handling mechanism that allows developers to handle errors in a centralized manner. Here’s an example:
import fastify from 'fastify';
const app = fastify();
app.get('/users', async () => {
throw new Error('Something went wrong');
});
app.setErrorHandler((error, request, reply) => {
// Handle error
reply.code(500).send({ message: 'Internal Server Error' });
});
app.listen({ port: 3000 }, () => {
console.log(`Server listening on port 3000`);
});
This example creates a simple error-handling mechanism that catches and handles errors in a centralized manner.
Fastify is perfect for developing high-performance web-based applications with real-time updates, such as chat applications, live dashboards, and streaming APIs.
Fastify offers several benefits that make it an attractive choice for building web applications: high throughput with low overhead, a rich plugin ecosystem, schema-based validation, and first-class TypeScript support.
Fastify is a powerful and efficient web framework that offers various advantages for developing high-performance web apps. Fastify’s lightweight architecture, extensive feature set, and low-overhead design make it perfect for creating online applications that require real-time updates and great performance.
By following best practices and using Fastify’s built-in features, developers can build fast, secure, and scalable web applications.
]]>
In the ever-evolving world of content management systems (CMS), there are many options to choose from—WordPress, Joomla, Drupal, and others. But for developers who love clean code, flexibility, and control, October CMS stands out as a modern, elegant solution built on the popular Laravel PHP framework.
What is October CMS?
October CMS is an open-source, free CMS that makes web building easier without compromising power or flexibility. It provides a developer-friendly platform for creating anything from basic websites to intricate web applications by utilizing Laravel, one of the most popular PHP frameworks.
Key Features of October CMS
1. The Laravel Framework
2. Twig for Flat-File Templating
3. Strong Admin Interface
4. Theme and Plugin System
5. Headless and API-Friendly Features
Use Cases for October CMS
1. Business Websites
Ideal For: Small to large businesses looking for a professional online presence.
2. Personal Websites and Blogs
Ideal For: Writers, bloggers, or professionals maintaining a personal web presence.
3. Portfolios
Ideal For: Designers, developers, photographers, and creatives.
4. eCommerce Sites
Ideal For: Online stores of any size.
5. Headless CMS Configurations
Ideal For: Projects using frontend frameworks like Vue.js, React, or mobile apps needing backend content.
6. Custom Online Applications with a Backend CMS
Ideal For: Any web application that needs custom logic and a robust content management backend.
Who Is It For?
October CMS is ideal for:
If you’re already comfortable with PHP or Laravel and want more control over your projects than standard drag-and-drop CMSs offer, this is a wonderful option.
Getting Started with October CMS
You can install October CMS using Composer:
composer create-project october/october my-project
Then point your local server (e.g., Laravel Valet, XAMPP, Homestead) to the /public directory and follow the installation wizard in your browser.
Plugins Worth Exploring
1. Blog
Purpose: Add a full-featured blogging system to your website.
2. Static Pages
Purpose: Build and manage pages using a visual, drag-and-drop interface.
3. Mall
Purpose: Add eCommerce functionality to your site.
4. User
Purpose: Enable frontend user registration, login, and profiles.
5. SEO Extension
Purpose: Improve your site’s visibility in search engines.
Conclusion
October CMS is a breath of fresh air in the world of CMS platforms. It doesn’t try to be everything for everyone—but what it does, it does exceptionally well. If you value clean architecture, developer freedom, and modern PHP practices, October CMS might just be your new favorite tool.
Whether you’re building a website for a client or crafting your own web application, October CMS gives you the power to do it efficiently and elegantly.
JavaScript is single-threaded. That means it runs one task at a time, on one core. But then how does it handle things like API calls, file reads, or user interactions without freezing up?
That’s where Promises and async/await come into play. They help us handle asynchronous operations without blocking the main thread.
Let’s break down these concepts in the simplest way possible so whether you’re a beginner or a seasoned dev, it just clicks.
JavaScript has something called an event loop. It’s always running, checking if there’s work to do—like handling user clicks, network responses, or timers. In the browser, the browser runs it. In Node.js, Node takes care of it.
When an async function runs and hits an await, it pauses that function. It doesn’t block everything—other code keeps running. When the awaited Promise settles, that async function picks up where it left off.
Instead of using nested callbacks (aka “callback hell”), Promises allow cleaner, more manageable code using chaining.
Example:
fetchData()
  .then(data => process(data))
  .then(result => console.log(result))
  .catch(error => console.error(error));
Let’s look at the essential Promise utility methods:
1. Promise.all() – Waits for all promises to resolve. If any promise fails, the whole thing fails.
Promise.all([p1, p2, p3])
  .then(results => console.log(results))
  .catch(error => console.error(error));
2. Promise.allSettled() – Waits for all promises to settle, regardless of success or failure.
Promise.allSettled([p1, p2, p3])
  .then(results => console.log(results));
3. Promise.race() – Returns as soon as one promise settles (either resolves or rejects).
Promise.race([p1, p2, p3])
  .then(result => console.log('Fastest:', result))
  .catch(error => console.error('First to fail:', error));
4. Promise.any() – Returns the first fulfilled promise. Ignores rejections unless all fail.
Promise.any([p1, p2, p3])
  .then(result => console.log('First success:', result))
  .catch(error => console.error('All failed:', error));
5. Promise.resolve() / Promise.reject() – Used for quick returns or mocking async behavior.
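For instance (the values here are arbitrary), these create already-settled promises that are handy in tests or as default return values:

// An immediately resolved promise
const cached = Promise.resolve({ id: 1, name: 'John Doe' });
cached.then(user => console.log(user.name)); // John Doe

// An immediately rejected promise
const failed = Promise.reject(new Error('No data'));
failed.catch(err => console.error(err.message)); // No data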
Before Promises, developers relied on callbacks:
getData(function(response) {
  process(response, function(result) {
    finalize(result);
  });
});
This worked, but it quickly became messy, i.e., callback hell.
Under the hood, async/await is just syntactic sugar over Promises. It makes asynchronous code look synchronous, improving readability and debuggability.
How it works:
async function greet() {
  return 'Hello';
}

greet().then(msg => console.log(msg)); // Hello
Even though you didn’t explicitly return a Promise, greet() returns one.
Let’s understand how await interacts with the JavaScript event loop.
console.log("1"); setTimeout(() => console.log("2"), 0); (async function() { console.log("3"); await Promise.resolve(); console.log("4"); })(); console.log("5");
Output:
1 3 5 4 2
Explanation: the synchronous statements log first (1, then 3, then 5); the await hands the rest of the async function to the microtask queue, so 4 is logged next; finally the setTimeout callback, a macrotask, logs 2.
Avoid unhandled promise rejections by always wrapping await logic inside a try/catch.
async function getUser() {
  try {
    const res = await fetch('/api/user');
    if (!res.ok) throw new Error('User not found');
    const data = await res.json();
    return data;
  } catch (error) {
    console.error('Error fetching user:', error.message);
    throw error; // rethrow if needed
  }
}
Don’t await sequentially unless there’s a dependency between the calls.
Bad:
const user = await getUser();
const posts = await getPosts(); // waits for user even if not needed
Better:
const [user, posts] = await Promise.all([getUser(), getPosts()]);
Bad:
// Each iteration waits for the previous one to complete
for (let user of users) {
  await sendEmail(user);
}
Better:
// Run in parallel
await Promise.all(users.map(user => sendEmail(user)));
Also note that await is only valid inside an async function (or at the top level of an ES module); using it elsewhere is a syntax error:

const data = await fetch(url); // SyntaxError
Imagine a system where a user logs in, we then fetch their profile, and finally load their dashboard:
Using async/await:
async function initDashboard() {
  try {
    const token = await login(username, password);
    const profile = await fetchProfile(token);
    const dashboard = await fetchDashboard(profile.id);
    renderDashboard(dashboard);
  } catch (err) {
    console.error('Error loading dashboard:', err);
    showErrorScreen();
  }
}
Much easier to follow than chained .then() calls, right?
Old way:
login()
  .then(token => fetchUser(token))
  .then(user => showProfile(user))
  .catch(error => showError(error));
With async/await:
async function start() {
  try {
    const token = await login();
    const user = await fetchUser(token);
    showProfile(user);
  } catch (error) {
    showError(error);
  }
}
Cleaner. Clearer. Less nested. Easier to debug.
If you hate repeating try/catch, use a helper:
const to = promise => promise.then(res => [null, res]).catch(err => [err]);

async function loadData() {
  const [err, data] = await to(fetchData());
  if (err) return console.error(err);
  console.log(data);
}
Both Promises and async/await are powerful tools for handling asynchronous code. Promises came first and are still widely used, especially in libraries. async/await is now the preferred style in most modern JavaScript apps because it makes the code cleaner and easier to understand.
Before mastering JavaScript, it’s crucial to understand how it thinks behind the scenes. Concepts like scope, hoisting, and the temporal dead zone might sound tricky at first, but they form the backbone of how your code behaves.
In this blog, we’ll break down these key ideas in the simplest way possible—so you can write cleaner code, avoid unexpected bugs, and truly understand what’s happening when your script runs.
Think of scope like a boundary or container that controls where you can use a variable in your code.
In JavaScript, a variable is only available in the part of the code where it was created. If you try to use it outside that area, it won’t work—that’s because of scope.
This helps in two big ways:
JavaScript mainly uses two types of scope:
1.Global Scope – Available everywhere in your code.
2.Local Scope – Available only inside a specific function or block.
Global Scope
When you start writing JavaScript code, you’re already in the global scope—this is like the outermost area of your code where variables can live.
If you create a variable outside of any function or block, it’s considered global, which means it can be used anywhere in your code.
var a = 5; // Global variable

function add() {
  return a + 10; // Using the global variable inside a function
}

console.log(window.a); // 5
In this example, a is declared outside of any function, so it’s globally available—even inside add().
A quick note:
let name = "xyz"; function changeName() { name = "abc"; // Changing the value of the global variable } changeName(); console.log(name); // abc
In this example, we didn’t create a new variable—we just changed the value of the existing one.
Important:
If you redeclare a global variable inside a function (using let, const, or var again), JavaScript treats it as a new variable in a new scope—not the same one. We’ll cover that in more detail later.
Local Scope
In JavaScript, local scope means a variable is only accessible in a certain part of the code—usually inside a function or a block.
There are two types of local scope:
1.Functional Scope
Whenever you create a function, it creates its own private area for variables. If you declare a variable inside a function, it only exists inside that function.
let firstName = "Shilpa"; // Global function changeName() { let lastName = "Syal"; // Local to this function console.log (`${firstName} ${lastName}`); } changeName(); console.log (lastName); //Error! Not available outside the function
You can even use the same variable name in different functions without any issue:
function mathMarks() {
  let marks = 80;
  console.log(marks);
}

function englishMarks() {
  let marks = 85;
  console.log(marks);
}
Here, both marks variables are separate because they live in different function scopes.
2.Block Scope
Thanks to let and const, you can now create variables that only exist inside a block (like an if, for, or {}).
function getMarks() {
  let marks = 60;

  if (marks > 50) {
    const points = 10;
    console.log(marks + points); // Works here
  }

  console.log(points); // Uncaught ReferenceError: points is not defined
}
Because the points variable is declared inside the if block with const (which is block-scoped, just like let), it is not accessible outside the block, as shown above. Now try the same example using the var keyword, i.e., declare the points variable with var, and spot the difference.
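Here is a quick sketch of that comparison: the same function with points declared using var, which is function-scoped rather than block-scoped, so the value leaks out of the if block.

function getMarks() {
  let marks = 60;

  if (marks > 50) {
    var points = 10; // var is function-scoped, not block-scoped
    console.log(marks + points); // 70
  }

  console.log(points); // 10, still accessible outside the if block
}

getMarks();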
LEXICAL SCOPING & NESTED SCOPE:
When you create a function (outer function) that contains another function (inner function), then the inner function has access to the outer function’s variables and methods. This is known as Lexical Scoping.
function outerFunction() {
  let outerVar = "I'm outside";

  function innerFunction() {
    console.log(outerVar); // Can access outerVar
  }

  innerFunction();
}
In other terms, variables and methods defined in the parent function are automatically available to its child functions. But it doesn’t work the other way around—the outer function can’t access the inner function’s variables.
VARIABLE SCOPE OR VARIABLE SHADOWING:
You can declare variables with the same name in different scopes. If there is a variable in the global scope and you create a variable with the same name in a function, you will not get any error. In this case, local variables take priority over global variables. This is known as variable shadowing, as the inner-scope variable temporarily shadows the outer-scope variable with the same name.
If the local variable and global variable have the same name then changing the value of one variable does not affect the value of another variable.
let name = "xyz" function getName() { let name = "abc" // Redeclaring the name variable console.log (name) ; //abc } getName(); console.log (name) ; //xyz
To access a variable, the JS engine first looks in the scope that is currently executing. If it doesn’t find the variable there, it looks in the closest parent scope, and that lookup continues up the chain until the engine reaches the global scope. If the global scope doesn’t have the variable either, it throws a ReferenceError, because the variable doesn’t exist anywhere up the scope chain.
let bonus = 500;

function getSalary() {
  if (true) {
    return 10000 + bonus; // Looks up and finds bonus in the outer scope
  }
}

console.log(getSalary()); // 10500
Key Takeaways: Scoping Made Simple
Global Scope: Variables declared outside any function are global and can be used anywhere in your code.
Local Scope: Variables declared inside a function exist only inside that function and disappear once the function finishes.
Global Variables Last Longer: They stay alive as long as your program is running.
Local Variables Are Temporary: They’re created when the function runs and removed once it ends.
Lexical Scope: Inner functions can access variables from outer functions, but not the other way around.
Block Scope with let and const: You can create variables that exist only inside {} blocks like if, for, etc.
Same Name, No Clash: Variables with the same name in different scopes won’t affect each other—they live in separate “worlds.”
To understand Hoisting in JS, it’s essential to know how execution context works. Execution context is an environment where JavaScript code is executed.
It has two main phases:
1. Creation Phase: During this phase, JS allocates memory for (hoists) variables, functions, and objects. Basically, hoisting happens here.
2. Execution Phase: During this phase, the code is executed line by line.
When JS code runs, JavaScript hoists all the variables and functions, i.e., it assigns memory space for variables with the special value undefined (and stores function declarations in full).
Here are the key takeaways from hoisting; let’s explore some examples to illustrate how it works in different scenarios:
1. Function declarations – These are hoisted completely, so you can call a function before it is defined:

foo(); // Output: "Hello, world!"

function foo() {
  console.log("Hello, world!");
}
2. var – Variables declared with var are hoisted and initialized to undefined:

console.log(x); // Output: undefined
var x = 5;
This code seems straightforward, but it’s interpreted as:
var x;
console.log(x); // Output: undefined
x = 5;
3. let, const – Variables declared with let and const are hoisted (in block or script scope) but are not initialized. These variables stay in the Temporal Dead Zone (TDZ) until their declaration is encountered; accessing them in the TDZ results in a ReferenceError.
console.log(x); // Throws ReferenceError: Cannot access 'x' before initialization
let x = 5;
In JavaScript, all variable declarations—whether made using var, let, or const—are hoisted, meaning the memory for them is set aside during the compilation phase, before the code actually runs. However, the behaviour of hoisting differs based on how the variable is declared.
For variables declared with let and const, although they are hoisted, they are not initialized immediately like var variables. Instead, they remain in an uninitialized state and are placed in a separate memory space. During this phase, any attempt to access them will result in a Reference Error.
This period—from the start of the block until the variable is initialized—is known as the Temporal Dead Zone (TDZ). It’s called a “dead zone” because the variable exists in memory but cannot be accessed until it has been explicitly declared and assigned a value in the code.
console.log(x); // ReferenceError: x is not defined (x was never declared)
console.log(b); // undefined (var b is hoisted and initialized with undefined)
console.log(a); // ReferenceError: Cannot access 'a' before initialization (a is in the TDZ)
let a = 10;
var b = 100;
Important: The Temporal Dead Zone helps prevent the use of variables before they are properly declared and initialized, making code more predictable and reducing bugs.
Conclusion
JavaScript hoisting and scoping are foundational concepts that can feel tricky at first, but once you understand them, they make your code more structured and predictable. Hoisting helps explain why some variables and functions work even before they’re declared, while scoping defines where your variables live and how accessible they are. By keeping these concepts in mind and practicing regularly, you’ll be able to write cleaner, more reliable JavaScript. The more you experiment with them, the more confident you’ll become as a developer. Keep learning, keep building, and everything will start to click. Happy coding!
To convert a text file from UTF-8 encoded data to ANSI using AWS Glue, you will typically work with Python or PySpark. However, it’s important to understand that ANSI is not a specific encoding but often refers to Windows-1252 (or similar 8-bit encodings) in a Windows context.
AWS Glue, running on Apache Spark, uses UTF-8 as the default encoding. Converting to ANSI requires handling the character encoding during the writing phase, because Spark itself doesn’t support writing files in encodings other than UTF-8 natively. But there are a few workarounds.
Here’s a step-by-step guide to converting a text file from UTF-8 to ANSI using Python in AWS Glue. Assume you’re working with a plain text file and want to output a similarly formatted file in ANSI encoding.
Step 1: Add the import statements to the code
import boto3
import codecs
Step 2: Specify the source/target file paths & S3 bucket details
# Initialize S3 client
s3_client = boto3.client('s3')

s3_key_utf8 = 'utf8_file_path/filename.txt'
s3_key_ansi = 'ansi_file_path/filename.txt'

# Specify S3 bucket and file paths
bucket_name = outgoing_bucket  # 'your-s3-bucket-name'
input_key = s3_key_utf8        # S3 path/name of the input UTF-8 encoded file in S3
output_key = s3_key_ansi       # S3 path/name to save the ANSI encoded file
Step 3: Write a function to convert the text file from UTF-8 to ANSI, based on the parameters supplied (S3 bucket name, source-file, target-file)
# Function to convert UTF-8 file to ANSI (Windows-1252) and upload back to S3
def convert_utf8_to_ansi(bucket_name, input_key, output_key):
    # Download the UTF-8 encoded file from S3
    response = s3_client.get_object(Bucket=bucket_name, Key=input_key)

    # Read the file content from the response body (UTF-8 encoded)
    utf8_content = response['Body'].read().decode('utf-8')

    # Convert the content to ANSI encoding (Windows-1252)
    ansi_content = utf8_content.encode('windows-1252', 'ignore')  # 'ignore' to handle invalid characters

    # Upload the converted file to S3 (in ANSI encoding)
    s3_client.put_object(Bucket=bucket_name, Key=output_key, Body=ansi_content)
Step 4: Call the function that converts the text file from UTF-8 to ANSI
# Call the function to convert the file
convert_utf8_to_ansi(bucket_name, input_key, output_key)
Summary:
The above steps are useful for anyone (developer, tester, analyst, etc.) who needs to convert a UTF-8 encoded TXT file to ANSI format, and they make the file-validation part of the job easier.
Quarkus has gained traction as a modern Java framework designed for cloud-native development. In my previous blog, I discussed why learning Quarkus is a great choice. Today, let’s dive deeper into one of its standout features: Live Coding.
Live Coding in Quarkus provides an instant development experience where changes to your application’s code, configuration, and even dependencies are reflected in real time without restarting the application. This eliminates the need for slow rebuild-restart cycles, significantly improving productivity.
Quarkus automatically watches for file changes and reloads the necessary components without restarting the entire application. This feature is enabled by default in dev mode and can be triggered using:
mvn quarkus:dev
or if you are using Gradle:
gradle quarkusDev
Once the development server is running, any modifications to your application will be instantly reflected when you refresh the browser or make an API request.
Imagine you are developing a REST API with Quarkus and need to update an endpoint. With Live Coding enabled, you simply modify the resource class:
@Path("/hello")
public class GreetingResource {
@GET
public String hello() {
return "Hello, Quarkus!";
}
}
Change the return message to:
return "Hello, Live Coding!";
Without restarting the application, refresh the browser or send an API request, and the change is immediately visible. No waiting, no downtime.
While Live Coding is enabled by default in dev mode, you can also enable it in remote environments using:
mvn quarkus:remote-dev -Dquarkus.live-reload.url=<remote-server>
This allows developers working in distributed teams or cloud environments to take advantage of fast feedback cycles.
Quarkus Live Coding is a game-changer for Java development, reducing turnaround time and enhancing the overall developer experience. If you’re transitioning to Quarkus, leveraging this feature can significantly improve your workflow.
Have you tried Quarkus Live Coding? Share your experience in the comments!
Stay tuned for more on security and reactive programming with Quarkus.
I’m now a couple months into exploring Optimizely Configured Commerce and Spire CMS. As much as I’m up to speed with the Configured Commerce side of things (having past experience with Customized commerce), the Spire CMS side is a bit daunting, having worked with traditional Optimizely CMS for a while. We face challenges in figuring out handlers, a key concept in both Customized Commerce and Spire CMS.
And yes, there is documentation, but it’s more high-level and not enough to understand the inner workings of the code (or maybe I just haven’t had the patience to go through it all yet :)).
Needless to say, I took a rather “figure it out by myself” approach here. I find that this is a much better way to learn and remember stuff :).
In a commerce site, there is Order History for every customer, with a “Reorder” capability. I will tweak the behavior of this Reorder action and prevent adding a specific SKU to cart again when user clicks “Reorder”.
Depending on what you are looking for and what you need to change, this can be different files in the Frontend source code.
I start by searching on keywords like “reorder” which do lead me to some files but they are mostly .tsx files aka React components that had the Reorder button on them. What I’m looking for instead is the actual method that passes the current order lines to add to cart, in order to intercept and tweak.
I decided it was time to put my browser skills to good use. I launch the site, open Dev tools, and hit Reorder to monitor all the network calls that occur. And bravo, I see the API call to the Cart API for a bulk load, which is what this action does. Here’s what that looks like:
api/v1/carts/current/cartlines/batch
with a Payload of cartlines sent to add to Cart.
Step #1 – I traced this back in code. Looked for “cartlines/batch” and found 1 file – CartService.ts
It’s OOTB code, but for people new to this like me, we don’t know which folder has what. So, I’ll make this one step easier for you by telling you exactly where this file lives. You will find it at
FrontEnd\modules\client-framework\src\Services\CartService.ts
The method that makes the api call is addLineCollection(parameter: AddCartLinesApiParameter).
Step #2 – I now search for files that called this method. I found quite a few files that call this, but for my specific scenario, I stuck to the ones that said “reorder” specifically. These are the Frontend Handlers in Spire CMS.
Here’s the list and paths of the files that are relevant to the context here :
Once I see the line that makes the call to addLineCollection() method, I check how the parameter is being set.
Step #3 – All that’s left now is to update the code that sets the AddCartLinesApiParameter for this call from the existing order’s order lines. I add a filter on the order lines collection to exclude the one specific SKU that I don’t want re-added to the cart on reorder. It looks something like this:
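I can’t reproduce the exact OOTB handler here, but as a rough sketch (the model property names, the excluded SKU value, and the helper are illustrative assumptions, not the actual Spire types), the filtering ends up looking something like this:

// Illustrative sketch only: property names (orderLines, productErpNumber) and the SKU are assumptions
const EXCLUDED_SKU = "SKU-12345"; // hypothetical SKU that should not be re-added on reorder

const getReorderableLines = (order: { orderLines?: { productErpNumber?: string }[] }) =>
    (order.orderLines ?? []).filter(orderLine => orderLine.productErpNumber !== EXCLUDED_SKU);

// The filtered collection is then mapped into the AddCartLinesApiParameter
// the same way the OOTB reorder handler already does, before calling addLineCollection().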
In today’s mobile-first world, delivering personalized experiences to visitors using mobile devices is crucial for maximizing engagement and conversions. Optimizely’s powerful experimentation and personalization platform allows you to define custom audience criteria to target mobile users effectively.
By leveraging Optimizely’s audience segmentation, you can create tailored experiences based on factors such as device type, operating system, screen size, and user behavior. Whether you want to optimize mobile UX, test different layouts, or personalize content for Android vs. iOS users, understanding how to define mobile-specific audience criteria can help you drive better results.
In this blog, we’ll explore how to set up simple custom audience criteria for mobile visitors in Optimizely, the key benefits of mobile targeting, and the best practices to enhance user experiences across devices. Let’s dive in!
This solution is based on Example – Create audience criteria, which you can find in the Optimizely documentation.
First, we need to create two classes in our solution:
The class VisitorDeviceTypeCriterionSettings needs to inherit from the CriterionModelBase class, and we need only one property (setting) to determine if the visitor is using a desktop or a mobile device.
public bool IsMobile { get; set; }
The abstract CriterionModelBase class requires you to implement the Copy() method. Because you are not using complex reference types, you can implement it by returning a shallow copy, as shown (see Create custom audience criteria):
public override ICriterionModel Copy()
{
    return base.ShallowCopy();
}
The entire class will look something like this:
using EPiServer.Data.Dynamic;
using EPiServer.Personalization.VisitorGroups;

namespace AlloyTest.Personalization.Criteria
{
    [EPiServerDataStore(AutomaticallyRemapStore = true)]
    public class VisitorDeviceTypeCriterionSettings : CriterionModelBase
    {
        public bool IsMobile { get; set; }

        public override ICriterionModel Copy()
        {
            // if this class has reference types that require deep copying, then
            // that implementation belongs here. Otherwise, you can just rely on
            // shallow copy from the base class
            return base.ShallowCopy();
        }
    }
}
Now, we need to implement the criterion class VisitorDeviceTypeCriterion and inherit the abstract CriterionBase class with the settings class as the type parameter:
public class VisitorDeviceTypeCriterion : CriterionBase<VisitorDeviceTypeCriterionSettings>
Add a VisitorGroupCriterion attribute to set the category, name, and description of the criterion (for more available VisitorGroupCriterion properties, see Create custom audience criteria):
[VisitorGroupCriterion(
    Category = "MyCustom",
    DisplayName = "Device Type",
    Description = "Criterion that matches type of the user's device"
)]
The abstract CriterionBase class requires you to implement an IsMatch() method that determines whether the current user matches this audience criterion. In this case, we need to determine from which device the visitor is accessing our site. Because Optimizely doesn’t provide this out of the box, we need to figure out that part.
One of the solutions is to use information from the request header, specifically the User-Agent field, and analyze it to determine the OS and device type. We can do that by writing our own match method:
public virtual bool MatchBrowserType(string userAgent) { var os = new Regex( @"(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od|ad)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino", RegexOptions.IgnoreCase | RegexOptions.Multiline); var device = new Regex( @"1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-", RegexOptions.IgnoreCase | RegexOptions.Multiline); var deviceInfo = string.Empty; if (os.IsMatch(userAgent)) { deviceInfo = os.Match(userAgent).Groups[0].Value; } if (device.IsMatch(userAgent.Substring(0, 4))) { deviceInfo += device.Match(userAgent).Groups[0].Value; } if (!string.IsNullOrEmpty(deviceInfo)) { return true; } return false; }
Now, we can go back and implement the IsMatch() method that is required by the CriterionBase abstract class.
public override bool IsMatch(IPrincipal principal, HttpContext httpContext)
{
    return MatchBrowserType(httpContext.Request.Headers["User-Agent"].ToString());
}
In the CMS we need to create a new audience criterion. When you click on the ‘Add Criteria’ button, there will be ‘MyCustom’ criteria group with our criteria:
When you select the ‘Device Type’ criteria, you will see something like this:
We can easily add a label for the checkbox by using Optimizely’s translation functionality. Create a new XML file, VisitorGroupCriterion.xml, and place it in the folder where your translation files are, like this:
Put this into the file that you created:
<?xml version="1.0" encoding="utf-8" standalone="yes"?> <languages> <language name="English" id="en-us"> <visitorgroups> <criteria> <ismobile> <key>Is Mobile Device (Use this setting to show content only on Mobile)</key> </ismobile> </criteria> </visitorgroups> </language> </languages>
There is one more thing to do. In VisitorDeviceTypeCriterionSettings.cs, decorate the IsMobile property with the translation definition by adding this attribute:
[CriterionPropertyEditor(LabelTranslationKey = "/visitorgroups/criteria/ismobile/key")]
It should look like this:
Now, in the editor view, we have a label for the checkbox.
Personalize the content by setting the content for this visitor group.
Desktop view:
Mobile view:
You can see that there is content that is only visible if you access the site with a mobile device.
And that’s it!