JavaScript is single-threaded. That means it runs one task at a time, on one core. But then how does it handle things like API calls, file reads, or user interactions without freezing up?
That’s where Promises and async/await come into play. They help us handle asynchronous operations without blocking the main thread.
Let’s break down these concepts in the simplest way possible so whether you’re a beginner or a seasoned dev, it just clicks.
JavaScript has something called an event loop. It’s always running, checking if there’s work to do—like handling user clicks, network responses, or timers. In the browser, the browser runs it. In Node.js, Node takes care of it.
When an async function runs and hits an await, it pauses that function. It doesn’t block everything—other code keeps running. When the awaited Promise settles, that async function picks up where it left off.
Instead of using nested callbacks (aka “callback hell”), Promises allow cleaner, more manageable code using chaining.
Example:
fetchData()
  .then(data => process(data))
  .then(result => console.log(result))
  .catch(error => console.error(error));
Let’s look at the essential Promise utility methods:
1. Promise.all() – Waits for all promises to resolve. If any promise rejects, the whole thing rejects.
Promise.all([p1, p2, p3])
  .then(results => console.log(results))
  .catch(error => console.error(error));
2. Promise.allSettled() – Waits for all promises to settle, regardless of success or failure.
Promise.allSettled([p1, p2, p3])
  .then(results => console.log(results));
3. Promise.race() – Settles as soon as the first promise settles (either resolves or rejects).
Promise.race([p1, p2, p3])
  .then(result => console.log('Fastest:', result))
  .catch(error => console.error('First to fail:', error));
4. Promise.any() – Returns the first fulfilled promise. Ignores rejections unless all promises fail.
Promise.any([p1, p2, p3])
  .then(result => console.log('First success:', result))
  .catch(error => console.error('All failed:', error));
5. Promise.resolve() / Promise.reject() – Used for quick returns or mocking async behavior.
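Both are handy when a function must return a Promise even though the value is already known. A quick illustration:

// Wrap a known value so callers can always use .then()
Promise.resolve(42).then(value => console.log(value)); // 42

// Create an already-rejected promise, e.g., to mock a failure path
Promise.reject(new Error('Something failed'))
  .catch(error => console.error(error.message)); // Something failed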
Before Promises, developers relied on callbacks:
getData(function(response) {
  process(response, function(result) {
    finalize(result);
  });
});
This worked, but it quickly became messy: the dreaded callback hell.
Under the hood, async/await is just syntactic sugar over Promises. It makes asynchronous code look synchronous, improving readability and debuggability.
How it works:
async function greet() {
  return 'Hello';
}

greet().then(msg => console.log(msg)); // Hello
Even though you didn’t explicitly return a Promise, greet() returns one.
Let’s understand how await interacts with the JavaScript event loop.
console.log("1"); setTimeout(() => console.log("2"), 0); (async function() { console.log("3"); await Promise.resolve(); console.log("4"); })(); console.log("5");
Output:
1
3
5
4
2
Explanation: The synchronous logs run first (1, 3, 5). The code after await resumes as a microtask, so 4 is printed next. The setTimeout callback is a macrotask and runs last, printing 2.
Avoid unhandled promise rejections by always wrapping await logic inside a try/catch.
async function getUser() {
  try {
    const res = await fetch('/api/user');
    if (!res.ok) throw new Error('User not found');
    const data = await res.json();
    return data;
  } catch (error) {
    console.error('Error fetching user:', error.message);
    throw error; // rethrow if needed
  }
}
Don’t await sequentially unless there’s a dependency between the calls.
Bad:
const user = await getUser();
const posts = await getPosts(); // waits for user even if not needed
Better:
const [user, posts] = await Promise.all([getUser(), getPosts()]);
Bad:
// Each iteration waits for the previous one to complete
for (let user of users) {
  await sendEmail(user);
}
Better:
// Run in parallel
await Promise.all(users.map(user => sendEmail(user)));
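One caveat worth noting: Promise.all rejects as soon as any send fails, discarding the other results. If every email should be attempted regardless of individual failures, Promise.allSettled (covered earlier) is the safer choice:

// Attempt every send and inspect the failures afterwards
const results = await Promise.allSettled(users.map(user => sendEmail(user)));
const failed = results.filter(r => r.status === 'rejected');
if (failed.length) console.warn(`${failed.length} emails failed`);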
Also remember that await is only valid inside an async function (or at the top level of an ES module). Using it elsewhere is a syntax error:

const data = await fetch(url); // SyntaxError: await is only valid in async functions
Imagine a system where a user logs in, their profile is fetched using the auth token, and the dashboard is then loaded based on the profile ID.
Using async/await:
async function initDashboard() {
  try {
    const token = await login(username, password);
    const profile = await fetchProfile(token);
    const dashboard = await fetchDashboard(profile.id);
    renderDashboard(dashboard);
  } catch (err) {
    console.error('Error loading dashboard:', err);
    showErrorScreen();
  }
}
Much easier to follow than chained .then() calls, right?
Old way:
login()
  .then(token => fetchUser(token))
  .then(user => showProfile(user))
  .catch(error => showError(error));
With async/await:
async function start() {
  try {
    const token = await login();
    const user = await fetchUser(token);
    showProfile(user);
  } catch (error) {
    showError(error);
  }
}
Cleaner. Clearer. Less nested. Easier to debug.
If you hate repeating try/catch, use a helper:
const to = promise => promise.then(res => [null, res]).catch(err => [err]);

async function loadData() {
  const [err, data] = await to(fetchData());
  if (err) return console.error(err);
  console.log(data);
}
Both Promises and async/await are powerful tools for handling asynchronous code. Promises came first and are still widely used, especially in libraries. async/await is now the preferred style in most modern JavaScript apps because it makes the code cleaner and easier to understand.
Before mastering JavaScript, it’s crucial to understand how it thinks behind the scenes. Concepts like scope, hoisting, and the temporal dead zone might sound tricky at first, but they form the backbone of how your code behaves.
In this blog, we’ll break down these key ideas in the simplest way possible—so you can write cleaner code, avoid unexpected bugs, and truly understand what’s happening when your script runs.
Think of scope like a boundary or container that controls where you can use a variable in your code.
In JavaScript, a variable is only available in the part of the code where it was created. If you try to use it outside that area, it won’t work—that’s because of scope.
This helps in two big ways: it prevents naming collisions between different parts of your code, and it stops variables from being read or changed in places where they shouldn't be.
JavaScript mainly uses two types of scope:
1. Global Scope – Available everywhere in your code.
2. Local Scope – Available only inside a specific function or block.
Global Scope
When you start writing JavaScript code, you’re already in the global scope—this is like the outermost area of your code where variables can live.
If you create a variable outside of any function or block, it’s considered global, which means it can be used anywhere in your code.
var a = 5; // Global variable

function add() {
  return a + 10; // Using the global variable inside a function
}

console.log(window.a); // 5
In this example, a is declared outside of any function, so it’s globally available—even inside add().
A quick note:
let name = "xyz"; function changeName() { name = "abc"; // Changing the value of the global variable } changeName(); console.log(name); // abc
In this example, we didn’t create a new variable—we just changed the value of the existing one.
Important:
If you redeclare a global variable inside a function (using let, const, or var again), JavaScript treats it as a new variable in a new scope—not the same one. We’ll cover that in more detail later.
Local Scope
In JavaScript, local scope means a variable is only accessible in a certain part of the code—usually inside a function or a block.
There are two types of local scope:
1. Functional Scope
Whenever you create a function, it creates its own private area for variables. If you declare a variable inside a function, it only exists inside that function.
let firstName = "Shilpa"; // Global function changeName() { let lastName = "Syal"; // Local to this function console.log (`${firstName} ${lastName}`); } changeName(); console.log (lastName); //Error! Not available outside the function
You can even use the same variable name in different functions without any issue:
function mathMarks() {
  let marks = 80;
  console.log(marks);
}

function englishMarks() {
  let marks = 85;
  console.log(marks);
}
Here, both marks variables are separate because they live in different function scopes.
2. Block Scope
Thanks to let and const, you can now create variables that only exist inside a block (like an if, for, or {}).
function getMarks() {
  let marks = 60;

  if (marks > 50) {
    const points = 10;
    console.log(marks + points); // Works here
  }

  console.log(points); // Uncaught ReferenceError: points is not defined
}
Because the points variable is declared inside the if block with const, it is not accessible outside that block, as shown above. Now try the same example with var: declare points with var and spot the difference.
LEXICAL SCOPING & NESTED SCOPE:
When you create a function (outer function) that contains another function (inner function), then the inner function has access to the outer function’s variables and methods. This is known as Lexical Scoping.
function outerFunction() {
  let outerVar = "I'm outside";

  function innerFunction() {
    console.log(outerVar); // Can access outerVar
  }

  innerFunction();
}
In other terms, variables and methods defined in a parent function are automatically available to its child functions. But it doesn't work the other way around: the outer function can't access the inner function's variables.
VARIABLE SCOPE OR VARIABLE SHADOWING:
You can declare variables with the same name in different scopes. If there's a variable in the global scope and you create a variable with the same name inside a function, you won't get an error. In this case, the local variable takes priority over the global one. This is known as variable shadowing: the inner-scope variable temporarily shadows the outer-scope variable with the same name.
If a local variable and a global variable have the same name, changing the value of one does not affect the value of the other.
let name = "xyz" function getName() { let name = "abc" // Redeclaring the name variable console.log (name) ; //abc } getName(); console.log (name) ; //xyz
To resolve a variable, the JS engine first looks in the scope that is currently executing. If it doesn't find it there, it looks in the closest parent scope, and that lookup continues up the chain until the engine reaches the global scope. If the global scope doesn't have the variable either, a ReferenceError is thrown, because the variable doesn't exist anywhere in the scope chain.
let bonus = 500;

function getSalary() {
  if (true) {
    return 10000 + bonus; // Looks up and finds bonus in the outer scope
  }
}

console.log(getSalary()); // 10500
Key Takeaways: Scoping Made Simple
Global Scope: Variables declared outside any function are global and can be used anywhere in your code.
Local Scope: Variables declared inside a function exist only inside that function and disappear once the function finishes.
Global Variables Last Longer: They stay alive as long as your program is running.
Local Variables Are Temporary: They’re created when the function runs and removed once it ends.
Lexical Scope: Inner functions can access variables from outer functions, but not the other way around.
Block Scope with let and const: You can create variables that exist only inside {} blocks like if, for, etc.
Same Name, No Clash: Variables with the same name in different scopes won’t affect each other—they live in separate “worlds.”
To understand Hoisting in JS, it’s essential to know how execution context works. Execution context is an environment where JavaScript code is executed.
It has two main phases:
1. Creation Phase: During this phase, JS allocates memory for variables, functions, and objects. Basically, hoisting happens here.
2. Execution Phase: During this phase, code is executed line by line.
When JS code runs, JavaScript hoists all the variables and functions, i.e., it assigns memory space for them before execution, and var variables are initialized with the special value undefined.
Here are the key takeaways from hoisting; let's explore some examples to illustrate how it works in different scenarios:
1. Function Declarations – hoisted with their full body, so they can be called before they appear:

foo(); // Output: "Hello, world!"

function foo() {
  console.log("Hello, world!");
}
2. var – hoisted and initialized with undefined:

console.log(x); // Output: undefined
var x = 5;
This code seems straightforward, but it’s interpreted as:
var x;
console.log(x); // Output: undefined
x = 5;
3. let, const – Variables declared with let and const are hoisted too (in block or script scope), but they are not initialized. They stay in the Temporal Dead Zone (TDZ) until their declaration is encountered, and accessing them inside the TDZ results in a ReferenceError.
console.log(x); // Throws ReferenceError: Cannot access 'x' before initialization
let x = 5;
In JavaScript, all variable declarations—whether made using var, let, or const—are hoisted, meaning the memory for them is set aside during the compilation phase, before the code actually runs. However, the behaviour of hoisting differs based on how the variable is declared.
For variables declared with let and const, although they are hoisted, they are not initialized immediately like var variables. Instead, they remain in an uninitialized state and are placed in a separate memory space. During this phase, any attempt to access them will result in a Reference Error.
This period—from the start of the block until the variable is initialized—is known as the Temporal Dead Zone (TDZ). It’s called a “dead zone” because the variable exists in memory but cannot be accessed until it has been explicitly declared and assigned a value in the code.
console.log(a); // ReferenceError: Cannot access 'a' before initialization (a is in the TDZ)
console.log(b); // undefined (var b is hoisted and initialized)
let a = 10;
var b = 100;
Important: The Temporal Dead Zone helps prevent the use of variables before they are properly declared and initialized, making code more predictable and reducing bugs.
Conclusion
JavaScript hoisting and scoping are foundational concepts that can feel tricky at first, but once you understand them, they make your code more structured and predictable. Hoisting helps explain why some variables and functions work even before they’re declared, while scoping defines where your variables live and how accessible they are. By keeping these concepts in mind and practicing regularly, you’ll be able to write cleaner, more reliable JavaScript. The more you experiment with them, the more confident you’ll become as a developer. Keep learning, keep building, and everything will start to click. Happy coding!
To convert a text file from UTF-8 encoded data to ANSI using AWS Glue, you will typically work with Python or PySpark. However, it’s important to understand that ANSI is not a specific encoding but often refers to Windows-1252 (or similar 8-bit encodings) in a Windows context.
AWS Glue, running on Apache Spark, uses UTF-8 as the default encoding. Converting to ANSI requires handling the character encoding during the writing phase, because Spark itself doesn’t support writing files in encodings other than UTF-8 natively. But there are a few workarounds.
Here’s a step-by-step guide to converting a text file from UTF-8 to ANSI using Python in AWS Glue. Assume you’re working with a plain text file and want to output a similarly formatted file in ANSI encoding.
Step 1: Add the import statements to the code
import boto3
import codecs
Step 2: Specify the source/target file paths & S3 bucket details
# Initialize S3 client
s3_client = boto3.client('s3')

s3_key_utf8 = 'utf8_file_path/filename.txt'
s3_key_ansi = 'ansi_file_path/filename.txt'

# Specify S3 bucket and file paths
bucket_name = 'your-s3-bucket-name'
input_key = s3_key_utf8    # S3 path/name of the input UTF-8 encoded file
output_key = s3_key_ansi   # S3 path/name to save the ANSI encoded file
Step 3: Write a function to convert the text file from UTF-8 to ANSI, based on the parameters supplied (S3 bucket name, source-file, target-file)
# Function to convert UTF-8 file to ANSI (Windows-1252) and upload back to S3
def convert_utf8_to_ansi(bucket_name, input_key, output_key):
    # Download the UTF-8 encoded file from S3
    response = s3_client.get_object(Bucket=bucket_name, Key=input_key)

    # Read the file content from the response body (UTF-8 encoded)
    utf8_content = response['Body'].read().decode('utf-8')

    # Convert the content to ANSI encoding (Windows-1252)
    ansi_content = utf8_content.encode('windows-1252', 'ignore')  # 'ignore' drops invalid characters

    # Upload the converted file to S3 (in ANSI encoding)
    s3_client.put_object(Bucket=bucket_name, Key=output_key, Body=ansi_content)
Step 4: Call the function that converts the text file from UTF-8 to ANSI
# Call the function to convert the file
convert_utf8_to_ansi(bucket_name, input_key, output_key)
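To verify the conversion, you can read the object back and decode it as Windows-1252. This is just an optional sanity check, not part of the conversion itself:

# Optional sanity check: the uploaded object should now decode cleanly as Windows-1252
obj = s3_client.get_object(Bucket=bucket_name, Key=output_key)
preview = obj['Body'].read().decode('windows-1252')
print(preview[:200])  # print the first 200 characters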
Quarkus has gained traction as a modern Java framework designed for cloud-native development. In my previous blog, I discussed why learning Quarkus is a great choice. Today, let’s dive deeper into one of its standout features: Live Coding.
Live Coding in Quarkus provides an instant development experience where changes to your application’s code, configuration, and even dependencies are reflected in real time without restarting the application. This eliminates the need for slow rebuild-restart cycles, significantly improving productivity.
Quarkus automatically watches for file changes and reloads the necessary components without restarting the entire application. This feature is enabled by default in dev mode and can be triggered using:
mvn quarkus:dev
or if you are using Gradle:
gradle quarkusDev
Once the development server is running, any modifications to your application will be instantly reflected when you refresh the browser or make an API request.
Imagine you are developing a REST API with Quarkus and need to update an endpoint. With Live Coding enabled, you simply modify the resource class:
@Path("/hello")
public class GreetingResource {
@GET
public String hello() {
return "Hello, Quarkus!";
}
}
Change the return message to:
return "Hello, Live Coding!";
Without restarting the application, refresh the browser or send an API request, and the change is immediately visible. No waiting, no downtime.
While Live Coding is enabled by default in dev mode, you can also enable it in remote environments using:
mvn quarkus:remote-dev -Dquarkus.live-reload.url=<remote-server>
This allows developers working in distributed teams or cloud environments to take advantage of fast feedback cycles.
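For remote dev to work, the application also needs to be packaged as a mutable JAR and protected with a live-reload password. A minimal application.properties sketch; the property names follow standard Quarkus configuration (newer versions use quarkus.package.jar.type instead of quarkus.package.type), and the values here are placeholders:

# src/main/resources/application.properties
quarkus.package.type=mutable-jar
quarkus.live-reload.password=changeit
quarkus.live-reload.url=http://my-remote-host:8080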
Quarkus Live Coding is a game-changer for Java development, reducing turnaround time and enhancing the overall developer experience. If you’re transitioning to Quarkus, leveraging this feature can significantly improve your workflow.
Have you tried Quarkus Live Coding? Share your experience in the comments!
Stay tuned for more features on security and reactive programming with Quarkus.
I’m now a couple months into exploring Optimizely Configured Commerce and Spire CMS. As much as I’m up to speed with the Configured Commerce side of things (having past experience with Customized commerce), the Spire CMS side is a bit daunting, having worked with traditional Optimizely CMS for a while. We face challenges in figuring out handlers, a key concept in both Customized Commerce and Spire CMS.
And yes, there is documentation, but it's more high-level and not enough to understand the inner workings of the code (or maybe I just haven't had the patience to go through it all yet :)).
Needless to say, I took a rather “figure it out by myself” approach here. I find that this is a much better way to learn and remember stuff :).
In a commerce site, there is Order History for every customer, with a "Reorder" capability. I will tweak the behavior of this Reorder action to prevent a specific SKU from being added to the cart again when the user clicks "Reorder".
Depending on what you are looking for and what you need to change, this can be different files in the Frontend source code.
I start by searching on keywords like “reorder” which do lead me to some files but they are mostly .tsx files aka React components that had the Reorder button on them. What I’m looking for instead is the actual method that passes the current order lines to add to cart, in order to intercept and tweak.
I decided it was time to put my browser skills to good use. I launch the site, open Dev tools, and hit Reorder to monitor all the network calls that occur. And bravo: I see the API call to the Cart API for a bulk load, which is what this action does. Here's what that looks like:
api/v1/carts/current/cartlines/batch
with a Payload of cartlines sent to add to Cart.
Step #1 – I traced this back in code. Looked for “cartlines/batch” and found 1 file – CartService.ts
It's OOTB code, but for people new to this like me, we don't know which folder has what. So I'll make this one step easier for you by telling you exactly where this file lives. You will find it at
FrontEnd\modules\client-framework\src\Services\CartService.ts
The method that makes the api call is addLineCollection(parameter: AddCartLinesApiParameter).
Step #2 – I now search for files that called this method. I found quite a few files that call this, but for my specific scenario, I stuck to the ones that said “reorder” specifically. These are the Frontend Handlers in Spire CMS.
Here’s the list and paths of the files that are relevant to the context here :
Once I see the line that makes the call to addLineCollection() method, I check how the parameter is being set.
Step #3 – All that's left now is to update the code that sets the AddCartLinesApiParameter for this call from the existing order's order lines. I add a filter on the OrderLines collection to exclude the one specific SKU that I don't want re-added to the cart on reorder. It looks something like this:
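Since I can't reproduce the OOTB handler here, below is a simplified sketch of the idea. The SKU constant and the order-line property names (like productErpNumber) are illustrative assumptions; check them against your version of the client framework:

// Hypothetical sketch: filter one SKU out of the reorder payload
const EXCLUDED_SKU = "SKU-12345"; // illustrative SKU to block on reorder

const cartLines = order.orderLines
    .filter(line => line.productErpNumber !== EXCLUDED_SKU)
    .map(line => ({
        productId: line.productId,
        qtyOrdered: line.qtyOrdered,
        unitOfMeasure: line.unitOfMeasure,
    }));

// parameter is then passed to addLineCollection(parameter)
const parameter: AddCartLinesApiParameter = { cartLines };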
In today’s mobile-first world, delivering personalized experiences to visitors using mobile devices is crucial for maximizing engagement and conversions. Optimizely’s powerful experimentation and personalization platform allows you to define custom audience criteria to target mobile users effectively.
By leveraging Optimizely’s audience segmentation, you can create tailored experiences based on factors such as device type, operating system, screen size, and user behavior. Whether you want to optimize mobile UX, test different layouts, or personalize content for Android vs. iOS users, understanding how to define mobile-specific audience criteria can help you drive better results.
In this blog, we’ll explore how to set up simple custom audience criteria for mobile visitors in Optimizely, the key benefits of mobile targeting, and the best practices to enhance user experiences across devices. Let’s dive in!
This solution is based on Example – Create audience criteria, which you can find in the Optimizely documentation.
First, we need to create two classes in our solution:
The VisitorDeviceTypeCriterionSettings class needs to inherit the CriterionModelBase class, and we need only one property (a setting) to determine if the visitor is using a desktop or a mobile device.
public bool IsMobile { get; set; }
The abstract CriterionModelBase class requires you to implement the Copy() method. Because you are not using complex reference types, you can implement it by returning a shallow copy as shown (see Create custom audience criteria):
public override ICriterionModel Copy()
{
    return base.ShallowCopy();
}
The entire class will look something like this:
using EPiServer.Data.Dynamic;
using EPiServer.Personalization.VisitorGroups;

namespace AlloyTest.Personalization.Criteria
{
    [EPiServerDataStore(AutomaticallyRemapStore = true)]
    public class VisitorDeviceTypeCriterionSettings : CriterionModelBase
    {
        public bool IsMobile { get; set; }

        public override ICriterionModel Copy()
        {
            // if this class has reference types that require deep copying, then
            // that implementation belongs here. Otherwise, you can just rely on
            // shallow copy from the base class
            return base.ShallowCopy();
        }
    }
}
Now, we need to implement the criterion class VisitorDeviceTypeCriterion and inherit the abstract CriterionBase class with the settings class as the type parameter:

public class VisitorDeviceTypeCriterion : CriterionBase<VisitorDeviceTypeCriterionSettings>
Add a VisitorGroupCriterion attribute to set the category, name, and description of the criterion (for more available VisitorGroupCriterion properties, see Create custom audience criteria):
[VisitorGroupCriterion(
    Category = "MyCustom",
    DisplayName = "Device Type",
    Description = "Criterion that matches type of the user's device"
)]
The abstract CriterionBase class requires you to implement an IsMatch() method that determines whether the current user matches this audience criterion. In this case, we need to determine from which device the visitor is accessing our site. Because Optimizely doesn't provide this out of the box, we need to figure out that part.

One of the solutions is to use information from the request header's User-Agent field and analyze it to determine the OS and device type. We can do that by writing our match method:
public virtual bool MatchBrowserType(string userAgent)
{
    var os = new Regex(
        @"(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od|ad)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino",
        RegexOptions.IgnoreCase | RegexOptions.Multiline);

    var device = new Regex(
        @"1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-",
        RegexOptions.IgnoreCase | RegexOptions.Multiline);

    var deviceInfo = string.Empty;

    if (os.IsMatch(userAgent))
    {
        deviceInfo = os.Match(userAgent).Groups[0].Value;
    }

    if (device.IsMatch(userAgent.Substring(0, 4)))
    {
        deviceInfo += device.Match(userAgent).Groups[0].Value;
    }

    if (!string.IsNullOrEmpty(deviceInfo))
    {
        return true;
    }

    return false;
}
Now, we can go back and implement the IsMatch() method that is required by the abstract CriterionBase class.
public override bool IsMatch(IPrincipal principal, HttpContext httpContext)
{
    return MatchBrowserType(httpContext.Request.Headers["User-Agent"].ToString());
}
In the CMS, we need to create a new audience criterion. When you click the 'Add Criteria' button, there will be a 'MyCustom' criteria group containing our criterion:
When you select the ‘Device Type’ criteria, you will see something like this:
We can easily add a label for the checkbox by using Optimizely's translation functionality. Create a new XML file, VisitorGroupCriterion.xml, and place it in the folder where your translation files live.
Put this into the file that you created:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<languages>
  <language name="English" id="en-us">
    <visitorgroups>
      <criteria>
        <ismobile>
          <key>Is Mobile Device (Use this setting to show content only on Mobile)</key>
        </ismobile>
      </criteria>
    </visitorgroups>
  </language>
</languages>
There is one more thing to do. In VisitorDeviceTypeCriterionSettings.cs, decorate the IsMobile property with the translation definition by adding this attribute:
[CriterionPropertyEditor(LabelTranslationKey = "/visitorgroups/criteria/ismobile/key")]
It should look like this:
Now, in the editor view, we have a label for the checkbox.
Personalize the content by setting the content for this visitor group.
Desktop view:
Mobile view:
You can see that there is content that is only visible if you access the site with a mobile device.
And that’s it!
As an AEM author, updating existing page content is a routine task. However, manual updates, like rolling out a new template, can become tedious and costly when dealing with thousands of pages.
Fortunately, automation scripts can save the day. Using Groovy scripts within AEM can streamline the content update process, reducing time and costs. In this blog, we’ll outline the key steps and best practices for using Groovy scripts to automate content updates.
Groovy is a powerful scripting language that integrates seamlessly with AEM. It allows developers to perform complex operations with minimal code, making it an excellent tool for tasks such as:
The Groovy Console for AEM provides an intuitive interface for running scripts, enabling rapid development and testing without redeploying code.
To illustrate how to use Groovy, let’s learn how to update templates for existing web pages authored inside AEM.
Our first step is to identify the following:
You should have source and destination template component mappings and page paths.
As a pre-requisite for this solution, you will need to have JDK 11, Groovy 3.0.9, and Maven 3.6.3.
1. Create a CSV File
The CSV file should contain two columns:
Save this file as template-map.csv.
Source,Target
"/apps/legacy/templates/page-old","/apps/new/templates/page-new"
"/apps/legacy/templates/article-old","/apps/new/templates/article-new"
2. Load the Mapping File in migrate.groovy
In your migrate.groovy script, insert the following code to load the mapping file:
def templateMapFile = new File("work${File.separator}config${File.separator}template-map.csv")
assert templateMapFile.exists() : "Template Mapping File not found!"
3. Implement the Template Mapping Logic
Next, we create a function to map source templates to target templates by utilizing the CSV file.
// Looks up the target template for a source template using the CSV mapping.
// parseCsv (e.g., from the groovycsv library) and the ENCODING/SEPARATOR
// constants are assumed to be defined elsewhere in the script.
String mapTemplate(sourceTemplateName, templateMapFile) {
    // This function uses the sourceTemplateName to look up the template
    // we will use to create the new XML
    def template = ''
    assert templateMapFile : "Template Mapping File not found!"

    for (templateMap in parseCsv(templateMapFile.getText(ENCODING), separator: SEPARATOR)) {
        def sourceTemplate = templateMap['Source']
        def targetTemplate = templateMap['Target']
        if (sourceTemplateName.equals(sourceTemplate)) {
            template = targetTemplate
        }
    }

    assert template : "Template ${sourceTemplateName} not found!"
    return template
}
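With the mapping helper in place, the remaining piece is a loop that applies it to each page path. Here is a simplified sketch; it assumes the AEM Groovy Console bindings (getNode, save) are available, so adapt it to your actual script:

// Hypothetical sketch: swap the cq:template property on each page
def updateTemplates(pagePaths, templateMapFile) {
    pagePaths.each { path ->
        def content = getNode("${path}/jcr:content")    // the page's content node
        def sourceTemplate = content.get('cq:template') // current template path
        def targetTemplate = mapTemplate(sourceTemplate, templateMapFile)
        content.set('cq:template', targetTemplate)      // apply the new template
    }
    save() // persist all changes in one go
}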
After creating a package using Groovy script on your local machine, you can directly install it through the Package Manager. This package can be installed on both AEM as a Cloud Service (AEMaaCS) and on-premises AEM.
Execute the script in a non-production environment, verify that templates are correctly updated, and review logs for errors or skipped nodes. After running the script, check content pages to ensure they render as expected, validate that new templates are functioning correctly, and test associated components for compatibility.
Leveraging automation through scripting languages like Groovy can significantly simplify and accelerate AEM migrations. By following a structured approach, you can minimize manual effort, reduce errors, and ensure a smooth transition to the new platform, ultimately improving overall maintainability.
Don’t miss out on more AEM insights and follow our Adobe blog!
Python is an incredibly powerful and easy-to-use programming language. However, it can be slow if not optimized properly! This guide will teach you how to turbocharge your code, making it faster, leaner, and more efficient. Buckle up, and let's dive into some epic optimization hacks!
For more on Python basics, check out our Beginner’s Guide to Python Programming.
Picking the right data structure is like choosing the right tool for a job—do it wrong, and you’ll be banging a nail with a screwdriver!
# List (mutable)
my_list = [1, 2, 3]
# Tuple (immutable, faster)
my_tuple = (1, 2, 3)
# Slow list lookup (O(n))
numbers = [1, 2, 3, 4, 5]
print(3 in numbers) # Yawn... Slow!
# Fast set lookup (O(1))
numbers_set = {1, 2, 3, 4, 5}
print(3 in numbers_set) # Blink and you'll miss it! ⚡
# Generator (better memory usage)
def squared_numbers(n):
for i in range(n):
yield i * i
squares = squared_numbers(1000000) # No memory explosion! 💥
# Inefficient
for i in range(10000):
result = expensive_function() # Ugh! Repeating this is a performance killer 😩
process(result)
# Optimized
cached_result = expensive_function() # Call it once and chill 😎
for i in range(10000):
process(cached_result)
# Traditional loop (meh...)
squares = []
for i in range(10):
squares.append(i * i)
# Optimized list comprehension (so sleek! 😍)
squares = [i * i for i in range(10)]
# Inefficient (Creates too many temporary strings 🤯)
words = ["Hello", "world", "Python"]
sentence = ""
for word in words:
sentence += word + " "
# Optimized (Effortless and FAST 💨)
sentence = " ".join(words)
name = "Alice"
age = 25
# Old formatting (Ew 🤢)
print("My name is {} and I am {} years old.".format(name, age))
# Optimized f-string (Sleek & stylish 😎)
print(f"My name is {name} and I am {age} years old.")
import timeit
print(timeit.timeit("sum(range(1000))", number=10000)) # How fast is your code? 🚀
import cProfile
cProfile.run('my_function()') # Find bottlenecks like a pro! 🔍
For more on profiling, see our Guide to Python Profiling Tools.
import sys
my_list = [1, 2, 3, 4, 5]
print(sys.getsizeof(my_list)) # How big is that object? 🤔
import gc
large_object = [i for i in range(1000000)]
del large_object # Say bye-bye to memory hog! 👋
gc.collect() # Cleanup crew 🧹
from multiprocessing import Pool
def square(n):
return n * n
if __name__ == "__main__":  # Guard required by multiprocessing's spawn start method
    with Pool(4) as p:  # Use 4 CPU cores 🏎
        results = p.map(square, range(100))
import threading
def print_numbers():
for i in range(10):
print(i)
thread = threading.Thread(target=print_numbers)
thread.start()
thread.join()
For more on parallel processing, check out our Introduction to Python Multithreading.
Congratulations! You’ve unlocked Python’s full potential by learning these killer optimization tricks. Now go forth and write blazing-fast, memory-efficient, and clean Python code.
Got any favorite optimization hacks? Drop them in the comments!
For more in-depth information on Python optimization, check out these resources:
Nine years ago, I was eager to become a developer but found no convincing platform. Luckily, the smartphone world was booming, and its extraordinary growth immediately caught my eye. This led to my career as an Android developer, where I had the opportunity to learn the nuances of building mobile applications. As time went on, I expanded into hybrid mobile app development, which allowed me to adapt smoothly to various platforms.
I also know the struggles of countless aspiring developers: the dilemma of which direction to head in and which technology to pursue. Hence, the idea for this blog stemmed from my experiences and insights while making my own way through mobile app development. It is geared toward those beginning to learn this subject or adding to their current knowledge.
Choosing the right development framework depends on your interests, career goals, and project requirements. If you enjoy building interactive user experiences, Web Development with React, Angular, or Vue.js could be your path. If you prefer handling server-side logic, Backend Development with Node.js, Python, or Java might be ideal. Those fascinated by mobile applications can explore Native (Kotlin, Swift) or Cross-Platform (React Native, Flutter) Development.
For those drawn to game development, Unity and Unreal Engine provide powerful tools, while Data Science & Machine Learning enthusiasts can leverage Python and frameworks like TensorFlow and PyTorch. If you’re passionate about infrastructure and automation, DevOps & Cloud Development with Docker, Kubernetes, and AWS is a strong choice. Meanwhile, Embedded Systems & IoT Development appeals to those interested in hardware-software integration using Arduino, Raspberry Pi, and C/C++.
| Path | Pros | Cons |
|------|------|------|
| Web Development | High demand, fast-paced, large community | Frequent technology changes |
| Backend Development | Scalable applications, strong job market | Can be complex, requires database expertise |
| Mobile Development | Booming industry, native vs. cross-platform options | Requires platform-specific knowledge |
| Game Development | Creative field, engaging projects | Competitive market, longer development cycles |
| Data Science & ML | High-paying field, innovative applications | Requires strong math and programming skills |
| DevOps & Cloud | Essential for modern development, automation focus | Can be complex, requires networking knowledge |
| Embedded Systems & IoT | Hardware integration, real-world applications | Limited to specialized domains |
No matter which path you choose, the key is continuous learning and hands-on experience. Stay curious, build projects, and embrace challenges on your journey to becoming a skilled developer, check out Developer Roadmaps for further insights and guidance. Happy coding!
In this blog post, I will share my journey of implementing an automated solution to publish release notes for service deployments to Confluence using Bitbucket Pipelines. This aimed to streamline our release process and ensure all relevant information was easily accessible to our team. By leveraging tools like Bitbucket and Confluence, we achieved a seamless integration that enhanced our workflow.
We configured our Bitbucket pipeline to include a new step for publishing release notes. This involved writing a script in the bitbucket-pipelines.yml file to gather the necessary information (SHA, build number, and summary of updates).
We pulled the summary of updates from our commit messages and release notes. To ensure the quality of the summaries, we emphasized the importance of writing detailed and informative commit messages.
Using the Confluence Cloud REST API, we automated the creation of Confluence pages. We made a parent page titled “Releases” and configured the script to publish a new page.
We used several repository variables to keep sensitive information secure and make the script more maintainable: REPO_TOKEN for the Bitbucket API, CONFLUENCE_USERNAME and CONFLUENCE_TOKEN for the Confluence credentials, CONFLUENCE_SPACE_KEY and CONFLUENCE_ANCESTOR_ID for where the page should be created, and CONFLUENCE_API_URL for the endpoint itself.
Here is the script we used in our bitbucket-pipelines.yml file, along with an explanation of each part:
The step runs in the atlassian/default-image:3 image, and its script does the following:

- ls -la /src/main/resources/ – the ls -la command lists the files in the specified directory to ensure the necessary files are present.
- RELEASE_NUMBER=$(grep '{application_name}.version' /src/main/resources/application.properties | cut -d'=' -f2) – the grep command extracts the release number from the application.properties file, where the property {application_name}.version should be present.
- RELEASE_TITLE="Release - $RELEASE_NUMBER Build- $BITBUCKET_BUILD_NUMBER Commit- $BITBUCKET_COMMIT" – builds the Confluence page title from the release number, the pipeline build number, and the commit SHA.
- COMMIT_MESSAGE=$(git log --format=%B -n 1 ${BITBUCKET_COMMIT}) – reads the message of the commit being built.
- The conditional block checks whether the commit message references a pull request. If it does, it extracts the PR number, fetches the pull request from the Bitbucket API using $REPO_TOKEN, and saves the PR description to description.txt to use as the page body.
- AUTH_HEADER and JSON_DATA – build the Basic authentication header from the Confluence credentials and use jq to assemble the page payload: the title, the space key, the "Releases" ancestor page ID, and the PR description as the storage-format body.
- The final wget POSTs the JSON payload to $CONFLUENCE_API_URL, and the step exits with an error if the HTTP request fails.

Putting it all together, the complete step looks like this:
# Service for publishing release notes
- step: &release-notes
    name: Publish Release Notes
    image: atlassian/default-image:3
    script:
      - ls -la /src/main/resources/
      - RELEASE_NUMBER=$(grep '{application_name}.version' /src/main/resources/application.properties | cut -d'=' -f2)
      - RELEASE_TITLE="Release - $RELEASE_NUMBER Build- $BITBUCKET_BUILD_NUMBER Commit- $BITBUCKET_COMMIT"
      - COMMIT_MESSAGE=$(git log --format=%B -n 1 ${BITBUCKET_COMMIT})
      - |
        if [[ $COMMIT_MESSAGE =~ pull\ request\ #([0-9]+) ]]; then
          PR_NUMBER=$(echo "$COMMIT_MESSAGE" | grep -o -E 'pull\ request\ \#([0-9]+)' | sed 's/[^0-9]*//g')
          RAW_RESPONSE=$(wget --no-hsts -qO- --header="Authorization: Bearer $REPO_TOKEN" "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pullrequests/${PR_NUMBER}")
          PR_DESCRIPTION=$(echo "$RAW_RESPONSE" | jq -r '.description')
          echo "$PR_DESCRIPTION" > description.txt
          AUTH_HEADER=$(echo -n "$CONFLUENCE_USERNAME:$CONFLUENCE_TOKEN" | base64 | tr -d '\n')
          JSON_DATA=$(jq -n --arg title "$RELEASE_TITLE" \
            --arg type "page" \
            --arg space_key "$CONFLUENCE_SPACE_KEY" \
            --arg ancestor_id "$CONFLUENCE_ANCESTOR_ID" \
            --rawfile pr_description description.txt \
            '{
              title: $title,
              type: $type,
              space: { key: $space_key },
              ancestors: [{ id: ($ancestor_id | tonumber) }],
              body: { storage: { value: $pr_description, representation: "storage" } }
            }')
          echo "$JSON_DATA" > json_data.txt
          wget --no-hsts --method=POST --header="Content-Type: application/json" \
            --header="Authorization: Basic $AUTH_HEADER" \
            --body-file="json_data.txt" \
            "$CONFLUENCE_API_URL" -q -O -
          if [[ $? -ne 0 ]]; then
            echo "HTTP request failed"
            exit 1
          fi
        fi
Automating the publication of release notes to Confluence using Bitbucket Pipelines has been a game-changer for our team. It has streamlined our release process and ensured all relevant information is readily available. I hope this blog post provides insights and inspiration for others looking to implement similar solutions.
In this blog post, I will guide you through the process of implementing a Spring Expression Language (SpEL) validator in a Spring Boot application. SpEL is a powerful expression language that supports querying and manipulating an object graph at runtime. By the end of this tutorial, you will have a working example of using SpEL for validation in your Spring Boot application.
First things first, let’s set up your Spring Boot project. Head over to Spring Initializer and create a new project with the following dependencies:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <version>3.4.2</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
        <version>3.4.2</version>
    </dependency>
</dependencies>
Next, we will create the main application class to bootstrap our Spring Boot application.
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
Create a SpelExpression class to hold the user input.
package com.example.demo.model;

public class SpelExpression {

    private String expression;

    // Getters and Setters
    public String getExpression() {
        return expression;
    }

    public void setExpression(String expression) {
        this.expression = expression;
    }
}
Create a controller to handle user input and validate the SpEL expression.
package com.example.demo.controller;

import com.example.demo.model.SpelExpression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.SpelParseException;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.PostMapping;

@Controller
public class SpelController {

    private final ExpressionParser parser = new SpelExpressionParser();

    @GetMapping("/spelForm")
    public String showForm(Model model) {
        model.addAttribute("spelExpression", new SpelExpression());
        return "spelForm";
    }

    @PostMapping("/validateSpel")
    public String validateSpel(@ModelAttribute SpelExpression spelExpression, Model model) {
        try {
            parser.parseExpression(spelExpression.getExpression());
            model.addAttribute("message", "The expression is valid.");
        } catch (SpelParseException e) {
            model.addAttribute("message", "Invalid expression: " + e.getMessage());
        }
        return "result";
    }
}
Create Thymeleaf templates for the form and the result page.
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>SpEL Form</title>
    <style>
        body { font-family: Arial, sans-serif; background-color: #f4f4f9; color: #333; margin: 0; padding: 0; display: flex; justify-content: center; align-items: center; height: 100vh; }
        .container { background-color: #fff; padding: 20px; border-radius: 8px; box-shadow: 0 0 10px rgba(0, 0, 0, 0.1); text-align: center; }
        h1 { color: #4CAF50; }
        form { margin-top: 20px; }
        label { display: block; margin-bottom: 8px; font-weight: bold; }
        input[type="text"] { width: 100%; padding: 8px; margin-bottom: 20px; border: 1px solid #ccc; border-radius: 4px; }
        button { padding: 10px 20px; background-color: #4CAF50; color: #fff; border: none; border-radius: 4px; cursor: pointer; }
        button:hover { background-color: #45a049; }
    </style>
</head>
<body>
    <div class="container">
        <h1>SpEL Expression Validator</h1>
        <form th:action="@{/validateSpel}" th:object="${spelExpression}" method="post">
            <div>
                <label>Expression:</label>
                <input type="text" th:field="*{expression}" />
            </div>
            <div>
                <button type="submit">Validate</button>
            </div>
        </form>
    </div>
</body>
</html>
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>Validation Result</title>
    <style>
        body { font-family: Arial, sans-serif; background-color: #f4f4f9; color: #333; margin: 0; padding: 0; display: flex; justify-content: center; align-items: center; height: 100vh; }
        .container { background-color: #fff; padding: 20px; border-radius: 8px; box-shadow: 0 0 10px rgba(0, 0, 0, 0.1); text-align: center; }
        h1 { color: #4CAF50; }
        p { font-size: 18px; }
        a { display: inline-block; margin-top: 20px; padding: 10px 20px; background-color: #4CAF50; color: #fff; text-decoration: none; border-radius: 4px; }
        a:hover { background-color: #45a049; }
    </style>
</head>
<body>
    <div class="container">
        <h1>Validation Result</h1>
        <p th:text="${message}"></p>
        <a href="/spelForm">Back to Form</a>
    </div>
</body>
</html>
Now, it’s time to run your Spring Boot application. To test the SpEL validator, navigate to http://localhost:8080/spelForm in your browser.
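A few expressions to try in the form (standard SpEL syntax): 'Hello ' + 'World' and T(java.lang.Math).max(1, 2) should pass validation, while an incomplete expression such as 'Hello' + should be reported as invalid.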
Following this guide, you successfully implemented a SpEL validator in your Spring Boot application. This powerful feature enhances your application’s flexibility and robustness. Keep exploring SpEL for more dynamic and sophisticated solutions. Happy coding!
As businesses increasingly rely on Salesforce to manage their critical data, ensuring data security has become more important than ever. Apex, Salesforce's proprietary programming language, runs in system mode by default, bypassing object- and field-level security. To protect sensitive data, developers need to enforce strict security measures.
This blog will explore Apex security best practices, including enforcing sharing rules, field-level permissions, and user access enforcement to protect your Salesforce data.
Apex’s ability to bypass security settings puts the onus on developers to implement proper Salesforce security practices. Without these protections, your Salesforce application might unintentionally expose sensitive data to unauthorized users.
By following best practices such as enforcing sharing rules, validating inputs, and using security-enforced SOQL queries, you can significantly reduce the risk of data breaches and ensure your app adheres to the platform’s security standards.
Sharing rules are central to controlling data access in Salesforce. Apex doesn’t automatically respect these sharing rules unless explicitly instructed to do so. Here’s how to enforce them in your Apex code:
Use with sharing in Apex Classes

Best Practice: Always use with sharing unless you explicitly need to override sharing rules for specific use cases. This ensures your code complies with Salesforce security standards.
public with sharing class AccountHandlerWithSharing {
    public void fetchAccounts() {
        // Ensures that sharing settings are respected
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}
public without sharing class AccountHandlerWithoutSharing {
    public void fetchAccounts() {
        // Ignores sharing settings and returns all records
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}
Apex operates in a system context by default, bypassing object- and field-level security. You must manually enforce these security measures to ensure your code respects user access rights.
Use WITH SECURITY_ENFORCED in SOQL Queries

The WITH SECURITY_ENFORCED keyword makes Salesforce perform a permission check on the fields and objects in your SOQL query, so that only accessible data is returned.
List<Account> accounts = [
    SELECT Id, Name
    FROM Account
    WHERE Industry = 'Technology'
    WITH SECURITY_ENFORCED
];
If the query references a field or object the current user cannot access, Salesforce throws an exception rather than silently returning the data, so only fully permitted queries succeed.
Use the stripInaccessible Method to Filter Inaccessible Data

Salesforce provides the stripInaccessible method, which removes inaccessible fields or relationships from query results. It also helps prevent runtime errors by ensuring no inaccessible fields are used in DML operations.
Using stripInaccessible ensures that any fields or relationships the user cannot access are stripped out of the Account record before any further processing.
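For reference, a minimal sketch of the standard Security.stripInaccessible API:

List<Account> accounts = [SELECT Id, Name, AnnualRevenue FROM Account];
// Remove any fields the running user cannot read
SObjectAccessDecision decision = Security.stripInaccessible(AccessType.READABLE, accounts);
List<Account> safeAccounts = decision.getRecords();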
Apex Managed Sharing can be a powerful tool when you need to manage record access dynamically. This feature allows developers to programmatically share records with specific users or groups.
public void shareRecord(Id recordId, Id userId) {
    CustomObject__Share share = new CustomObject__Share();
    share.ParentId = recordId;
    share.UserOrGroupId = userId;
    share.AccessLevel = 'Edit'; // Options: 'Read', 'Edit', or 'All'
    insert share;
}
This code lets you share a custom object record with a specific user and grant them Edit access. Apex Managed Sharing allows more flexible, dynamic record-sharing controls.
Here are some critical tips for improving security in your Apex and Lightning applications:
Hardcoding Salesforce IDs, such as record IDs or profile IDs, can introduce security vulnerabilities and reduce code flexibility. Retrieve IDs dynamically instead, and consider using Custom Settings or Custom Metadata for more flexible and secure configurations.
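For example, rather than pasting a profile ID into the code, look it up at runtime (the profile name here is illustrative):

// Query the Id instead of hardcoding it
Id adminProfileId = [SELECT Id FROM Profile WHERE Name = 'System Administrator' LIMIT 1].Id;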
It is essential to sanitize all user inputs to prevent threats like SOQL injection and Cross-Site Scripting (XSS). Always use parameterized queries and escape characters where necessary.
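Bind variables are the safest option because they are never parsed as part of the query; if dynamic SOQL strings are unavoidable, escape the input first. A brief illustration, with userInput standing in for any untrusted value:

// Safe: bind variables are parameterized automatically
List<Account> accounts = [SELECT Id FROM Account WHERE Name = :userInput];

// If dynamic SOQL is unavoidable, escape user input explicitly
String query = 'SELECT Id FROM Account WHERE Name = \'' + String.escapeSingleQuotes(userInput) + '\'';
List<Account> results = Database.query(query);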
Use stripInaccessible in DML Operations

To prevent processing inaccessible fields, always use the stripInaccessible method when handling records containing fields restricted by user permissions.
Ensure you use the correct sharing context for each class or trigger. Avoid granting unnecessary access by using with sharing for most of your classes.
Writing tests that simulate various user roles using System.runAs() is crucial to ensure your code respects sharing rules, field-level permissions, and other security settings.
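A minimal test sketch (the profile name is illustrative):

@isTest
private class AccountAccessTest {
    @isTest
    static void respectsSharingForStandardUser() {
        User u = [SELECT Id FROM User WHERE Profile.Name = 'Standard User' AND IsActive = true LIMIT 1];
        System.runAs(u) {
            // Everything here runs with u's permissions and sharing rules
            List<Account> visible = [SELECT Id FROM Account WITH SECURITY_ENFORCED];
            System.assertNotEquals(null, visible);
        }
    }
}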
Implementing Apex security best practices is essential to protect your Salesforce data. Whether you are enforcing sharing rules, respecting field-level permissions, or programmatically managing record sharing, these practices help ensure that only authorized users can access sensitive data.
When building your Salesforce applications, always prioritize security by choosing the right sharing context (with sharing by default), enforcing object- and field-level permissions with WITH SECURITY_ENFORCED and stripInaccessible, sanitizing user inputs, and testing your code across user roles with System.runAs().
By adhering to these practices, you can build secure Salesforce applications that meet business requirements and ensure data integrity and compliance.