Back-End Development Articles / Blogs / Perficient

Promises Made Simple: Understanding Async/Await in JavaScript

JavaScript is single-threaded. That means it runs one task at a time, on one core. But then how does it handle things like API calls, file reads, or user interactions without freezing up?

That’s where Promises and async/await come into play. They help us handle asynchronous operations without blocking the main thread.

Let’s break down these concepts in the simplest way possible, so that whether you’re a beginner or a seasoned dev, it just clicks.

JavaScript has something called an event loop. It’s always running, checking if there’s work to do—like handling user clicks, network responses, or timers. In the browser, the browser provides and runs the event loop; in Node.js, Node takes care of it.

When an async function runs and hits an await, it pauses that function. It doesn’t block everything—other code keeps running. When the awaited Promise settles, that async function picks up where it left off.

 

What is a Promise?

A Promise is an object that represents the eventual result of an asynchronous operation. It can be in one of three states:

  • ⏳ Pending – Still waiting for the result.
  • ✅ Fulfilled – The operation completed successfully.
  • ❌ Rejected – Something went wrong.

Instead of using nested callbacks (aka “callback hell”), Promises allow cleaner, more manageable code using chaining.

 Example:

fetchData()
  .then(data => process(data))
  .then(result => console.log(result))
  .catch(error => console.error(error));

 

Common Promise Methods

Let’s look at the essential Promise utility methods:

  1. Promise.all()

Waits for all promises to resolve. If any promise fails, the whole thing fails.

Promise.all([p1, p2, p3])
  .then(results => console.log(results))
  .catch(error => console.error(error));
  • ✅ Resolves when all succeed.
  • ❌ Rejects fast if any fail.
  2. Promise.allSettled()

Waits for all promises, regardless of success or failure.

Promise.allSettled([p1, p2, p3])
  .then(results => console.log(results));
  • Each result shows { status: "fulfilled", value } or { status: "rejected", reason }.
  • Great when you want all results, even the failed ones.
  3. Promise.race()

Returns as soon as one promise settles (either resolves or rejects).

Promise.race([p1, p2, p3])
  .then(result => console.log('Fastest:', result))
  .catch(error => console.error('First to fail:', error));
  4. Promise.any()

Returns the first fulfilled promise. Ignores rejections unless all fail.

Promise.any([p1, p2, p3])
  .then(result => console.log('First success:', result))
  .catch(error => console.error('All failed:', error));

5. Promise.resolve() / Promise.reject()

  • Promise.resolve(value) creates a promise resolved with value.
  • Promise.reject(reason) creates a promise rejected with reason.

Used for quick returns or mocking async behavior.
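
For instance, here is a minimal sketch (getConfig and its cache argument are hypothetical):

// Quick return: wrap an already-available value so callers can always use .then()
function getConfig(cache) {
  if (cache.config) return Promise.resolve(cache.config); // settles immediately with the value
  return Promise.reject(new Error('config not loaded'));  // settles immediately with an error
}

getConfig({ config: { theme: 'dark' } })
  .then(cfg => console.log(cfg.theme)) // dark
  .catch(err => console.error(err.message));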

 

Why Not Just Use Callbacks?

Before Promises, developers relied on callbacks:

getData(function(response) {
  process(response, function(result) {
    finalize(result);
  });
});

This worked, but it quickly became messy: the infamous callback hell.

 

 What is async/await Really Doing?

Under the hood, async/await is just syntactic sugar over Promises. It makes asynchronous code look synchronous, improving readability and debuggability.

How it works:

  • When you declare a function with async, it always returns a Promise.
  • When you use await inside an async function, the execution of that function pauses at that point.
  • It waits until the Promise is either resolved or rejected.
  • Once resolved, it returns the value.
  • If rejected, it throws the error, which you can catch using try…catch.
async function greet() {
  return 'Hello';
}
greet().then(msg => console.log(msg)); // Hello

Even though you didn’t explicitly return a Promise, greet() returns one.

 

Execution Flow: Synchronous vs Async/Await

Let’s understand how await interacts with the JavaScript event loop.

console.log("1");

setTimeout(() => console.log("2"), 0);

(async function() {
  console.log("3");
  await Promise.resolve();
  console.log("4");
})();

console.log("5");

Output:


1
3
5
4
2

Explanation:

  • The await doesn’t block the main thread.
  • It puts the rest of the async function in the microtask queue, which runs after the current stack and before setTimeout (macrotask).
  • That’s why “4” comes after “5”.

 

 Best Practices with async/await

  1. Use try/catch for Error Handling

Avoid unhandled promise rejections by always wrapping await logic inside a try/catch.

async function getUser() {
  try {
    const res = await fetch('/api/user');
    if (!res.ok) throw new Error('User not found');
    const data = await res.json();
    return data;
  } catch (error) {
    console.error('Error fetching user:', error.message);
    throw error; // rethrow if needed
  }
}
  2. Run Parallel Requests with Promise.all

Don’t await sequentially unless there’s a dependency between the calls.

❌ Bad:

const user = await getUser();
const posts = await getPosts(); // waits for user even if not needed

✅ Better:

const [user, posts] = await Promise.all([getUser(), getPosts()]);
  3. Avoid await in Loops (when possible)

❌ Bad:

//Each iteration waits for the previous one to complete
for (let user of users) {
  await sendEmail(user);
}

✅ Better:

//Run in parallel
await Promise.all(users.map(user => sendEmail(user)));

Common Mistakes

  1. Using await outside async
const data = await fetch(url); // ❌ SyntaxError (outside of ES modules)
  2. Forgetting to handle rejections
    If your async function throws and you don’t .catch() it (or use try/catch), your app may crash in Node or log warnings in the browser (see the first sketch after this list).
  3. Blocking unnecessary operations
    Don’t await things that don’t need to be awaited. Only await when the next step depends on the result (see the second sketch after this list).
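
Two quick sketches for the last two mistakes. First, a last-resort logger for forgotten rejections in Node.js (the browser equivalent is the unhandledrejection window event):

process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});

Second, to avoid blocking on work you don’t need yet, start it early and await it only when the result is required (getUser and doUnrelatedWork are hypothetical helpers, inside an async function):

const userPromise = getUser(); // fire the request immediately, don't await yet
doUnrelatedWork();             // runs while the request is in flight
const user = await userPromise; // await only when the result is actually needed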

 

Real-World Example: Chained Async Workflow

Imagine a system where:

  • You authenticate a user,
  • Then fetch their profile,
  • Then load related dashboard data.

Using async/await:

async function initDashboard() {
  try {
    const token = await login(username, password);
    const profile = await fetchProfile(token);
    const dashboard = await fetchDashboard(profile.id);
    renderDashboard(dashboard);
  } catch (err) {
    console.error('Error loading dashboard:', err);
    showErrorScreen();
  }
}

Much easier to follow than chained .then() calls, right?

 

Converting Promise Chains to Async/Await

Old way:

login()
  .then(token => fetchUser(token))
  .then(user => showProfile(user))
  .catch(error => showError(error));

With async/await:

async function start() {
  try {
    const token = await login();
    const user = await fetchUser(token);
    showProfile(user);
  } catch (error) {
    showError(error);
  }
}

Cleaner. Clearer. Less nested. Easier to debug.

 

Bonus utility wrapper for Error Handling

If you hate repeating try/catch, use a helper:

const to = promise => promise.then(res => [null, res]).catch(err => [err]);

async function loadData() {
  const [err, data] = await to(fetchData());
  if (err) return console.error(err);
  console.log(data);
}

 

Final Thoughts

Both Promises and async/await are powerful tools for handling asynchronous code. Promises came first and are still widely used, especially in libraries. async/await is now the preferred style in most modern JavaScript apps because it makes the code cleaner and easier to understand.

 

Tip: You don’t have to choose one forever — they work together! In fact, async/await is built on top of Promises.

 

Scoping, Hoisting and Temporal Dead Zone in JavaScript

Before mastering JavaScript, it’s crucial to understand how it thinks behind the scenes. Concepts like scope, hoisting, and the temporal dead zone might sound tricky at first, but they form the backbone of how your code behaves.
In this blog, we’ll break down these key ideas in the simplest way possible—so you can write cleaner code, avoid unexpected bugs, and truly understand what’s happening when your script runs.

What is Scope in JavaScript?

Think of scope like a boundary or container that controls where you can use a variable in your code.

In JavaScript, a variable is only available in the part of the code where it was created. If you try to use it outside that area, it won’t work—that’s because of scope.

This helps in two big ways:

  • Keeps your code safe – Only the right parts of the code can access the variable.
  • Avoids name clashes – You can use the same variable name in different places without them interfering with each other.

JavaScript mainly uses two types of scope:

1. Global Scope – Available everywhere in your code.

2. Local Scope – Available only inside a specific function or block.

 

Global Scope

When you start writing JavaScript code, you’re already in the global scope—this is like the outermost area of your code where variables can live.

If you create a variable outside of any function or block, it’s considered global, which means it can be used anywhere in your code.

var a = 5; // Global variable
function add() {
  return a + 10; // Using the global variable inside a function
}
console.log(window.a); // 5

In this example, a is declared outside of any function, so it’s globally available—even inside add().

A quick note:

  • If you declare a variable with var, it becomes a property of the window object in browsers.
  • But if you use let or const, the variable is still global, but not attached to window.
let name = "xyz";
function changeName() {
  name = "abc";  // Changing the value of the global variable
}
changeName();
console.log(name); // abc

In this example, we didn’t create a new variable—we just changed the value of the existing one.

👉 Important:
If you redeclare a global variable inside a function (using let, const, or var again), JavaScript treats it as a new variable in a new scope—not the same one. We’ll cover that in more detail later.

 

 Local Scope

In JavaScript, local scope means a variable is only accessible in a certain part of the code—usually inside a function or a block.

There are two types of local scope:

1. Functional Scope

Whenever you create a function, it creates its own private area for variables. If you declare a variable inside a function, it only exists inside that function.

let firstName = "Shilpa"; // Global
function changeName() {
  let lastName = "Syal"; // Local to this function
  console.log(`${firstName} ${lastName}`);
}
changeName();
console.log(lastName); // ❌ Error! Not available outside the function

You can even use the same variable name in different functions without any issue:

function mathMarks() {
  let marks = 80;
  console.log(marks);
}
function englishMarks() {
  let marks = 85;
  console.log(marks);
}

Here, both marks variables are separate because they live in different function scopes.

 

2. Block Scope

Thanks to let and const, you can now create variables that only exist inside a block (like an if, for, or {}).

 

function getMarks() {
  let marks = 60;
  if (marks > 50) {
    const points = 10;
    console.log(marks + points); // ✅ Works here
  }
  console.log(points); // ❌ Uncaught ReferenceError: points is not defined
}

Because the points variable is declared inside the if block with const, it is not accessible outside that block, as shown above. Now try the example with the var keyword, i.e., declare points with var, and spot the difference (see the sketch below).
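
Here is that var variant as a sketch, so you can see the difference:

function getMarks() {
  let marks = 60;
  if (marks > 50) {
    var points = 10; // var is function-scoped, not block-scoped
    console.log(marks + points); // 70 ✅ Works here
  }
  console.log(points); // 10 ✅ also works, var ignores the block
}
getMarks();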

LEXICAL SCOPING & NESTED SCOPE:

When you create a function (outer function) that contains another function (inner function), then the inner function has access to the outer function’s variables and methods. This is known as Lexical Scoping.

function outerFunction() {
  let outerVar = "I’m outside";
  function innerFunction() {
    console.log(outerVar); // ✅ Can access outerVar
  }
  innerFunction();
}

In other terms, variables & methods defined in parent function are automatically available to its child functions. But it doesn’t work the other way around—the outer function can’t access the inner function’s variables.

 

VARIABLE SCOPE OR VARIABLE SHADOWING:

You can declare variables with the same name in different scopes. If there’s a variable in the global scope and you create a variable with the same name inside a function, you will not get an error. In this case, local variables take priority over global variables. This is known as variable shadowing, as the inner scope variable temporarily shadows the outer scope variable with the same name.

If the local variable and global variable have the same name then changing the value of one variable does not affect the value of another variable.

let name = "xyz";
function getName() {
  let name = "abc"; // Redeclaring the name variable
  console.log(name); // abc
}
getName();
console.log(name); // xyz

To access a variable, the JS engine first looks in the scope that is currently executing. If it doesn’t find the variable there, it looks in the closest parent scope, and that lookup continues up the chain until the engine reaches the global scope. If the global scope doesn’t have the variable either, a ReferenceError is thrown, because the variable doesn’t exist anywhere in the scope chain.

let bonus = 500;
function getSalary() {
  if (true) {
    return 10000 + bonus; // Looks up and finds bonus in the outer scope
  }
}
console.log(getSalary()); // 10500

 

Key Takeaways: Scoping Made Simple

Global Scope: Variables declared outside any function are global and can be used anywhere in your code.

Local Scope: Variables declared inside a function exist only inside that function and disappear once the function finishes.

Global Variables Last Longer: They stay alive as long as your program is running.

Local Variables Are Temporary: They’re created when the function runs and removed once it ends.

Lexical Scope: Inner functions can access variables from outer functions, but not the other way around.

Block Scope with let and const: You can create variables that exist only inside {} blocks like if, for, etc.

Same Name, No Clash: Variables with the same name in different scopes won’t affect each other—they live in separate “worlds.” 

Hoisting

To understand Hoisting in JS, it’s essential to know how execution context works. Execution context is an environment where JavaScript code is executed.

It has two main phases:

1. Creation Phase: During this phase, the JS engine allocates memory for (hoists) variables, functions, and objects. Basically, hoisting happens here.

2. Execution Phase: During this phase, code is executed line by line.

When JavaScript code runs, the engine hoists all variables and functions, i.e., it assigns memory space for them; variables declared with var are initialized with the special value undefined.

 

Here are the key takeaways about hoisting; let’s explore some examples to illustrate how it works in different scenarios:

  1. Functions – Function declarations are fully hoisted. They can be invoked before their declaration in code.
foo(); // Output: "Hello, world!"
function foo() {
    console.log("Hello, world!");
}
  2. var – Variables declared with var are hoisted to the top of their scope but initialized with undefined, so they are accessible before their declaration (with the value undefined).
console.log(x); // Output: undefined
var x = 5;

This code seems straightforward, but it’s interpreted as:

var x;
console.log(x); // Output: undefined
x = 5;

3. let, const – Variables declared with let and const are hoisted in their local (or script) scope, but they stay in the Temporal Dead Zone (TDZ) until their declaration is encountered. Accessing them in the TDZ results in a ReferenceError.

console.log(x); // Throws ReferenceError: Cannot access 'x' before initialization
let x = 5;


What is Temporal Dead Zone (TDZ)?

In JavaScript, all variable declarations—whether made using var, let, or const—are hoisted, meaning the memory for them is set aside during the compilation phase, before the code actually runs. However, the behaviour of hoisting differs based on how the variable is declared.

For variables declared with let and const, although they are hoisted, they are not initialized immediately like var variables. Instead, they remain in an uninitialized state and are placed in a separate memory space. During this phase, any attempt to access them will result in a Reference Error.

This period—from the start of the block until the variable is initialized—is known as the Temporal Dead Zone (TDZ). It’s called a “dead zone” because the variable exists in memory but cannot be accessed until it has been explicitly declared and assigned a value in the code.

console.log(b); // undefined (var b is hoisted and initialized with undefined)
console.log(a); // ReferenceError: Cannot access 'a' before initialization (a is in the TDZ)
let a = 10;
var b = 100;

👉 Important: The Temporal Dead Zone helps prevent the use of variables before they are properly declared and initialized, making code more predictable and reducing bugs.

 

🧾 Conclusion

JavaScript hoisting and scoping are foundational concepts that can feel tricky at first, but once you understand them, they make your code more structured and predictable. Hoisting helps explain why some variables and functions work even before they’re declared, while scoping defines where your variables live and how accessible they are. By keeping these concepts in mind and practicing regularly, you’ll be able to write cleaner, more reliable JavaScript. The more you experiment with them, the more confident you’ll become as a developer. Keep learning, keep building, and everything will start to click. Happy coding! 🙌

 

 

Convert a Text File from UTF-8 Encoding to ANSI using Python in AWS Glue

To convert a text file from UTF-8 encoded data to ANSI using AWS Glue, you will typically work with Python or PySpark. However, it’s important to understand that ANSI is not a specific encoding but often refers to Windows-1252 (or similar 8-bit encodings) in a Windows context.

AWS Glue, running on Apache Spark, uses UTF-8 as the default encoding. Converting to ANSI requires handling the character encoding during the writing phase, because Spark itself doesn’t support writing files in encodings other than UTF-8 natively. But there are a few workarounds.

Here’s a step-by-step guide to converting a text file from UTF-8 to ANSI using Python in AWS Glue. Assume you’re working with a plain text file and want to output a similarly formatted file in ANSI encoding.

General Process Flow

Technical Approach Step-By-Step Guide

Step 1: Add the import statements to the code

import boto3

Step 2: Specify the source/target file paths & S3 bucket details

# Initialize S3 client
s3_client = boto3.client('s3')
s3_key_utf8 = 'utf8_file_path/filename.txt'
s3_key_ansi = 'ansi_file_path/filename.txt'

# Specify S3 bucket and file paths
bucket_name = 'your-s3-bucket-name'
input_key = s3_key_utf8   # S3 path/name of the input UTF-8 encoded file
output_key = s3_key_ansi  # S3 path/name to save the ANSI encoded file

Step 3: Write a function to convert the text file from UTF-8 to ANSI, based on the parameters supplied (S3 bucket name, source-file, target-file)

# Function to convert UTF-8 file to ANSI (Windows-1252) and upload back to S3
def convert_utf8_to_ansi(bucket_name, input_key, output_key):
    # Download the UTF-8 encoded file from S3
    response = s3_client.get_object(Bucket=bucket_name, Key=input_key)
    # Read the file content from the response body (UTF-8 encoded)
    utf8_content = response['Body'].read().decode('utf-8')
    # Convert the content to ANSI encoding (Windows-1252)
    ansi_content = utf8_content.encode('windows-1252', 'ignore')  # 'ignore' to handle invalid characters
    # Upload the converted file to S3 (in ANSI encoding)
    s3_client.put_object(Bucket=bucket_name, Key=output_key, Body=ansi_content) 

Step 4: Call the function that converts the text file from UTF-8 to ANSI

# Call the function to convert the file 
convert_utf8_to_ansi(bucket_name, input_key, output_key) 
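
As an optional sanity check (a sketch using the same boto3 client), you can download the converted object and confirm it decodes cleanly as Windows-1252:

# Verify the round trip: the converted file should decode cleanly as windows-1252
check = s3_client.get_object(Bucket=bucket_name, Key=output_key)
ansi_bytes = check['Body'].read()
print(ansi_bytes.decode('windows-1252')[:200])  # preview the first 200 characters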

 

Boost Developer Productivity with Quarkus Live Coding

Quarkus has gained traction as a modern Java framework designed for cloud-native development. In my previous blog, I discussed why learning Quarkus is a great choice. Today, let’s dive deeper into one of its standout features: Live Coding.

What is Quarkus Live Coding?

Live Coding in Quarkus provides an instant development experience where changes to your application’s code, configuration, and even dependencies are reflected in real time without restarting the application. This eliminates the need for slow rebuild-restart cycles, significantly improving productivity.

How Does Live Coding Work?

Quarkus automatically watches for file changes and reloads the necessary components without restarting the entire application. This feature is enabled by default in dev mode and can be triggered using:

mvn quarkus:dev

or if you are using Gradle:

gradle quarkusDev

Once the development server is running, any modifications to your application will be instantly reflected when you refresh the browser or make an API request.

Benefits of Live Coding

  1. Faster Development: Eliminates long wait times associated with traditional Java application restarts.
  2. Enhanced Feedback Loop: See the impact of code changes immediately, improving debugging and fine-tuning.
  3. Seamless Config and Dependency Updates: Application configurations and dependencies can be modified dynamically.
  4. Works with REST APIs, UI, and Persistence Layer: Whether you’re building RESTful services, working with frontend code, or handling database transactions, changes are instantly visible.

Live Coding in Action

Imagine you are developing a REST API with Quarkus and need to update an endpoint. With Live Coding enabled, you simply modify the resource class:

@Path("/hello")
public class GreetingResource {

    @GET
    public String hello() {
        return "Hello, Quarkus!";
    }
}

Change the return message to:

    return "Hello, Live Coding!";

Without restarting the application, refresh the browser or send an API request, and the change is immediately visible. No waiting, no downtime.
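
For example, assuming the default dev-mode port of 8080, a quick curl call confirms the change:

curl http://localhost:8080/hello
# Hello, Live Coding!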

Enabling Live Coding in Remote Environments

While Live Coding is enabled by default in dev mode, you can also enable it in remote environments using:

mvn quarkus:remote-dev -Dquarkus.live-reload.url=<remote-server>

This allows developers working in distributed teams or cloud environments to take advantage of fast feedback cycles.
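
Note that remote dev mode needs a bit of configuration on the application side as well. As a rough sketch (property names as documented by Quarkus; the values here are placeholders), your application.properties might include:

quarkus.package.type=mutable-jar              # newer Quarkus versions use quarkus.package.jar.type=mutable-jar
quarkus.live-reload.password=changeit         # shared secret between your machine and the remote instance
quarkus.live-reload.url=http://my-remote-host:8080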

Conclusion

Quarkus Live Coding is a game-changer for Java development, reducing turnaround time and enhancing the overall developer experience. If you’re transitioning to Quarkus, leveraging this feature can significantly improve your workflow.

Have you tried Quarkus Live Coding? Share your experience in the comments!
Stay tuned for more features on security and reactive programming with Quarkus.

Optimizely Configured Commerce and Spire CMS – Figuring out Handlers

I’m now a couple of months into exploring Optimizely Configured Commerce and Spire CMS. While I’m up to speed with the Configured Commerce side of things (having past experience with Customized Commerce), the Spire CMS side is a bit daunting, having worked with traditional Optimizely CMS for a while. One of the challenges is figuring out handlers, a key concept in both Customized Commerce and Spire CMS.

And yes, there is documentation, but it’s more high-level and not enough to understand the inner functioning of the code (or maybe I just haven’t had the patience to go through it all yet :)).

Needless to say, I took a rather “figure it out by myself” approach here. I find that this is a much better way to learn and remember stuff :).

Here’s to figuring out handlers

In a commerce site, there is Order History for every customer, with a “Reorder” capability. I will tweak the behavior of this Reorder action and prevent adding a specific SKU to cart again when user clicks “Reorder”.

Challenge #1 – Where does the code tied to reorder live?

Depending on what you are looking for and what you need to change, this can be different files in the Frontend source code.

Challenge #2 – How do I find the right file?

I start by searching on keywords like “reorder”, which does lead me to some files, but they are mostly .tsx files, aka React components that have the Reorder button on them. What I’m looking for instead is the actual method that passes the current order lines to add to cart, so I can intercept and tweak it.

Challenge #3 – How do I find the file which takes in Order Lines and adds to cart?

I decided it was time to put my browser skills to good use. I launch the site, open dev tools, and hit Reorder to monitor all the network calls that occur. And bravo… I see the API call to the Cart API for bulk load, which is what this action does. Here’s what that looks like:

api/v1/carts/current/cartlines/batch

with a Payload of cartlines sent to add to Cart.

Reverse engineering in action

Step #1 – I traced this back in code. Looked for “cartlines/batch” and found 1 file – CartService.ts

It’s OOTB code, but for people new to this like me, we don’t know which folder has what. So, I’ll make this one step easier for you by telling you exactly where this file lives. You will find it at

FrontEnd\modules\client-framework\src\Services\CartService.ts

The method that makes the api call is addLineCollection(parameter: AddCartLinesApiParameter).

Step #2 – I now search for files that called this method. I found quite a few files that call this, but for my specific scenario, I stuck to the ones that said “reorder” specifically. These are the Frontend Handlers in Spire CMS.

Here’s the list and paths of the files that are relevant to the context here :

  • FrontEnd\modules\client-framework\src\{blueprintName}\Pages\OrderDetails\Handlers\Reorder.ts
  • FrontEnd\modules\client-framework\src\{blueprintName}\Pages\OrderHistory\Handlers\Reorder.ts
  • FrontEnd\modules\client-framework\src\{blueprintName}\Pages\OrderStatus\Handlers\Reorder.ts

Once I see the line that makes the call to addLineCollection() method, I check how the parameter is being set.

Step #3 – All that’s left now is to update the code that sets the AddCartLinesApiParameter for this call, from the existing Order’s order lines. I add a filter to exclude the one specific SKU that I don’t want re-added to cart on reorder, on the OrderLines collection. Looks something like this :

cartLines: order.value.orderLines!.filter(o => o.productErpNumber !== '{my specific SKU}')

And that was it. I save the files, Webpack rebuilds, I refresh my site, hit Reorder on the order that had this SKU, and it no longer gets added to cart.

Conclusion

In theory, it sounds pretty straightforward. You should know the api that gets called, where the calls live in code, which handlers make these calls for each action etc.
But for beginners like me, it really isn’t. You don’t always know the structure of the Spire CMS codebase, the concept of blueprints or handlers, the API calls that are made per action, or how to work with React/TypeScript code. So in my opinion, this is a helpful little exercise, and the learning from it now sticks in memory for other similar use cases.
Hope you find it helpful too!
Optimizing Experiences with Optimizely: Custom Audience Criteria for Mobile Visitors

In today’s mobile-first world, delivering personalized experiences to visitors using mobile devices is crucial for maximizing engagement and conversions. Optimizely’s powerful experimentation and personalization platform allows you to define custom audience criteria to target mobile users effectively.

By leveraging Optimizely’s audience segmentation, you can create tailored experiences based on factors such as device type, operating system, screen size, and user behavior. Whether you want to optimize mobile UX, test different layouts, or personalize content for Android vs. iOS users, understanding how to define mobile-specific audience criteria can help you drive better results.

In this blog, we’ll explore how to set up simple custom audience criteria for mobile visitors in Optimizely, the key benefits of mobile targeting, and the best practices to enhance user experiences across devices. Let’s dive in!

This solution is based on Example – Create audience criteria, which you can find in the Optimizely documentation.

Create the settings and criterion classes

First, we need to create two classes in our solution:

The VisitorDeviceTypeCriterionSettings class needs to inherit from the CriterionModelBase class, and we need only one property (a setting) to determine whether the visitor is using a desktop or a mobile device.

public bool IsMobile { get; set; }

The abstract CriterionModelBase class requires you to implement the Copy() method. Because you are not using complex reference types, you can implement it by returning a shallow copy as shown (see Create custom audience criteria):

public override ICriterionModel Copy()
{
    return base.ShallowCopy();
}

The entire class will look something like this:

using EPiServer.Data.Dynamic;
using EPiServer.Personalization.VisitorGroups;

namespace AlloyTest.Personalization.Criteria
{
    [EPiServerDataStore(AutomaticallyRemapStore = true)]
    public class VisitorDeviceTypeCriterionSettings : CriterionModelBase
    {
        public bool IsMobile { get; set; }

        public override ICriterionModel Copy()
        {
            // if this class has reference types that require deep copying, then
            // that implementation belongs here. Otherwise, you can just rely on
            // shallow copy from the base class
            return base.ShallowCopy();
        }
    }
}

Now, we need to implement the criterion class VisitorDeviceTypeCriterion and inherit the abstract CriterionBase class with the settings class as the type parameter:

public class VisitorDeviceTypeCriterion : CriterionBase<VisitorDeviceTypeCriterionSettings>

Add a VisitorGroupCriterion attribute to set the category, name, and description of the criterion (for more available VisitorGroupCriterion properties, see Create custom audience criteria):

[VisitorGroupCriterion(
    Category = "MyCustom",
    DisplayName = "Device Type",
    Description = "Criterion that matches type of the user's device"
)]

The abstract CriterionBase class requires you to implement an IsMatch() method that determines whether the current user matches this audience criterion. In this case, we need to determine from which device the visitor is accessing our site. Because Optimizely doesn’t provide this out of the box, we need to figure out that part.

One of the solutions is to use information from the request header, from the User-Agent field and analyze it to determine the OS and device type. We can do that by writing our match method:

public virtual bool MatchBrowserType(string userAgent)
{
    var os =
        new Regex(
            @"(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od|ad)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino",
            RegexOptions.IgnoreCase | RegexOptions.Multiline);
    var device =
        new Regex(
            @"1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-",
            RegexOptions.IgnoreCase | RegexOptions.Multiline);
    var deviceInfo = string.Empty;

    if (os.IsMatch(userAgent))
    {
        deviceInfo = os.Match(userAgent).Groups[0].Value;
    }

    if (userAgent.Length >= 4 && device.IsMatch(userAgent.Substring(0, 4))) // guard against very short user agents
    {
        deviceInfo += device.Match(userAgent).Groups[0].Value;
    }

    if (!string.IsNullOrEmpty(deviceInfo))
    {
        return true;
    }

    return false;
}

Now, we can go back and implement the IsMatch() method that is required by CriterionBase abstract class.

public override bool IsMatch(IPrincipal principal, HttpContext httpContext)
{
    return MatchBrowserType(httpContext.Request.Headers["User-Agent"].ToString());
}

 

Test the criterion

In the CMS, we need to create a new audience criterion. When you click the ‘Add Criteria’ button, there will be a ‘MyCustom’ criteria group containing our criterion:

When you select the ‘Device Type’ criteria, you will see something like this:

We can easily add a label for the checkbox by using Optimizely’s translation functionality. Create a new XML file VisitorGroupCriterion.xml and place it in your translations folder where your translation files are, like this:

Put this into the file that you created:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<languages>
  <language name="English" id="en-us">
    <visitorgroups>
      <criteria>
        <ismobile>
          <key>Is Mobile Device (Use this setting to show content only on Mobile)</key>
        </ismobile>
      </criteria>
    </visitorgroups>
  </language>
</languages>

 

There is one more thing to do. In VisitorDeviceTypeCriterionSettings.cs, decorate the IsMobile property with the translation definition. Add this attribute:

[CriterionPropertyEditor(LabelTranslationKey = "/visitorgroups/criteria/ismobile/key")]

It should look like this:

Now, in the editor view, we have a label for the checkbox.

 

Personalize the content by setting the content for this visitor group.

Desktop view:

 

Mobile view:

You can see that there is content that is only visible if you access the site with a mobile device.

 

And that’s it!

How to Automate Content Updates Using AEM Groovy Scripts

As an AEM author, updating existing page content is a routine task. However, manual updates, like rolling out a new template, can become tedious and costly when dealing with thousands of pages.

Fortunately, automation scripts can save the day. Using Groovy scripts within AEM can streamline the content update process, reducing time and costs. In this blog, we’ll outline the key steps and best practices for using Groovy scripts to automate content updates.

The Benefits of Utilizing Groovy Scripts

Groovy is a powerful scripting language that integrates seamlessly with AEM. It allows developers to perform complex operations with minimal code, making it an excellent tool for tasks such as: 

  • Automating repetitive tasks
  • Accessing and modifying repository content 
  • Bulk updating properties across multiple nodes
  • Managing template and component mappings efficiently

The Groovy Console for AEM provides an intuitive interface for running scripts, enabling rapid development and testing without redeploying code.   

Important things to know about Groovy Console 

  • Security – Due to security concerns, Groovy Console should not be installed in any production environment.  
  • Any content that needs to be updated in production environments should be packaged to a lower environment, using Groovy Console to update and validate content. Then you can repackage and deploy to production environments.  

How to Update Templates for Existing Web Pages

To illustrate how to use Groovy, let’s learn how to update templates for existing web pages authored inside AEM.

Our first step is to identify the following:

  • Templates that need to be migrated
  • Associated components and their dependencies
  • Potential conflicts or deprecated functionalities

You should have source and destination template component mappings and page paths.  

As a pre-requisite for this solution, you will need to have JDK 11, Groovy 3.0.9, and Maven 3.6.3.   

Steps to Create a Template Mapping Script 

1. Create a CSV File 

The CSV file should contain two columns: 

  • Source → The legacy template path. 
  • Target → The new template path. 

Save this file as template-map.csv.

Source,Target
"/apps/legacy/templates/page-old","/apps/new/templates/page-new"
"/apps/legacy/templates/article-old","/apps/new/templates/article-new"

2. Load the Mapping File in migrate.groovy 

In your migrate.groovy script, insert the following code to load the mapping file: 

def templateMapFile = new File("work${File.separator}config${File.separator}template-map.csv")
assert templateMapFile.exists() : "Template Mapping File not found!"

3. Implement the Template Mapping Logic 

Next, we create a function to map source templates to target templates by utilizing the CSV file. 

String mapTemplate(sourceTemplateName, templateMapFile) {
    /* this function uses the sourceTemplateName to look up the template
       we will use to create new XML */
    def template = ''
    assert templateMapFile : "Template Mapping File not found!"

    for (templateMap in parseCsv(templateMapFile.getText(ENCODING), separator: SEPARATOR)) {
        def sourceTemplate = templateMap['Source']
        def targetTemplate = templateMap['Target']
        if (sourceTemplateName.equals(sourceTemplate)) {
            template = targetTemplate
        }
    }
    assert template : "Template ${sourceTemplateName} not found!"

    return template
}
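
The mapping function above only computes the new template path. As a rough usage sketch (assuming the AEM Groovy Console, which binds resourceResolver for you; the page paths here are hypothetical), applying it to pages might look like this:

import org.apache.sling.api.resource.ModifiableValueMap

// Hypothetical page list; in practice you might query for pages that use a legacy template
def pagePaths = ['/content/site/en/page-a', '/content/site/en/page-b']

pagePaths.each { path ->
    def content = resourceResolver.getResource("${path}/jcr:content")
    def props = content.adaptTo(ModifiableValueMap)
    def oldTemplate = props.get('cq:template', String)
    props.put('cq:template', mapTemplate(oldTemplate, templateMapFile))
}
resourceResolver.commit() // persist the changes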

After creating a package using Groovy script on your local machine, you can directly install it through the Package Manager. This package can be installed on both AEM as a Cloud Service (AEMaaCS) and on-premises AEM.

Execute the script in a non-production environment, verify that templates are correctly updated, and review logs for errors or skipped nodes. After running the script, check content pages to ensure they render as expected, validate that new templates are functioning correctly, and test associated components for compatibility. 

Groovy Scripts Minimize Manual Effort and Reduce Errors

Leveraging automation through scripting languages like Groovy can significantly simplify and accelerate AEM migrations. By following a structured approach, you can minimize manual effort, reduce errors, and ensure a smooth transition to the new platform, ultimately improving overall maintainability. 

More AEM Insights

Don’t miss out on more AEM insights and follow our Adobe blog! 


🚀 Python Optimization: Improve Code Performance

🎯 Introduction

Python is an incredibly powerful and easy-to-use programming language. However, it can be slow if not optimized properly! 😱 This guide will teach you how to turbocharge your code, making it faster, leaner, and more efficient. Buckle up, and let’s dive into some epic optimization hacks! 💡🔥

For more on Python basics, check out our Beginner’s Guide to Python Programming.

🏎 1. Choosing the Right Data Structures for Better Performance

Picking the right data structure is like choosing the right tool for a job—do it wrong, and you’ll be banging a nail with a screwdriver! 🚧

🏗 1.1 Lists vs. Tuples: Optimize Your Data Storage

  • Use tuples instead of lists when elements do not change (immutable data). Tuples have lower overhead and are lightning fast! ⚡
# List (mutable)
my_list = [1, 2, 3]
# Tuple (immutable, faster)
my_tuple = (1, 2, 3)

🛠 1.2 Use Sets and Dictionaries for Fast Lookups

  • Searching in a list is like searching for a lost sock in a messy room 🧦. On the other hand, searching in a set or dictionary is like Googling something! 🚀
# Slow list lookup (O(n))
numbers = [1, 2, 3, 4, 5]
print(3 in numbers)  # Yawn... Slow!

# Fast set lookup (O(1))
numbers_set = {1, 2, 3, 4, 5}
print(3 in numbers_set)  # Blink and you'll miss it! ⚡

🚀 1.3 Use Generators Instead of Lists for Memory Efficiency

  • Why store millions of values in memory when you can generate them on the fly? 😎
# Generator (better memory usage)
def squared_numbers(n):
    for i in range(n):
        yield i * i
squares = squared_numbers(1000000)  # No memory explosion! 💥

🔄 2. Loop Optimizations for Faster Python Code

⛔ 2.1 Avoid Repeated Computation in Loops to Enhance Performance

# Inefficient
for i in range(10000):
    result = expensive_function()  # Ugh! Repeating this is a performance killer 😩
    process(result)

# Optimized
cached_result = expensive_function()  # Call it once and chill 😎
for i in range(10000):
    process(cached_result)

💡 2.2 Use List Comprehensions Instead of Traditional Loops for Pythonic Code

  • Why write boring loops when you can be Pythonic? 🐍
# Traditional loop (meh...)
squares = []
for i in range(10):
    squares.append(i * i)

# Optimized list comprehension (so sleek! 😍)
squares = [i * i for i in range(10)]

🎭 3. String Optimization Techniques

🚀 3.1 Use join() Instead of String Concatenation for Better Performance

# Inefficient (Creates too many temporary strings 🤯)
words = ["Hello", "world", "Python"]
sentence = ""
for word in words:
    sentence += word + " "

# Optimized (Effortless and FAST 💨)
sentence = " ".join(words)

🏆 3.2 Use f-strings for String Formatting in Python (Python 3.6+)

name = "Alice"
age = 25

# Old formatting (Ew 🤢)
print("My name is {} and I am {} years old.".format(name, age))

# Optimized f-string (Sleek & stylish 😎)
print(f"My name is {name} and I am {age} years old.")

🔍 4. Profiling & Performance Analysis Tools

⏳ 4.1 Use timeit to Measure Execution Time

import timeit
print(timeit.timeit("sum(range(1000))", number=10000))  # How fast is your code? 🚀

🧐 4.2 Use cProfile for Detailed Performance Profiling

import cProfile
cProfile.run('my_function()')  # Find bottlenecks like a pro! 🔍

For more on profiling, see our Guide to Python Profiling Tools.

🧠 5. Memory Optimization Techniques

🔍 5.1 Use sys.getsizeof() to Check Memory Usage

import sys
my_list = [1, 2, 3, 4, 5]
print(sys.getsizeof(my_list))  # How big is that object? 🤔

🗑 5.2 Use del and gc.collect() to Manage Memory

import gc
large_object = [i for i in range(1000000)]
del large_object  # Say bye-bye to memory hog! 👋
gc.collect()  # Cleanup crew 🧹

⚡ 6. Parallel Processing & Multithreading

🏭 6.1 Use multiprocessing for CPU-Bound Tasks

from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":  # guard required on platforms that spawn processes (e.g., Windows)
    with Pool(4) as p:  # Use 4 CPU cores 🏎
        results = p.map(square, range(100))

🌐 6.2 Use Threading for I/O-Bound Tasks

import threading

def print_numbers():
    for i in range(10):
        print(i)

thread = threading.Thread(target=print_numbers)
thread.start()
thread.join()

For more on parallel processing, check out our Introduction to Python Multithreading.

🎉 Conclusion

Congratulations! 🎊 You’ve unlocked Python’s full potential by learning these killer optimization tricks. Now go forth and write blazing-fast, memory-efficient, and clean Python code. 🚀🐍

Got any favorite optimization hacks? Drop them in the comments! 💬🔥


Navigating the Landscape of Development Frameworks: A Guide for Aspiring Developers

Nine years ago, I was eager to become a developer but found no convincing platform. Luckily, the smartphone world was booming, and its extraordinary growth immediately caught my eye. This led to my career as an Android developer, where I had the opportunity to learn the nuances of building mobile applications. As time went on, I expanded into hybrid mobile app development, which allowed me to adapt smoothly to various platforms.

I also know the struggles of countless aspiring developers: the dilemma and uncertainty about which direction to head and which technology to pursue. The idea of writing this blog stemmed from my experiences and insights while making my own way through mobile app development. It is geared toward those beginning to learn this subject or adding to their current knowledge.

Web Development

  • Frontend Development: Focuses on building the user interface (UI) and user experience (UX) of applications.
    • Technologies:
      • HTML (HyperText Markup Language): The backbone of web pages, used to structure content with elements like headings, paragraphs, images, and links.
      • CSS (Cascading Style Sheets): Styles web pages by controlling layout, colors, fonts, and animations, making websites visually appealing and responsive.
      • JavaScript: A powerful programming language that adds interactivity to web pages, enabling dynamic content updates, event handling, and logic execution.
      • React: A JavaScript library developed by Facebook for building fast and scalable user interfaces using a component-based architecture.
      • Angular: A TypeScript-based front-end framework developed by Google that provides a complete solution for building complex, dynamic web applications.
      • Vue.js: A progressive JavaScript framework known for its simplicity and flexibility, allowing developers to build user interfaces and single-page applications efficiently.
    • Upskilling:
      • Learn the basics of HTML, CSS, and JavaScript (essential for any front-end developer).
      • Explore modern frameworks like React or Vue.js for building interactive UIs.
      • Practice building small projects like a portfolio website or a simple task manager.

Backend Development

  • Backend Development: Focuses on server-side logic, APIs, and database management.
    • Technologies:
      • Node.js: A JavaScript runtime that allows developers to build fast, scalable server-side applications using a non-blocking, event-driven architecture.
      • Python (Django, Flask): Python is a versatile programming language; Django is a high-level framework for rapid web development, while Flask is a lightweight framework offering flexibility and simplicity.
      • Java (Spring Boot): A Java-based framework that simplifies the development of enterprise-level applications with built-in tools for microservices, security, and database integration.
      • Ruby on Rails: A full-stack web application framework built with Ruby, known for its convention-over-configuration approach and rapid development capabilities.
    • Upskilling:
      • Learn the basics of backend languages like JavaScript (Node.js) or Python.
      • Understand APIs (REST and GraphQL).
      • Practice building CRUD applications and connecting them to databases like MySQL or MongoDB.

Mobile App Development

  • Native Development:
    • Android Development
      • Java: A widely used, object-oriented programming language known for its platform independence (Write Once, Run Anywhere) and strong ecosystem, making it popular for enterprise applications and Android development.
      • Kotlin: A modern, concise, and expressive programming language that runs on the JVM, is fully interoperable with Java, and is officially recommended by Google for Android app development due to its safety and productivity features.
    • iOS Development:
      • Swift: A modern, fast, and safe programming language developed by Apple for iOS, macOS, watchOS, and tvOS development. It offers clean syntax, performance optimizations, and strong safety features.
      • Objective-C: An older, dynamic programming language used for Apple app development before Swift. It is based on C with added object-oriented features but is now largely replaced by Swift for new projects.
    • Upskilling:
      • Learn Kotlin or Swift (modern, preferred languages for Android and iOS).
      • Use platform-specific tools: Android Studio (Android) or Xcode (iOS).
      • Start small, like creating a to-do list app or weather app.
  • Cross-Platform Development:
    • Technologies:
      • React Native: A JavaScript framework developed by Meta for building cross-platform mobile applications using a single codebase. It leverages React and native components to provide a near-native experience.
      • Flutter: A UI toolkit by Google that uses the Dart language to build natively compiled applications for mobile, web, and desktop from a single codebase, offering high performance and a rich set of pre-designed widgets.
    • Upskilling:
      • Learn JavaScript and React (for React Native) or Dart (for Flutter).
      • Practice by rebuilding a small native app with a cross-platform framework and comparing the results.

Game Development

  • Technologies:
    • Unity (C#): A popular game engine known for its versatility and ease of use, supporting 2D and 3D game development across multiple platforms. It uses C# for scripting and is widely used for indie and AAA games.
    • Unreal Engine (C++): A high-performance game engine developed by Epic Games, known for its stunning graphics and powerful features. It primarily uses C++ and Blueprints for scripting, making it ideal for AAA game development.
    • Godot: An open-source game engine with a lightweight footprint and built-in scripting language (GDScript), along with support for C# and C++. It is beginner-friendly and widely used for 2D and 3D game development.
  • Upskilling:
    • Learn a game engine (Unity is beginner-friendly and widely used).
    • Explore C# (for Unity) or C++ (for Unreal Engine).
    • Practice by creating simple 2D games, then progress to 3D.

Data Science and Machine Learning

  • Technologies:
    • Python (NumPy, Pandas, Scikit-learn): Python is widely used in data science and machine learning, with NumPy for numerical computing, Pandas for data manipulation, and Scikit-learn for machine learning algorithms.
    • R: A statistical programming language designed for data analysis, visualization, and machine learning. It is heavily used in academic and research fields.
    • TensorFlow: An open-source machine learning framework developed by Google, known for its scalability and deep learning capabilities, supporting both CPUs and GPUs.
    • PyTorch: A deep learning framework developed by Facebook, favored for its dynamic computation graph, ease of debugging, and strong research community support.
  • Upskilling:
    • Learn Python and libraries like NumPy, Pandas, and Matplotlib.
    • Explore machine learning concepts and algorithms using Scikit-learn or TensorFlow.
    • Start with data analysis projects or simple ML models.

DevOps and Cloud Development

  • Technologies:
    • Docker: A containerization platform that allows developers to package applications with dependencies, ensuring consistency across different environments.
    • Kubernetes: An open-source container orchestration system that automates the deployment, scaling, and management of containerized applications.
    • AWS, Azure, Google Cloud: Leading cloud platforms offering computing, storage, databases, and AI/ML services, enabling scalable and reliable application hosting.
    • CI/CD tools: Continuous Integration and Continuous Deployment tools (like Jenkins, GitHub Actions, and GitLab CI) automate testing, building, and deployment processes for faster and more reliable software releases.
  • Upskilling:
    • Learn about containerization (Docker) and orchestration (Kubernetes).
    • Understand cloud platforms like AWS and their core services (EC2, S3, Lambda).
    • Practice setting up CI/CD pipelines with tools like Jenkins or GitHub Actions.

Embedded Systems and IoT Development

  • Technologies:
    • C, C++: Low-level programming languages known for their efficiency and performance, widely used in system programming, game development, and embedded systems.
    • Python: A versatile, high-level programming language known for its simplicity and readability, used in web development, automation, AI, and scientific computing.
    • Arduino: An open-source electronics platform with easy-to-use hardware and software, commonly used for building IoT and embedded systems projects.
    • Raspberry Pi: A small, affordable computer that runs Linux and supports various programming languages, often used for DIY projects, robotics, and education.
  • Upskilling:
    • Learn C/C++ for low-level programming.
    • Experiment with hardware like Arduino or Raspberry Pi.
    • Build projects like smart home systems or sensors.

How to Get Started and Transition Smoothly

  1. Assess Your Interests:
    • Do you prefer visual work (Frontend, Mobile), problem-solving (Backend, Data Science), or system-level programming (IoT, Embedded Systems)?
  2. Leverage Your QA Experience:
    • Highlight skills like testing, debugging, and attention to detail when transitioning to development roles.
    • Learn Test-Driven Development (TDD) and how to write unit and integration tests.
  3. Build Projects:
    • Start with small, practical projects and showcase them on GitHub.
    • Examples: A weather app, an e-commerce backend, or a simple game.
  4. Online Platforms for Learning:
    • FreeCodeCamp: For web development.
    • Udemy and Coursera: Wide range of development courses.
    • HackerRank or LeetCode: For coding practice.
  5. Network and Apply:
    • Contribute to open-source projects.
    • Build connections in developer communities like GitHub, Reddit, or LinkedIn.

Choosing the right development framework depends on your interests, career goals, and project requirements. If you enjoy building interactive user experiences, Web Development with React, Angular, or Vue.js could be your path. If you prefer handling server-side logic, Backend Development with Node.js, Python, or Java might be ideal. Those fascinated by mobile applications can explore Native (Kotlin, Swift) or Cross-Platform (React Native, Flutter) Development.

For those drawn to game development, Unity and Unreal Engine provide powerful tools, while Data Science & Machine Learning enthusiasts can leverage Python and frameworks like TensorFlow and PyTorch. If you’re passionate about infrastructure and automation, DevOps & Cloud Development with Docker, Kubernetes, and AWS is a strong choice. Meanwhile, Embedded Systems & IoT Development appeals to those interested in hardware-software integration using Arduino, Raspberry Pi, and C/C++.

Pros and Cons of Different Development Paths

Path | Pros | Cons
Web Development | High demand, fast-paced, large community | Frequent technology changes
Backend Development | Scalable applications, strong job market | Can be complex, requires database expertise
Mobile Development | Booming industry, native vs. cross-platform options | Requires platform-specific knowledge
Game Development | Creative field, engaging projects | Competitive market, longer development cycles
Data Science & ML | High-paying field, innovative applications | Requires strong math and programming skills
DevOps & Cloud | Essential for modern development, automation focus | Can be complex, requires networking knowledge
Embedded Systems & IoT | Hardware integration, real-world applications | Limited to specialized domains

Final Recommendations

  1. If you’re just starting, pick a general-purpose language like JavaScript or Python and build small projects.
  2. If you have a specific goal, choose a framework aligned with your interest (e.g., React for frontend, Node.js for backend, Flutter for cross-platform).
  3. For career growth, explore in-demand technologies like DevOps, AI/ML, or cloud platforms.
  4. Keep learning and practicing—build projects, contribute to open-source, and stay updated with industry trends.

No matter which path you choose, the key is continuous learning and hands-on experience. Stay curious, build projects, and embrace challenges on your journey to becoming a skilled developer. Check out Developer Roadmaps for further insights and guidance. 🚀 Happy coding!

Automate Release Notes to Confluence with Bitbucket Pipelines https://blogs.perficient.com/2025/02/13/automate-release-notes-to-confluence-with-bitbucket-pipelines/ https://blogs.perficient.com/2025/02/13/automate-release-notes-to-confluence-with-bitbucket-pipelines/#respond Fri, 14 Feb 2025 05:44:45 +0000 https://blogs.perficient.com/?p=376360

In this blog post, I will share my journey of implementing an automated solution that publishes release notes for service deployments to Confluence using Bitbucket Pipelines. The goal was to streamline our release process and ensure all relevant information was easily accessible to our team. By leveraging Bitbucket and Confluence, we achieved a seamless integration that enhanced our workflow.

Step 1: Setting Up the Pipeline

We configured our Bitbucket pipeline to include a new step for publishing release notes. This involved writing a script in the bitbucket-pipelines.yml file to gather the necessary information (commit SHA, build number, and a summary of updates).

Step 2: Generating Release Notes

We pulled the summary of updates from our commit messages and release notes. To ensure the quality of the summaries, we emphasized the importance of writing detailed and informative commit messages.

Step 3: Publishing to Confluence

Using the Confluence Cloud REST API, we automated the creation of Confluence pages. We created a parent page titled “Releases” and configured the script to publish each new release notes page beneath it.

Repository Variables

We used several repository variables to keep sensitive information secure and make the script more maintainable:

  • REPO_TOKEN: The token used to authenticate with the Bitbucket API.
  • CONFLUENCE_USERNAME: The username for Confluence authentication.
  • CONFLUENCE_TOKEN: The token for Confluence authentication.
  • CONFLUENCE_SPACE_KEY: The key to the Confluence space where the release notes are published.
  • CONFLUENCE_ANCESTOR_ID: The ID of the parent page under which new release notes pages are created.
  • CONFLUENCE_API_URL: The URL of the Confluence API endpoint.

(Screenshot: repository variables configured in Bitbucket.)

Script Details

Here is the script we used in our bitbucket-pipelines.yml file, along with an explanation of each part:

Step 1: Define the Pipeline Step

- step: &release-notes
      name: Publish Release Notes
      image: atlassian/default-image:3
  • Step Name: The step is named “Publish Release Notes”.
  • Docker Image: Uses the atlassian/default-image:3 Docker image for the environment.
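
Note that &release-notes is a YAML anchor: it only defines the step, it does not schedule it. The step still has to be referenced from a pipelines section via its alias. A minimal sketch, assuming a main-branch pipeline (the branch name and placement are illustrative):

pipelines:
  branches:
    main:
      - step: *release-notes   # reuses the step defined above via its YAML alias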

Step 2: List Files

script:
  - ls -la /src/main/resources/
  • List Files: The ls -la command lists the files in the specified directory to ensure the necessary files are present.

Step 3: Extract Release Number

- RELEASE_NUMBER=$(grep '{application_name}.version' /src/main/resources/application.properties | cut -d'=' -f2)
  • Extract Release Number: The grep command extracts the release number from the application.properties file where the property {application_name}.version should be present.
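
For context, the property being extracted might look like this in application.properties (the property name and version below are purely illustrative):

myservice.version=1.4.2

With such a line, the grep/cut pipeline above prints 1.4.2, which becomes the release number.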

Step 4: Create Release Title

- RELEASE_TITLE="Release - $RELEASE_NUMBER Build- $BITBUCKET_BUILD_NUMBER Commit- $BITBUCKET_COMMIT"
  • Create Release Title: Construct the release title using the release number, Bitbucket build number, and commit SHA.

Step 5: Get Commit Message

- COMMIT_MESSAGE=$(git log --format=%B -n 1 ${BITBUCKET_COMMIT})
  • Get Commit Message: The git log command retrieves the commit message for the current commit.

Step 6: Check for Pull Request

- |
  if [[ $COMMIT_MESSAGE =~ pull\ request\ #([0-9]+) ]]; then
    PR_NUMBER=$(echo "$COMMIT_MESSAGE" | grep -o -E 'pull\ request\ \#([0-9]+)' | sed 's/[^0-9]*//g')
  • Check for Pull Request: The script checks if the commit message contains a pull request number.
  • Extract PR Number: If a pull request number is found, it is extracted using grep and sed.

Step 7: Fetch Pull Request Description

RAW_RESPONSE=$(wget --no-hsts -qO- --header="Authorization: Bearer $REPO_TOKEN" "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pullrequests/${PR_NUMBER}")
PR_DESCRIPTION=$(echo "$RAW_RESPONSE" | jq -r '.description')
echo "$PR_DESCRIPTION" > description.txt
  • Fetch PR Description: Uses wget to fetch the pull request description from the Bitbucket API.
  • Parse Description: Parses the description using jq and saves it to description.txt.

Step 8: Prepare JSON Data

 AUTH_HEADER=$(echo -n "$CONFLUENCE_USERNAME:$CONFLUENCE_TOKEN" | base64 | tr -d '\n')
 JSON_DATA=$(jq -n --arg title "$RELEASE_TITLE" \
                    --arg type "page" \
                    --arg space_key "$CONFLUENCE_SPACE_KEY" \
                    --arg ancestor_id "$CONFLUENCE_ANCESTOR_ID" \
                    --rawfile pr_description description.txt \
                    '{
                      title: $title,
                      type: $type,
                      space: {
                        key: $space_key
                      },
                      ancestors: [{
                        id: ($ancestor_id | tonumber)
                      }],
                      body: {
                        storage: {
                          value: $pr_description,
                          representation: "storage"
                        }
                      }
                    }')
  echo "$JSON_DATA" > json_data.txt
  • Prepare Auth Header: Encodes the Confluence username and token for authentication.
  • Construct JSON Payload: Uses jq to construct the JSON payload for the Confluence API request.
  • Save JSON Data: Saves the JSON payload to json_data.txt.

Step 9: Publish to Confluence

  wget --no-hsts --method=POST --header="Content-Type: application/json" \
      --header="Authorization: Basic $AUTH_HEADER" \
      --body-file="json_data.txt" \
      "$CONFLUENCE_API_URL" -q -O -
  if [[ $? -ne 0 ]]; then
    echo "HTTP request failed"
    exit 1
  fi
  • Send POST Request: Uses wget to send a POST request to the Confluence API to create or update the release notes page.
  • Error Handling: Checks if the HTTP request failed and exits with an error message if it did.

Complete Script

# Service for publishing release notes
- step: &release-notes
      name: Publish Release Notes
      image: atlassian/default-image:3
      script:
        - ls -la /src/main/resources/
        - RELEASE_NUMBER=$(grep '{application_name}.version' /src/main/resources/application.properties | cut -d'=' -f2)
        - RELEASE_TITLE="Release - $RELEASE_NUMBER Build- $BITBUCKET_BUILD_NUMBER Commit- $BITBUCKET_COMMIT"
        - COMMIT_MESSAGE=$(git log --format=%B -n 1 ${BITBUCKET_COMMIT})
        - |
          if [[ $COMMIT_MESSAGE =~ pull\ request\ #([0-9]+) ]]; then
            PR_NUMBER=$(echo "$COMMIT_MESSAGE" | grep -o -E 'pull\ request\ \#([0-9]+)' | sed 's/[^0-9]*//g')
            RAW_RESPONSE=$(wget --no-hsts -qO- --header="Authorization: Bearer $REPO_TOKEN" "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pullrequests/${PR_NUMBER}")
            PR_DESCRIPTION=$(echo "$RAW_RESPONSE" | jq -r '.description')
            echo "$PR_DESCRIPTION" > description.txt
            AUTH_HEADER=$(echo -n "$CONFLUENCE_USERNAME:$CONFLUENCE_TOKEN" | base64 | tr -d '\n')
            JSON_DATA=$(jq -n --arg title "$RELEASE_TITLE" \
                              --arg type "page" \
                              --arg space_key "$CONFLUENCE_SPACE_KEY" \
                              --arg ancestor_id "$CONFLUENCE_ANCESTOR_ID" \
                              --rawfile pr_description description.txt \
                              '{
                                title: $title,
                                type: $type,
                                space: {
                                  key: $space_key
                                },
                                ancestors: [{
                                  id: ($ancestor_id | tonumber)
                                }],
                                body: {
                                  storage: {
                                    value: $pr_description,
                                    representation: "storage"
                                  }
                                }
                              }')
            echo "$JSON_DATA" > json_data.txt
            wget --no-hsts --method=POST --header="Content-Type: application/json" \
              --header="Authorization: Basic $AUTH_HEADER" \
              --body-file="json_data.txt" \
              "$CONFLUENCE_API_URL" -q -O -
            if [[ $? -ne 0 ]]; then
              echo "HTTP request failed"
              exit 1
            fi
          fi

(Screenshot: the generated release notes page in Confluence.)
Outcomes and Benefits

  • The automation significantly reduced the manual effort required to publish release notes.
  • The project improved our overall release process efficiency and documentation quality.

Conclusion

Automating the publication of release notes to Confluence using Bitbucket Pipelines has been a game-changer for our team. It has streamlined our release process and ensured all relevant information is readily available. I hope this blog post provides insights and inspiration for others looking to implement similar solutions.

How to Implement Spring Expression Language (SpEL) Validator in Spring Boot: A Step-by-Step Guide https://blogs.perficient.com/2025/02/12/how-to-implement-spring-expression-language-spel-validator-in-spring-boot-a-step-by-step-guide/ https://blogs.perficient.com/2025/02/12/how-to-implement-spring-expression-language-spel-validator-in-spring-boot-a-step-by-step-guide/#respond Wed, 12 Feb 2025 07:07:48 +0000 https://blogs.perficient.com/?p=376468

In this blog post, I will guide you through the process of implementing a Spring Expression Language (SpEL) validator in a Spring Boot application. SpEL is a powerful expression language that supports querying and manipulating an object graph at runtime. By the end of this tutorial, you will have a working example of using SpEL for validation in your Spring Boot application.
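
As a quick primer, here is a minimal, standalone sketch of how SpEL parsing and evaluation work (the class name and expression are illustrative). parseExpression checks the syntax and throws a parse exception for malformed input, which is exactly what the validator built below relies on; getValue actually evaluates the expression:

import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class SpelQuickDemo {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();
        // Parsing validates the syntax; evaluation happens on getValue()
        Expression exp = parser.parseExpression("'Hello ' + 'SpEL'");
        System.out.println(exp.getValue()); // prints: Hello SpEL
    }
}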

Project Structure

(Screenshot: the project structure of the demo application.)

Step 1: Set Up Your Spring Boot Project

First things first, let’s set up your Spring Boot project. Head over to Spring Initializr and create a new project with the following dependencies:

  • Spring Boot Starter Web
  • Thymeleaf (for the form interface)

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <version>3.4.2</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
        <version>3.4.2</version>
    </dependency>
</dependencies>

Step 2: Create the Main Application Class

Next, we will create the main application class to bootstrap our Spring Boot application.

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

Step 3: Create a Model Class

Create a SpelExpression class to hold the user input.

package com.example.demo.model;

public class SpelExpression {
    private String expression;

    // Getters and Setters
    public String getExpression() {
        return expression;
    }

    public void setExpression(String expression) {
        this.expression = expression;
    }
}


Step 4: Create a Controller

Create a controller to handle user input and validate the SpEL expression.

package com.example.demo.controller;

import com.example.demo.model.SpelExpression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.SpelParseException;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.PostMapping;

@Controller
public class SpelController {

    private final ExpressionParser parser = new SpelExpressionParser();

    @GetMapping("/spelForm")
    public String showForm(Model model) {
        model.addAttribute("spelExpression", new SpelExpression());
        return "spelForm";
    }

    @PostMapping("/validateSpel")
    public String validateSpel(@ModelAttribute SpelExpression spelExpression, Model model) {
        try {
            parser.parseExpression(spelExpression.getExpression());
            model.addAttribute("message", "The expression is valid.");
        } catch (SpelParseException e) {
            model.addAttribute("message", "Invalid expression: " + e.getMessage());
        }
        return "result";
    }
}

Step 5: Create Thymeleaf Templates

Create Thymeleaf templates for the form and the result page.

spelForm.html

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>SpEL Form</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            background-color: #f4f4f9;
            color: #333;
            margin: 0;
            padding: 0;
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
        }
        .container {
            background-color: #fff;
            padding: 20px;
            border-radius: 8px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
            text-align: center;
        }
        h1 {
            color: #4CAF50;
        }
        form {
            margin-top: 20px;
        }
        label {
            display: block;
            margin-bottom: 8px;
            font-weight: bold;
        }
        input[type="text"] {
            width: 100%;
            padding: 8px;
            margin-bottom: 20px;
            border: 1px solid #ccc;
            border-radius: 4px;
        }
        button {
            padding: 10px 20px;
            background-color: #4CAF50;
            color: #fff;
            border: none;
            border-radius: 4px;
            cursor: pointer;
        }
        button:hover {
            background-color: #45a049;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>SpEL Expression Validator</h1>
        <form th:action="@{/validateSpel}" th:object="${spelExpression}" method="post">
            <div>
                <label>Expression:</label>
                <input type="text" th:field="*{expression}" />
            </div>
            <div>
                <button type="submit">Validate</button>
            </div>
        </form>
    </div>
</body>
</html>

result.html

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>Validation Result</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            background-color: #f4f4f9;
            color: #333;
            margin: 0;
            padding: 0;
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
        }
        .container {
            background-color: #fff;
            padding: 20px;
            border-radius: 8px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
            text-align: center;
        }
        h1 {
            color: #4CAF50;
        }
        p {
            font-size: 18px;
        }
        a {
            display: inline-block;
            margin-top: 20px;
            padding: 10px 20px;
            background-color: #4CAF50;
            color: #fff;
            text-decoration: none;
            border-radius: 4px;
        }
        a:hover {
            background-color: #45a049;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>Validation Result</h1>
        <p th:text="${message}"></p>
        <a href="/spelForm">Back to Form</a>
    </div>
</body>
</html>

Step 6: Run the Application

Now, it’s time to run your Spring Boot application. To test the SpEL validator, navigate to http://localhost:8080/spelForm in your browser.
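
A few example inputs to try (illustrative):

2 * (3 + 4)      -- parses cleanly, so the validator reports it as valid
'Hello ' + name  -- valid syntax; evaluating it would need a context variable, but parsing succeeds
2 * (3 +         -- malformed, so parsing throws a SpelParseException and the error message is shown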

For Valid Expression

(Screenshots: the form with a valid expression, and the result page reporting “The expression is valid.”)

For Invalid Expression

(Screenshots: the form with an invalid expression, and the result page showing the parse error message.)

Conclusion

By following this guide, you have successfully implemented a SpEL validator in your Spring Boot application. This powerful feature enhances your application’s flexibility and robustness. Keep exploring SpEL for more dynamic and sophisticated solutions. Happy coding!

Apex Security Best Practices for Salesforce Applications https://blogs.perficient.com/2025/02/02/apex-security-practices-building-secure-salesforce-applications/ https://blogs.perficient.com/2025/02/02/apex-security-practices-building-secure-salesforce-applications/#respond Mon, 03 Feb 2025 05:51:18 +0000 https://blogs.perficient.com/?p=373874

As businesses increasingly rely on Salesforce to manage their critical data, ensuring data security has become more important than ever. Apex, Salesforce’s proprietary programming language, runs in system mode by default, bypassing object- and field-level security. To protect sensitive data, developers need to enforce strict security measures.

This blog will explore Apex security best practices, including enforcing sharing rules, field-level permissions, and user access enforcement to protect your Salesforce data.

Why Apex Security is Critical for Your Salesforce Applications

Apex’s ability to bypass security settings puts the onus on developers to implement proper Salesforce security practices. Without these protections, your Salesforce application might unintentionally expose sensitive data to unauthorized users.

By following best practices such as enforcing sharing rules, validating inputs, and using security-enforced SOQL queries, you can significantly reduce the risk of data breaches and ensure your app adheres to the platform’s security standards.

Enforcing Sharing Rules in Apex to Maintain Data Security

Sharing rules are central to controlling data access in Salesforce. Apex doesn’t automatically respect these sharing rules unless explicitly instructed to do so. Here’s how to enforce them in your Apex code:

Using with sharing in Apex Classes

  • with sharing: Ensures the current user’s sharing settings are enforced, preventing unauthorized access to records.
  • without sharing: Ignores sharing rules and is often used for administrative tasks or system-level operations where access should not be restricted.
  • inherited sharing: Inherits sharing settings from the calling class.

Best Practice: Always use with sharing unless you explicitly need to override sharing rules for specific use cases. This ensures your code complies with Salesforce security standards.

Example

public with sharing class AccountHandlerWithSharing {
    public void fetchAccounts() {
        // Sharing settings are respected: only records visible to the current user are returned
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}
public without sharing class AccountHandlerWithoutSharing {
    public void fetchAccounts() {
        // Sharing settings are ignored: all records are returned
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}

Enforcing Object and Field-Level Permissions in Apex

Apex operates in a system context by default, bypassing object- and field-level security. You must manually enforce these security measures to ensure your code respects user access rights.

Using WITH SECURITY_ENFORCED in SOQL Queries

The WITH SECURITY_ENFORCED clause makes Salesforce check object- and field-level permissions on the query. If the running user lacks access to any object or field the query references, Salesforce throws an exception rather than silently returning the data.

Example

List<Account> accounts = [
    SELECT Id, Name
    FROM Account
    WHERE Industry = 'Technology'
    WITH SECURITY_ENFORCED
];

This approach guarantees the query succeeds only when the current user has access to every field and object it references; otherwise it fails fast with a System.QueryException.

Using the stripInaccessible Method to Filter Inaccessible Data

Salesforce provides the stripInaccessible method, which removes inaccessible fields or relationships from query results. It also helps prevent runtime errors by ensuring no inaccessible fields are used in DML operations.

Example

Account acc = [SELECT Id, Name FROM Account LIMIT 1];
// stripInaccessible operates on a list of records and returns an SObjectAccessDecision
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.READABLE, new List<Account>{ acc });
Account sanitizedAcc = (Account) decision.getRecords()[0];

Using stripInaccessible ensures that any fields or relationships the user cannot access are stripped out of the Account record before any further processing.

Apex Managed Sharing: Programmatically Share Records

Apex Managed Sharing can be a powerful tool when you need to manage record access dynamically. This feature allows developers to programmatically share records with specific users or groups.

Example

public void shareRecord(Id recordId, Id userId) {
    CustomObject__Share share = new CustomObject__Share();
    share.ParentId = recordId;
    share.UserOrGroupId = userId;
    share.AccessLevel = 'Edit'; // Options: 'Read', 'Edit', or 'All'
    insert share;
}

This code lets you share a custom object record with a specific user and grant them Edit access. Apex Managed Sharing allows more flexible, dynamic record-sharing controls.

Security Tips for Apex and Lightning Development

Here are some critical tips for improving security in your Apex and Lightning applications:

Avoid Hardcoding IDs

Hardcoding Salesforce IDs, such as record IDs or profile IDs, can introduce security vulnerabilities and reduce code flexibility. Retrieve IDs dynamically instead, and consider using Custom Settings or Custom Metadata for more flexible and secure configurations (see the sketch below).
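
As a sketch, most IDs can be resolved at runtime. The object and record type developer name below are illustrative:

// Resolve a Record Type Id dynamically instead of hardcoding it
Id partnerRtId = Schema.SObjectType.Account
    .getRecordTypeInfosByDeveloperName()
    .get('Partner_Account')            // illustrative developer name
    .getRecordTypeId();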

Validate User Inputs to Prevent Security Threats

It is essential to sanitize all user inputs to prevent threats like SOQL injection and Cross-Site Scripting (XSS). Always use parameterized queries and escape characters where necessary.
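
A minimal sketch of both techniques; userInput stands in for untrusted data:

String userInput = 'Acme'; // stand-in for untrusted user input

// Preferred: a bind variable in static SOQL is never parsed as query syntax
List<Account> safeStatic = [SELECT Id, Name FROM Account WHERE Name = :userInput];

// If dynamic SOQL is unavoidable, escape the input first
String escaped = String.escapeSingleQuotes(userInput);
List<Account> safeDynamic = Database.query(
    'SELECT Id, Name FROM Account WHERE Name = \'' + escaped + '\''
);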

Use stripInaccessible in DML Operations

To prevent processing inaccessible fields, always use the stripInaccessible method when handling records containing fields restricted by user permissions.
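
A minimal sketch of the pre-DML check, using AccessType.CREATABLE (the field values are illustrative):

List<Account> incoming = new List<Account>{ new Account(Name = 'Acme') };
// Strip any fields the running user cannot create, then insert what remains
SObjectAccessDecision decision = Security.stripInaccessible(AccessType.CREATABLE, incoming);
insert decision.getRecords();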

Review Sharing Contexts to Ensure Data Security

Ensure you use the correct sharing context for each class or trigger. Avoid granting unnecessary access by using with sharing for most of your classes.

Write Test Methods to Simulate User Permissions

Writing tests that simulate various user roles using System.runAs() is crucial to ensure your code respects sharing rules, field-level permissions, and other security settings.
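
A sketch of such a test, assuming a profile named “Standard User” exists in the org; the user details and assertion are illustrative. Note that System.runAs enforces record sharing but not CRUD or field-level security:

@isTest
private class SharingEnforcementTest {
    @isTest
    static void restrictedUserSeesOnlySharedRecords() {
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
        User u = new User(
            Alias = 'tuser', Email = 'tuser@example.com', LastName = 'Test',
            EmailEncodingKey = 'UTF-8', LanguageLocaleKey = 'en_US',
            LocaleSidKey = 'en_US', TimeZoneSidKey = 'America/Los_Angeles',
            ProfileId = p.Id,
            Username = 'tuser' + DateTime.now().getTime() + '@example.com'
        );
        System.runAs(u) {
            // Queries here run with u's record sharing applied
            List<Account> visible = [SELECT Id FROM Account];
            System.assertEquals(0, visible.size(), 'No records are shared with the test user');
        }
    }
}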

Conclusion: Enhancing Salesforce Security with Apex

Implementing Apex security best practices is essential to protect your Salesforce data. Whether you are enforcing sharing rules, respecting field-level permissions, or programmatically managing record sharing, these practices help ensure that only authorized users can access sensitive data.

When building your Salesforce applications, always prioritize security by:

  • Using with sharing where possible.
  • Implementing security-enforced queries.
  • Using tools like stripInaccessible to filter out inaccessible fields.

By adhering to these practices, you can build secure Salesforce applications that meet business requirements and ensure data integrity and compliance.

