Before mastering JavaScript, it’s crucial to understand how it thinks behind the scenes. Concepts like scope, hoisting, and the temporal dead zone might sound tricky at first, but they form the backbone of how your code behaves.
In this blog, we’ll break down these key ideas in the simplest way possible—so you can write cleaner code, avoid unexpected bugs, and truly understand what’s happening when your script runs.
Think of scope like a boundary or container that controls where you can use a variable in your code.
In JavaScript, a variable is only available in the part of the code where it was created. If you try to use it outside that area, it won’t work—that’s because of scope.
This helps in two big ways: it keeps variables in different parts of your code from clashing with each other, and it frees up memory once a variable's scope finishes.
JavaScript mainly uses two types of scope:
1. Global Scope – Available everywhere in your code.
2. Local Scope – Available only inside a specific function or block.
Global Scope
When you start writing JavaScript code, you’re already in the global scope—this is like the outermost area of your code where variables can live.
If you create a variable outside of any function or block, it’s considered global, which means it can be used anywhere in your code.
var a = 5; // Global variable

function add() {
  return a + 10; // Using the global variable inside a function
}

console.log(window.a); // 5 (in browsers, var globals become properties of window)
In this example, a is declared outside of any function, so it’s globally available—even inside add().
A quick note:
let name = "xyz";

function changeName() {
  name = "abc"; // Changing the value of the global variable
}

changeName();
console.log(name); // abc
In this example, we didn’t create a new variable—we just changed the value of the existing one.
Important:
If you redeclare a global variable inside a function (using let, const, or var again), JavaScript treats it as a new variable in a new scope—not the same one. We’ll cover that in more detail later.
Local Scope
In JavaScript, local scope means a variable is only accessible in a certain part of the code—usually inside a function or a block.
There are two types of local scope:
1. Function Scope
Whenever you create a function, it creates its own private area for variables. If you declare a variable inside a function, it only exists inside that function.
let firstName = "Shilpa"; // Global

function changeName() {
  let lastName = "Syal"; // Local to this function
  console.log(`${firstName} ${lastName}`);
}

changeName();
console.log(lastName); // ReferenceError! Not available outside the function
You can even use the same variable name in different functions without any issue:
function mathMarks() {
  let marks = 80;
  console.log(marks);
}

function englishMarks() {
  let marks = 85;
  console.log(marks);
}
Here, both marks variables are separate because they live in different function scopes.
2. Block Scope
Thanks to let and const, you can now create variables that only exist inside a block (like an if, for, or {}).
function getMarks() {
  let marks = 60;
  if (marks > 50) {
    const points = 10;
    console.log(marks + points); // Works here
  }
  console.log(points); // Uncaught ReferenceError: points is not defined
}
Because points is declared inside the if block (with const), it is not accessible outside that block, as shown above. Now try the example above with the var keyword instead, i.e., declare points with var, and spot the difference.
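Here is a sketch of that experiment (the function name getMarksWithVar is illustrative): because var is function-scoped rather than block-scoped, points is still visible after the if block ends.

```javascript
function getMarksWithVar() {
  let marks = 60;
  if (marks > 50) {
    var points = 10; // var is hoisted to the function scope, not the block
  }
  return points; // still visible here; with let or const this line would throw
}

console.log(getMarksWithVar()); // 10
```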
LEXICAL SCOPING & NESTED SCOPE:
When you create a function (outer function) that contains another function (inner function), then the inner function has access to the outer function’s variables and methods. This is known as Lexical Scoping.
function outerFunction() {
  let outerVar = "I'm outside";

  function innerFunction() {
    console.log(outerVar); // Can access outerVar
  }

  innerFunction();
}
In other terms, variables & methods defined in parent function are automatically available to its child functions. But it doesn’t work the other way around—the outer function can’t access the inner function’s variables.
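A small sketch of that one-way relationship (the function names are illustrative): the parent cannot reach a variable declared inside its child.

```javascript
function outerFunction() {
  function innerFunction() {
    let innerVar = "I'm inside"; // visible only within innerFunction
  }
  innerFunction();
  return innerVar; // ReferenceError: the outer function can't see innerVar
}

try {
  outerFunction();
} catch (e) {
  console.log(e.name); // "ReferenceError"
}
```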
VARIABLE SHADOWING:
You can declare variables with the same name in different scopes. If there is a variable in the global scope and you create a variable with the same name inside a function, you will not get an error. In this case, the local variable takes priority over the global one. This is known as variable shadowing: the inner-scope variable temporarily shadows the outer-scope variable with the same name.
If a local variable and a global variable share the same name, changing the value of one does not affect the value of the other.
let name = "xyz";

function getName() {
  let name = "abc"; // Redeclaring the name variable in a new scope
  console.log(name); // abc
}

getName();
console.log(name); // xyz
To access a variable, the JS engine first looks in the scope that is currently executing. If the variable isn't found there, it looks in the closest parent scope, and that lookup continues up the chain until the engine reaches the global scope. If the global scope doesn't have the variable either, it throws a ReferenceError, because the variable doesn't exist anywhere in the scope chain.
let bonus = 500;

function getSalary() {
  if (true) {
    return 10000 + bonus; // Looks up and finds bonus in the outer scope
  }
}

console.log(getSalary()); // 10500
Key Takeaways: Scoping Made Simple
Global Scope: Variables declared outside any function are global and can be used anywhere in your code.
Local Scope: Variables declared inside a function exist only inside that function and disappear once the function finishes.
Global Variables Last Longer: They stay alive as long as your program is running.
Local Variables Are Temporary: They’re created when the function runs and removed once it ends.
Lexical Scope: Inner functions can access variables from outer functions, but not the other way around.
Block Scope with let and const: You can create variables that exist only inside {} blocks like if, for, etc.
Same Name, No Clash: Variables with the same name in different scopes won’t affect each other—they live in separate “worlds.”
To understand Hoisting in JS, it’s essential to know how execution context works. Execution context is an environment where JavaScript code is executed.
It has two main phases:
1. Creation Phase: During this phase, JS allocates memory for (i.e., hoists) variables, functions, and objects. Hoisting happens here.
2. Execution Phase: During this phase, code is executed line by line.
- When JS code runs, JavaScript hoists all variables and functions: it assigns memory space for variables and gives them the special value undefined, while function declarations are hoisted with their entire definition.
Let's explore some examples to illustrate how hoisting works in different scenarios:
1. Function Declarations – hoisted along with their entire body, so they can be called before they are defined:

foo(); // Output: "Hello, world!"

function foo() {
  console.log("Hello, world!");
}
2. Var – variables declared with var are hoisted and initialized with undefined:

console.log(x); // Output: undefined
var x = 5;
This code seems straightforward, but it’s interpreted as:
var x;
console.log(x); // Output: undefined
x = 5;
3. Let, Const – Variables declared with let and const are hoisted (into block or script scope) but are not initialized; they stay in the Temporal Dead Zone (TDZ) until their declaration is reached. Accessing them inside the TDZ results in a ReferenceError.
console.log(x); // Throws ReferenceError: Cannot access 'x' before initialization
let x = 5;
In JavaScript, all variable declarations—whether made using var, let, or const—are hoisted, meaning the memory for them is set aside during the compilation phase, before the code actually runs. However, the behaviour of hoisting differs based on how the variable is declared.
For variables declared with let and const, although they are hoisted, they are not initialized immediately like var variables. Instead, they remain in an uninitialized state and are placed in a separate memory space. During this phase, any attempt to access them will result in a Reference Error.
This period—from the start of the block until the variable is initialized—is known as the Temporal Dead Zone (TDZ). It’s called a “dead zone” because the variable exists in memory but cannot be accessed until it has been explicitly declared and assigned a value in the code.
console.log(b); // undefined: var b is hoisted and initialized with undefined
console.log(a); // ReferenceError: Cannot access 'a' before initialization (a is in the TDZ)
console.log(x); // ReferenceError: x is not defined (x is never declared at all)

let a = 10;
var b = 100;
Important: The Temporal Dead Zone helps prevent the use of variables before they are properly declared and initialized, making code more predictable and reducing bugs.
Conclusion
JavaScript hoisting and scoping are foundational concepts that can feel tricky at first, but once you understand them, they make your code more structured and predictable. Hoisting helps explain why some variables and functions work even before they’re declared, while scoping defines where your variables live and how accessible they are. By keeping these concepts in mind and practicing regularly, you’ll be able to write cleaner, more reliable JavaScript. The more you experiment with them, the more confident you’ll become as a developer. Keep learning, keep building, and everything will start to click. Happy coding!
Power Fx is a low-code language for expressing logic across the Microsoft Power Platform. It's a general-purpose, strongly typed, declarative, and functional programming language described in human-friendly text. The "low" in low-code comes from the concise and straightforward nature of the language: makers can use Power Fx directly in an Excel-like formula bar or a Visual Studio Code text window, making everyday programming tasks easy for both makers and developers.
Power Fx enables the full spectrum of development, from no-code makers without any programming knowledge to pro-code for professional developers. It enables diverse teams to collaborate and save time and effort.
To use Power Fx as an expression language in a desktop flow, you must create one and enable the respective toggle button when creating it through Power Automate for the desktop’s console.
Each Power Fx expression must start with an “=” (equals to sign).
If you’re transitioning from flows where Power Fx is disabled, you might notice some differences. To streamline your experience while creating new desktop flows, here are some key concepts to keep in mind:
With Power Fx Disabled
Give your collection a name (e.g., myCollection) in the Variable Name field.
In the Value field, define the collection. Collections in PAD are essentially arrays, which you can define by enclosing the values in square brackets [ ].
Action: Set Variable
Variable Name: myNumberCollection
Value: [1, 2, 3, 4, 5]
Action: Set Variable
Variable Name: myTextCollection
Value: ["Alice", "Bob", "Charlie"]
You can also create collections with mixed data types. For example, a collection with both numbers and strings:
Action: Set Variable
Variable Name: mixedCollection
Value: [1, "John", 42, "Doe"]
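With Power Fx enabled, the equivalent assignment would begin with an equals sign, since every Power Fx expression must start with = (a sketch reusing the illustrative variable name from above):

```
Action: Set Variable
Variable Name: myNumberCollection
Value: =[1, 2, 3, 4, 5]
```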
If you want to use a dollar sign ($) followed by an opening curly brace ({) within a Power Fx expression, or in the syntax of a UI/web element selector, without Power Automate for desktop treating it as string-interpolation syntax, use this syntax: $${ (the first dollar sign acts as an escape character).
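For instance, if a value needs to contain the literal characters ${ (the value below is purely illustrative), doubling the dollar sign keeps Power Automate for desktop from interpolating it:

```
Value entered: $${orderId}
Text produced: ${orderId}
```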
For the complete list of all available functions in Power Automate for desktop flows, go to Formula reference – desktop flows.
Yes, use Power Fx if your flow needs custom logic, data transformation, or integration with Power Apps and you’re comfortable with the learning curve.
No, avoid it if your flows are relatively simple or if you’re primarily focused on automation tasks like file manipulation, web scraping, or UI automation, where Power Automate Desktop’s native features will be sufficient.
Quarkus has gained traction as a modern Java framework designed for cloud-native development. In my previous blog, I discussed why learning Quarkus is a great choice. Today, let’s dive deeper into one of its standout features: Live Coding.
Live Coding in Quarkus provides an instant development experience where changes to your application’s code, configuration, and even dependencies are reflected in real time without restarting the application. This eliminates the need for slow rebuild-restart cycles, significantly improving productivity.
Quarkus automatically watches for file changes and reloads the necessary components without restarting the entire application. This feature is enabled by default in dev mode and can be triggered using:
mvn quarkus:dev
or if you are using Gradle:
gradle quarkusDev
Once the development server is running, any modifications to your application will be instantly reflected when you refresh the browser or make an API request.
Imagine you are developing a REST API with Quarkus and need to update an endpoint. With Live Coding enabled, you simply modify the resource class:
@Path("/hello")
public class GreetingResource {
@GET
public String hello() {
return "Hello, Quarkus!";
}
}
Change the return message to:
return "Hello, Live Coding!";
Without restarting the application, refresh the browser or send an API request, and the change is immediately visible. No waiting, no downtime.
While Live Coding is enabled by default in dev mode, you can also enable it in remote environments using:
mvn quarkus:remote-dev -Dquarkus.live-reload.url=<remote-server>
This allows developers working in distributed teams or cloud environments to take advantage of fast feedback cycles.
Quarkus Live Coding is a game-changer for Java development, reducing turnaround time and enhancing the overall developer experience. If you’re transitioning to Quarkus, leveraging this feature can significantly improve your workflow.
Have you tried Quarkus Live Coding? Share your experience in the comments!
Stay tuned for more on security and reactive programming with Quarkus.
Website performance is crucial for user satisfaction and overall business success. Slow-loading pages, unresponsive features, and delayed database queries can lead to frustrated users, decreased conversions, and a poor user experience. One key to improving site performance is identifying bottlenecks in your database interactions, and that’s where SQL Server Profiler comes in.
SQL Server Profiler is a tool provided by Microsoft SQL Server to help database administrators, developers, and support teams monitor, trace, and troubleshoot SQL Server activity in real-time. It captures and analyzes SQL Server events such as queries, stored procedures, locks, and performance issues.
You need to capture the events that will help you identify slow queries and stored procedures. In this blog, we will discuss one of the events provided by SQL Server Profiler.
RPC: Completed – This event will capture the execution details of stored procedures that are called remotely.
Columns to Include:
Ensure the following columns are selected to track performance and identify slow Stored Procedures:
Run the trace to capture events for the database. You can stop and start the trace, and clear all captured events, using the toolbar. If you want to start a whole new trace, you can do that from the toolbar as well.
Start a new trace, then load the webpage from which you want to capture data. Once the page has finished loading, stop the trace and review all the events captured.
After stopping the trace, you can analyze the captured data:
SQL Server Profiler is an invaluable tool for boosting your website’s performance. By identifying slow queries, analyzing execution plans, and tracking server activity, you can pinpoint and resolve performance bottlenecks in your database interactions. Whether you’re dealing with slow queries, deadlocks, or server configuration issues, SQL Server Profiler provides the insights you need to make informed decisions and optimize your website’s performance.
Python is an incredibly powerful and easy-to-use programming language. However, it can be slow if not optimized properly! This guide will teach you how to turbocharge your code, making it faster, leaner, and more efficient. Buckle up, and let’s dive into some epic optimization hacks!
For more on Python basics, check out our Beginner’s Guide to Python Programming.
Picking the right data structure is like choosing the right tool for a job—do it wrong, and you’ll be banging a nail with a screwdriver!
# List (mutable)
my_list = [1, 2, 3]
# Tuple (immutable, faster)
my_tuple = (1, 2, 3)
# Slow list lookup (O(n))
numbers = [1, 2, 3, 4, 5]
print(3 in numbers) # Yawn... Slow!
# Fast set lookup (O(1))
numbers_set = {1, 2, 3, 4, 5}
print(3 in numbers_set) # Blink and you'll miss it! ⚡
# Generator (better memory usage)
def squared_numbers(n):
for i in range(n):
yield i * i
squares = squared_numbers(1000000) # No memory explosion! 💥
# Inefficient
for i in range(10000):
result = expensive_function() # Ugh! Repeating this is a performance killer 😩
process(result)
# Optimized
cached_result = expensive_function() # Call it once and chill 😎
for i in range(10000):
process(cached_result)
# Traditional loop (meh...)
squares = []
for i in range(10):
squares.append(i * i)
# Optimized list comprehension (so sleek! 😍)
squares = [i * i for i in range(10)]
# Inefficient (Creates too many temporary strings 🤯)
words = ["Hello", "world", "Python"]
sentence = ""
for word in words:
sentence += word + " "
# Optimized (Effortless and FAST 💨)
sentence = " ".join(words)
name = "Alice"
age = 25
# Old formatting (Ew 🤢)
print("My name is {} and I am {} years old.".format(name, age))
# Optimized f-string (Sleek & stylish 😎)
print(f"My name is {name} and I am {age} years old.")
import timeit
print(timeit.timeit("sum(range(1000))", number=10000)) # How fast is your code? 🚀
import cProfile
cProfile.run('my_function()') # Find bottlenecks like a pro! 🔍
For more on profiling, see our Guide to Python Profiling Tools.
import sys
my_list = [1, 2, 3, 4, 5]
print(sys.getsizeof(my_list)) # How big is that object? 🤔
import gc
large_object = [i for i in range(1000000)]
del large_object # Say bye-bye to memory hog! 👋
gc.collect() # Cleanup crew 🧹
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":  # Guard required so worker processes don't re-run this block
    with Pool(4) as p:  # Use 4 CPU cores 🏎
        results = p.map(square, range(100))
import threading
def print_numbers():
for i in range(10):
print(i)
thread = threading.Thread(target=print_numbers)
thread.start()
thread.join()
For more on parallel processing, check out our Introduction to Python Multithreading.
Congratulations! You’ve unlocked Python’s full potential by learning these killer optimization tricks. Now go forth and write blazing-fast, memory-efficient, and clean Python code.
Got any favorite optimization hacks? Drop them in the comments!
For more in-depth information on Python optimization, check out these resources:
Nine years ago, I was eager to become a developer but couldn't find a convincing platform. Luckily, the smartphone world was booming, and its extraordinary growth immediately caught my eye. This led to my career as an Android developer, where I had the opportunity to learn the nuances of building mobile applications. As time went on, I expanded into hybrid mobile app development, which allowed me to adapt smoothly to various platforms.
I also know the struggles of countless aspiring developers: the dilemma of not knowing which direction to head or which technology to pursue. The idea for this blog stemmed from my experiences and insights while finding my own way through mobile app development. It is geared toward those beginning to learn the subject, as well as those adding to existing knowledge.
Choosing the right development framework depends on your interests, career goals, and project requirements. If you enjoy building interactive user experiences, Web Development with React, Angular, or Vue.js could be your path. If you prefer handling server-side logic, Backend Development with Node.js, Python, or Java might be ideal. Those fascinated by mobile applications can explore Native (Kotlin, Swift) or Cross-Platform (React Native, Flutter) Development.
For those drawn to game development, Unity and Unreal Engine provide powerful tools, while Data Science & Machine Learning enthusiasts can leverage Python and frameworks like TensorFlow and PyTorch. If you’re passionate about infrastructure and automation, DevOps & Cloud Development with Docker, Kubernetes, and AWS is a strong choice. Meanwhile, Embedded Systems & IoT Development appeals to those interested in hardware-software integration using Arduino, Raspberry Pi, and C/C++.
| Path | Pros | Cons |
| --- | --- | --- |
| Web Development | High-demand, fast-paced, large community | Frequent technology changes |
| Backend Development | Scalable applications, strong job market | Can be complex, requires database expertise |
| Mobile Development | Booming industry, native vs. cross-platform options | Requires platform-specific knowledge |
| Game Development | Creative field, engaging projects | Competitive market, longer development cycles |
| Data Science & ML | High-paying field, innovative applications | Requires strong math and programming skills |
| DevOps & Cloud | Essential for modern development, automation focus | Can be complex, requires networking knowledge |
| Embedded Systems & IoT | Hardware integration, real-world applications | Limited to specialized domains |
No matter which path you choose, the key is continuous learning and hands-on experience. Stay curious, build projects, and embrace challenges on your journey to becoming a skilled developer. Check out Developer Roadmaps for further insights and guidance. Happy coding!
I’ve had plenty of opportunities to guide developers new to the React and React Native frameworks. While everyone is different, I wanted to provide a structured guide to help bring a fresh developer into the React fold.
This introduction to React is intended for a developer who has at least some experience with JavaScript, HTML, and basic coding practices.
Ideally, this person has coded at least one project using JavaScript and HTML. This experience will aid in understanding the syntax of components, but any aspiring developer can learn from it as well.
There are several tiers for beginner level programmers who would like to learn React and are looking for someone like you to help them get up to speed.
For a developer like this, I would recommend introductory JavaScript and HTML knowledge. Maybe a simple programming exercise or online instruction, before introducing them to React. You can compare JavaScript to a language they are familiar with and cover core concepts. A basic online guide should be sufficient to get them up and running with HTML.
I would go over some basics of JavaScript and HTML to make sure they have enough to grasp the syntax and terminologies used in React. A supplementary course or online guide might be good for a refresher before introducing them to modern concepts.
Even if they haven’t used JavaScript or HTML much, they should be able to ramp up quickly. Reading through React documentation should be enough to jumpstart the learning process.
You can begin their React and React Native journey with the following guidelines:
The React developer documentation is a great place to start if the developer has absolutely no experience or is just starting out. It provides meaningful context in the differences between standard JavaScript and HTML and how React handles them. It also provides a valuable reference on available features and what you can do within the framework.
Pro tip: I recommend starting them right off with functional components. They are more widely used and often have better performance, especially with hooks. I personally find them easier to work with as well.
Class component:
class MyButton extends React.Component {
  render() {
    return <button>I'm a button</button>;
  }
}
Functional component:
const MyButton = () => {
  return (
    <button>I'm a button</button>
  );
};
The difference isn't very obvious with such a small example, but it becomes much clearer once you introduce hooks. Hooks allow you to extract functionality into a reusable container, which keeps logic separate and lets you import it into other components. There are also several built-in hooks that make life easier; hooks always start with "use" (useState, useRef, etc.). You can also create custom hooks for your own logic.
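As an illustration of extracting logic into a custom hook (useToggle is a hypothetical name, and the snippet assumes a React 16.8+ project, so treat it as a sketch rather than drop-in code):

```jsx
import { useState, useCallback } from "react";

// Custom hook: reusable toggle logic extracted from components
const useToggle = (initialValue = false) => {
  const [value, setValue] = useState(initialValue);
  const toggle = useCallback(() => setValue(v => !v), []);
  return [value, toggle];
};

// Any component can now reuse the same logic:
const MyToggleButton = () => {
  const [isOn, toggle] = useToggle();
  return <button onClick={toggle}>{isOn ? "On" : "Off"}</button>;
};
```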
Once they understand basic concepts, it’s time to focus on advanced React concepts. State management is an important factor in React which covers component and app-wide states. Learning widely used packages might come in handy. I recommend Redux Toolkit as it’s easy to learn, but extremely extensible. It is great for both big and small projects and offers simple to complex state management features.
Now might be a great time to point out the key differences between React and React Native. They are very similar with a few minor adjustments:
| | React | React Native |
| --- | --- | --- |
| Layout | HTML tags | Core components (View instead of div, for example) |
| Styling | CSS | Style objects |
| X/Y Coordinate Planes | Flex direction: row (default) | Flex direction: column (default) |
| Navigation | URLs | Routes (react-navigation) |
I would follow the React concepts with an example project. This allows the developer to see how a project is structured and how to code within the framework. Tic-Tac-Toe is a great example for a new React developer to give a try to see if they understand the basic concepts.
Debugging in Chrome is extremely useful for things like console logs and other logging that is beneficial for defects. The Style Inspector is another mandatory tool for React that lets you see how styles are applied to different elements. For React Native, the documentation contains useful links to helpful tools.
Assign the new React developer low-level bugs or feature enhancements to tackle. Closely monitoring their progress via pair programming has been extremely beneficial in my experience. It provides the opportunity to ask real-time questions to which the experienced developer can offer guidance, and to correct any mistakes or bad practices before they become ingrained. Merge requests should be reviewed together before approval to ensure code quality.
These tips and tools will give a new React or React Native developer the skills they can develop to contribute to projects. Obviously, the transition to React Native will be a lot smoother for a developer familiar with React, but any developer that is familiar with JavaScript/HTML should be able to pick up both quickly.
Thanks for your time and I wish you the best of luck with onboarding your new developer onto your project!
For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!
As businesses increasingly rely on Salesforce to manage their critical data, ensuring data security has become more important than ever. Apex, Salesforce’s proprietary programming language, runs in system mode by default, bypassing object- and field-level security. To protect sensitive data, developers need to enforce strict security measures.
This blog will explore Apex security best practices, including enforcing sharing rules, field-level permissions, and user access enforcement to protect your Salesforce data.
Apex’s ability to bypass security settings puts the onus on developers to implement proper Salesforce security practices. Without these protections, your Salesforce application might unintentionally expose sensitive data to unauthorized users.
By following best practices such as enforcing sharing rules, validating inputs, and using security-enforced SOQL queries, you can significantly reduce the risk of data breaches and ensure your app adheres to the platform’s security standards.
Sharing rules are central to controlling data access in Salesforce. Apex doesn’t automatically respect these sharing rules unless explicitly instructed to do so. Here’s how to enforce them in your Apex code:
Use with sharing in Apex Classes

Best Practice: Always use with sharing unless you explicitly need to override sharing rules for specific use cases. This ensures your code complies with Salesforce security standards.
public with sharing class AccountHandlerWithSharing {
    public void fetchAccounts() {
        // Ensures that sharing settings are respected
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}
public without sharing class AccountHandlerWithoutSharing {
    public void fetchAccounts() {
        // Ignores sharing settings and returns all records
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}
Apex operates in a system context by default, bypassing object- and field-level security. You must manually enforce these security measures to ensure your code respects user access rights.
Use WITH SECURITY_ENFORCED in SOQL Queries

The WITH SECURITY_ENFORCED keyword ensures that Salesforce performs a permission check on the fields and objects in your SOQL query, so that only accessible data is returned.
List<Account> accounts = [
    SELECT Id, Name
    FROM Account
    WHERE Industry = 'Technology'
    WITH SECURITY_ENFORCED
];
This approach guarantees that only fields and objects the current user can access are returned in your query results.
Use the stripInaccessible Method to Filter Inaccessible Data

Salesforce provides the stripInaccessible method, which removes inaccessible fields or relationships from query results. It also helps prevent runtime errors by ensuring no inaccessible fields are used in DML operations.
Using stripInaccessible ensures that any fields or relationships the user cannot access are stripped out of the Account record before any further processing.
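A minimal sketch of that call (assuming a read-access check on freshly queried Account records; the field list is illustrative):

```apex
// Query first, then strip anything the running user cannot read
List<Account> accounts = [SELECT Id, Name, AnnualRevenue FROM Account];

SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.READABLE,
    accounts
);

List<Account> safeAccounts = (List<Account>) decision.getRecords(); // safe to process further
```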
Apex Managed Sharing can be a powerful tool when you need to manage record access dynamically. This feature allows developers to programmatically share records with specific users or groups.
public void shareRecord(Id recordId, Id userId) {
    CustomObject__Share share = new CustomObject__Share();
    share.ParentId = recordId;
    share.UserOrGroupId = userId;
    share.AccessLevel = 'Edit'; // Options: 'Read', 'Edit', or 'All'
    insert share;
}
This code lets you share a custom object record with a specific user and grant them Edit access. Apex Managed Sharing allows more flexible, dynamic record-sharing controls.
Here are some critical tips for improving security in your Apex and Lightning applications:
Hardcoding Salesforce IDs, such as record IDs or profile IDs, can introduce security vulnerabilities and reduce code flexibility. Use dynamic retrieval to retrieve IDs, and consider using Custom Settings or Custom Metadata for more flexible and secure configurations.
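For example, rather than hardcoding a profile Id, the Id can be retrieved at runtime (a sketch; the profile name is illustrative):

```apex
// Brittle, org-specific hardcoding; avoid:
// Id profileId = '00e000000000001AAA';

// Query the Id dynamically instead:
Id profileId = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1].Id;
```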
It is essential to sanitize all user inputs to prevent threats like SOQL injection and Cross-Site Scripting (XSS). Always use parameterized queries and escape characters where necessary.
Use stripInaccessible in DML Operations

To prevent processing inaccessible fields, always use the stripInaccessible method when handling records containing fields restricted by user permissions.
Ensure you use the correct sharing context for each class or trigger. Avoid granting unnecessary access by using with sharing for most of your classes.
Writing tests that simulate various user roles using System.runAs() is crucial to ensure your code respects sharing rules, field-level permissions, and other security settings.
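A sketch of such a test (all names and the chosen profile are illustrative, and the assertion assumes the test org starts with no accounts visible to the new user):

```apex
@isTest
private class SharingEnforcementTest {
    @isTest
    static void restrictedUserSeesOnlySharedRecords() {
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
        User u = new User(
            Alias = 'tuser',
            Email = 'tuser@example.com',
            EmailEncodingKey = 'UTF-8',
            LastName = 'Test',
            LanguageLocaleKey = 'en_US',
            LocaleSidKey = 'en_US',
            ProfileId = p.Id,
            TimeZoneSidKey = 'America/Los_Angeles',
            UserName = 'tuser' + System.currentTimeMillis() + '@example.com'
        );

        System.runAs(u) {
            // Queries here execute with u's sharing context
            List<Account> visible = [SELECT Id FROM Account];
            System.assertEquals(0, visible.size());
        }
    }
}
```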
Implementing Apex security best practices is essential to protect your Salesforce data. Whether you are enforcing sharing rules, respecting field-level permissions, or programmatically managing record sharing, these practices help ensure that only authorized users can access sensitive data.
When building your Salesforce applications, always prioritize security by:
By adhering to these practices, you can build secure Salesforce applications that meet business requirements and ensure data integrity and compliance.
Salesforce is a powerful CRM platform that enables businesses to manage customer data and automate workflows. However, ensuring the security of your Salesforce environment is critical to protecting sensitive data, maintaining compliance, and safeguarding your business processes. This post will explore how to identify and resolve Salesforce security violations, protecting your organization from potential threats.
Salesforce security violations can have severe consequences for your organization, including:
Some common Salesforce security violations include:
Salesforce provides several tools and methods to help you identify security violations. Below are some of the most effective ways to perform a security scan:
Salesforce provides the built-in Health Check tool to assess your organization’s security settings. It evaluates security configurations such as password policies, session settings, and user permissions.
For organizations using custom Apex code, scanning for vulnerabilities like SOQL injection or XSS is important. You can use the Salesforce CLI to automate these checks.
sfdx force:source:status
sfdx force:apex:test:run --resultformat human --codecoverage
Third-party tools like Checkmarx or Fortify can perform deeper security scans of your Salesforce org, focusing on Apex code vulnerabilities, integrations, and misconfigurations.
One of the most common security violations in Salesforce is SOQL injection. This occurs when user input is inserted directly into a SOQL query without proper validation, allowing malicious users to manipulate the query and gain unauthorized access to data.
public class AccountSearch {
    public List<Account> searchAccount(String accountName) {
        // Vulnerable: user input is concatenated directly into the query
        String query = 'SELECT Id, Name FROM Account WHERE Name = \'' + accountName + '\'';
        return Database.query(query);
    }
}
Issue: The above code is vulnerable to SOQL injection. A user could manipulate the accountName input to execute malicious queries.
To fix the issue, use bind variables to safely insert user input into the query:
public class AccountSearch {
    public List<Account> searchAccount(String accountName) {
        // Safe: the bind variable is resolved by the platform, not concatenated
        String query = 'SELECT Id, Name FROM Account WHERE Name = :accountName';
        return Database.query(query);
    }
}
In the corrected code, the accountName is safely handled using a bind variable (:accountName), preventing SOQL injection.
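When a bind variable cannot be used (for example, when assembling a dynamic WHERE clause from fragments), escaping the input is a fallback:

```apex
// Fallback when a bind variable is impractical: escape quotes in the input
String safeName = String.escapeSingleQuotes(accountName);
String query = 'SELECT Id, Name FROM Account WHERE Name = \'' + safeName + '\'';
List<Account> results = Database.query(query);
```

Bind variables remain the preferred approach; use escaping only where binding is not possible.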
@IsTest
private class AccountSearchTest {
    @IsTest
    static void testSearchAccount() {
        // Create test data
        Account testAccount = new Account(Name = 'Test Account');
        insert testAccount;

        // Create an instance of AccountSearch and run the search method
        AccountSearch search = new AccountSearch();
        List<Account> results = search.searchAccount('Test Account');

        // Verify that the search returns the expected record
        System.assertEquals(1, results.size(), 'The account search did not return the expected result.');
        System.assertEquals('Test Account', results[0].Name, 'The returned account name is incorrect.');
    }
}
This unit test ensures that the SOQL injection vulnerability is fixed and verifies that the search returns the correct results.
To maintain the security and integrity of your Salesforce environment, it’s crucial to regularly scan for and address potential security violations. You can significantly reduce the risk of security breaches by implementing secure coding practices (e.g., using bind variables), configuring proper user permissions, and regularly using tools like Health Check and the Salesforce CLI.
By proactively identifying and resolving security violations, you can ensure your Salesforce environment remains secure, compliant, and resilient to threats.
In today’s digital landscape, ensuring data security is not just a best practice—it’s a necessity. As organizations store increasing amounts of sensitive information, protecting that data becomes paramount. As a leading CRM platform, Salesforce offers various mechanisms to secure sensitive data, and one of the advanced techniques is Apex Tokenization. This blog will explore tokenization, how it works in Salesforce, and the best practices for securely implementing it.
Tokenization involves substituting sensitive data with a non-sensitive identifier, a token. These tokens are unique identifiers that retain essential information without exposing the actual data. For instance, a randomly generated token can be used rather than storing a customer’s credit card number directly. This process protects the original data, making it harder for unauthorized parties to access sensitive information.
Tokenization offers several significant benefits for organizations:
Salesforce provides a robust platform for implementing tokenization within your Apex code. While Salesforce does not offer native tokenization APIs, developers can integrate external tokenization services or create custom solutions using Apex. This flexibility allows businesses to ensure their data is protected while still benefiting from Salesforce’s powerful CRM capabilities.
Here’s a step-by-step guide to implementing tokenization in Apex:
Use Custom Metadata or Custom Settings to store configurations like tokenization keys or API endpoints for external tokenization services.
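As a sketch, assuming a hypothetical Tokenization_Setting__mdt custom metadata type with an Endpoint__c field:

```apex
// Read tokenization configuration from Custom Metadata
// (the type and field names here are hypothetical examples)
Tokenization_Setting__mdt setting =
    Tokenization_Setting__mdt.getInstance('Default');
String tokenServiceEndpoint = setting.Endpoint__c;
```

This keeps endpoints and keys out of the code, so they can change per environment without a deployment.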
Develop a utility class to handle tokenization and detokenization logic. Below is an example:
public class TokenizationUtil {
    // Method to convert sensitive data into a secure token
    public static String tokenize(String inputData) {
        // Placeholder only: Base64 is reversible encoding, not tokenization.
        // Replace with an actual tokenization process or external service call.
        return EncodingUtil.base64Encode(Blob.valueOf(inputData));
    }

    // Method to reverse the tokenization and retrieve the original data
    public static String detokenize(String token) {
        // Replace with actual detokenization logic or external service call
        return EncodingUtil.base64Decode(token).toString();
    }
}
Always ensure data is encrypted in transit by using HTTPS endpoints. Additionally, store tokens securely in Salesforce, leveraging its built-in encryption capabilities to protect sensitive information.
Write comprehensive unit tests to verify tokenization logic. Ensure coverage for edge cases, such as invalid input data or service downtime.
@IsTest
public class TokenizationUtilTest {
    @IsTest
    static void testTokenizationProcess() {
        // Sample data to validate the tokenization and detokenization flow
        String confidentialData = 'Confidential Information';

        // Converting the sensitive data into a token
        String generatedToken = TokenizationUtil.tokenize(confidentialData);

        // Ensure the token is not the same as the original sensitive data
        System.assertNotEquals(confidentialData, generatedToken, 'The token must differ from the original data.');

        // Reversing the tokenization process to retrieve the original data
        String restoredData = TokenizationUtil.detokenize(generatedToken);

        // Verify that the detokenized data matches the original data
        System.assertEquals(confidentialData, restoredData, 'The detokenized data should match the original information.');
    }
}
Tokenization is a powerful technique for enhancing data security and maintaining compliance in Salesforce applications. You can safeguard sensitive information by implementing tokenization in your Apex code while enabling seamless operations across systems. Whether through custom logic or integrating external services, adopting tokenization is essential to a more secure and resilient Salesforce ecosystem.
This article explores Model-View-Controller (MVC) frameworks, which are among the most popular choices for creating scalable, structured applications in non-CMS contexts.
One of the popular architectural structures to build web applications is the Model-View-Controller (MVC) framework. It guarantees flexibility and scalability by dividing the application logic into three interrelated parts.
Examples of Popular MVC Frameworks:
a. Separation of Concerns
By separating a program into three distinct layers, MVC makes testing, debugging, and maintenance easier. Developers can work on one layer individually without affecting the others.
b. Code Reusability
Templates and controllers are examples of reusable components that decrease redundancy and accelerate development. For example, a Django user authentication system can be applied to several different applications.
c. Faster Development
To accelerate the development process, the majority of MVC frameworks include prebuilt libraries, tools, and modules such as form builders, ORM (Object-Relational Mapping) tools, and routing systems.
d. Scalability
Thanks to the clear division of code, MVC frameworks make it simpler to scale applications by adding new features or enhancing existing ones.
e. Active Ecosystem and Community
Active communities for frameworks like Laravel and Django provide plugins, packages, and copious amounts of documentation.
a. Complexity for Beginners
The structured methodology of MVC frameworks can be daunting, particularly when it comes to comprehending how Models, Views, and Controllers interact.
b. Performance Overhead
The layered architecture of MVC frameworks adds overhead that can affect the performance of small applications.
c. Over-Engineering
Using a full-fledged MVC framework may not be necessary for small-scale or simple projects, adding complexity rather than streamlining development.
d. Steep Learning Curve
In frameworks like ASP.NET or Django, advanced capabilities like dependency injection, middleware, and asynchronous processes can take a significant amount of time to learn.
e. Tight Coupling in Some Frameworks
Tight coupling between components may exist in some implementations, making it more difficult to replace a component or perform unit testing.
| Framework | Language | Strengths | Use Cases |
| --- | --- | --- | --- |
| Laravel | PHP | Elegant syntax, rich ecosystem | E-commerce, CMS, web APIs |
| Django | Python | Rapid development, built-in admin, ORM | Data-driven apps, AI/ML platforms |
| Ruby on Rails | Ruby | Convention over configuration, productivity | Startups, MVPs, rapid prototyping |
| ASP.NET MVC | C# | Enterprise support, seamless .NET integration | Large-scale enterprise applications |
When to Use MVC Frameworks:
When to Consider Alternatives:
a. Integration with Modern Frontend Tools
Thanks to the popularity of React, Vue.js, and Angular, many MVC frameworks now serve as backends for SPAs (Single Page Applications) by exposing APIs.
b. GraphQL Adoption
To enable flexible and efficient data querying, many developers now combine MVC frameworks with GraphQL instead of traditional REST APIs.
c. Cloud-Native and Serverless Compatibility
MVC frameworks are adapting to the serverless trend and becoming compatible with cloud-native architectures, as offerings like Laravel Vapor and Django on AWS Lambda demonstrate.
d. Focus on Performance Optimization
Frameworks are providing faster routing algorithms, caching layers, and lightweight alternatives to meet modern performance requirements.
e. Hybrid Frameworks
Some modern frameworks, like Next.js (JavaScript), blur the lines between frontend-first and MVC frameworks, creating hybrid solutions that combine the best aspects of both strategies.
Since they provide structure, scalability, and quick development tools, MVC frameworks continue to be essential to contemporary web development. They are the preferred option for developers due to their benefits in managing large-scale applications, despite drawbacks including complexity and performance overhead. New developments like GraphQL integrations and cloud-native modifications guarantee that MVC frameworks will keep evolving to satisfy the demands of contemporary development environments.
I am always looking to write better, more performant, and cleaner code. GitHub Copilot checks all the boxes and makes my life easier. I have been using it since the 2021 public beta, and the hype is real!
According to the GitHub Copilot website, it is:
“The world’s most widely adopted AI developer tool.”
While that sounds impressive, the proof is in the features that help the average developer produce higher quality code, faster. It doesn’t replace a human developer, but that is not the point. The name says it all, it’s a tool designed to work alongside developers.
When we look at the stats, we see some very impressive numbers:
I primarily use Copilot for code completion and test cases for ReactJS and JavaScript code.
When typing predictable text such as “document” in a JavaScript file, Copilot will review the current file and public repositories to provide a context correct completion. This is helpful when I create new code or update existing code. Code suggestion via Copilot chat enables me to ask for possible solutions to a problem. “How do I type the output of this function in Typescript?”
Additionally, it can explain existing code, “Explain lines 29-54.” Any developer out there should be able to see the value there. An example of this power comes from one of my colleagues:
“Copilot’s getting better all the time. When I first started using it, maybe 10% of the time I’d be unable to use its suggestions because it didn’t make sense at all. The other day I had it refactor two classes by moving the static functions and some common logic into a static third class that the other two used, and it was pretty much correct, down to style. Took me maybe thirty seconds to figure out how to tell Copilot what to do and another thirty seconds for it to do the work.”
Generally, developers dislike writing comments. Worry not, Copilot can do that! In fact, I use it to write the first draft of every comment in my code. Copilot goes a step further and writes user tests from the context of a file — “Write Jest tests for this file.”
One of my favorite tools is “/fix”, which attempts to resolve any errors in the code. This is not limited to errors visible in the IDE. Occasionally after compilation, there will be one or more errors. Asking Copilot to fix these errors often succeeds, even though the errors may not be visible in the editor. The enterprise version will even create commented pull requests!
Although these features are amazing, there are methods to get the most out of it. You must be as specific as possible. This is most important when using code suggestions.
If I ask, “I need this code to solve the problem created by the other functions,” I am not likely to get a helpful solution. However, if I ask, “Using lines 10–150, and the following functions (a, b, and c) from file two, give me a solution that will solve the problem,” I am far more likely to get a usable answer.
Whenever possible, it is key to break requests up into small tasks.
The future of Copilot is exciting, indeed. While I have been talking about GitHub Copilot, the entire Microsoft universe is getting the “Copilot” treatment. In what Microsoft calls Copilot Wave 2, it is added to Microsoft 365.
Wave 2 features include:
The most exciting new Copilot feature is Copilot Agents.
“Agents are AI assistants designed to automate and execute business processes, working with or for humans. They range in capability from simple, prompt-and-response agents to agents that replace repetitive tasks to more advanced, fully autonomous agents.”
With this functionality, the entire Microsoft ecosystem will benefit. Using agents, it would be possible to quickly find information in SharePoint across all sites and other content areas. Agents can function autonomously and are unlike chatbots: chatbots work from a script, whereas agents operate with the full knowledge of an LLM. For example, a service agent could provide documentation on the fly from an English description of a problem, or answer questions from a human with natural responses based on technical data or specifications.
There is also a new Copilot Studio, a low-code solution that allows more people to create agents.
GitHub Copilot is continually updated as well. Since May, there has been a private beta for Copilot Extensions, which lets third-party vendors tap the natural language processing power of Copilot inside GitHub through plugins and extensions that expand its functionality. There has also been a major enhancement moving Copilot to GPT-4o.
Using these features with Copilot, I save between 15–25% of my day writing code, freeing me up for other tasks. I’m excited to see how Copilot Agents will evolve into new tools to increase developer productivity.
For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!