The HCL Commerce v9.1 release saw major changes in features, functionality, and technology. This blog series will focus on each of these components separately. Examples of these changes include HCL Commerce Search powered by Elasticsearch, a modern storefront built with Next.js, a containerized cloud-native architecture, modern business user tooling, and support for new integrations and companion software.
Part 2 of this blog series will focus on the coexistence of the Next.js Ruby & Aurora Storefronts.
A client had multiple e-commerce sites running on HCL Commerce v9 using the Aurora JSP-based storefront. The client wanted to migrate to the Next.js Ruby storefront and take advantage of the modern headless store, including server-side rendering (SSR) for page optimization. The client wanted a cost-effective solution to drive ROI through built-in SEO capabilities, improved site performance (higher Google Core Web Vitals scores), and an improved end-user experience.
A migration of multiple e-commerce sites to the Next.js Ruby storefront with HCL Commerce Search using Elasticsearch, plus the client-specific customizations, can be a large rewrite. Perficient worked with the client to find a cost-effective solution and identified the home page and the product details page (PDP) to migrate to the Next.js Ruby storefront first. This also allowed the client to evaluate the storefront and its capabilities before migrating the remaining pages.
The hybrid approach has several pros and cons, which vary with each client's business requirements. This client used many e-marketing spots throughout the site, and it was challenging to maintain duplicate content to support both storefronts. Since the content syntax differs between storefronts, any changes to the common header and footer navigation had to be maintained for both. Another consideration is implementing third-party integrations and ensuring compatibility with both storefronts. For example, Segment was used for analytics tracking, and our team had to ensure that events triggered successfully with the correct data on both storefronts' pages. One of the most critical components of a hybrid approach is correctly identifying and routing requests so that each page is rendered by the correct storefront, Aurora or Next.js Ruby. The client's PDP URLs followed a unique SEO pattern, allowing the Perficient team to create rules that route each request to the correct storefront container (see the sketch below). Post migration, the client immediately started seeing the advantages of the Next.js Ruby storefront's features and capabilities, with improvements in page load times and Core Web Vitals for the migrated pages.
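To illustrate what such routing rules can look like, here is a minimal, hypothetical Next.js middleware sketch; it is not the client's actual implementation, and the /p/ SEO pattern and internal Aurora host name are assumptions for illustration.

// middleware.js: send migrated pages (home and PDP) to the Next.js Ruby
// storefront and rewrite everything else to the legacy Aurora container.
import { NextResponse } from 'next/server';

export function middleware(request) {
  const { pathname, search } = request.nextUrl;
  // Assumed SEO pattern: PDP URLs look like /p/<product-slug>
  const isMigrated = pathname === '/' || pathname.startsWith('/p/');
  if (isMigrated) {
    return NextResponse.next(); // rendered by the Next.js Ruby storefront
  }
  // All other paths are proxied to the Aurora (JSP) storefront container
  return NextResponse.rewrite(new URL(pathname + search, 'https://aurora-storefront.internal'));
}

In practice the same split can also live at the CDN, load balancer, or ingress layer; the key is that the PDP URLs' unique SEO pattern gives the rule something unambiguous to match on.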
The hybrid approach allowed the client to take advantage of the newer technology and realize the ROI on the migrated pages. The site benefited from the Core Web Vitals score increase, enhanced SEO capabilities, and improved page performance. The hybrid approach allowed the technical and marketing teams to familiarize themselves with the features and capabilities of the Next.js Ruby storefront and deploy it to the most impactful areas of the site. As a next step, the client is migrating the remaining pages to the Next.js Ruby storefront to fully take advantage of HCL’s continued enhancements.
To obtain further information from our award-winning team, please visit https://www.perficient.com/who-we-are/partners/hcl.
HCL Commerce V9.1 – The Power of the Next.js Ruby Storefront
Considering migrating your contact center operations to the cloud? Transitioning from a legacy on-premise solution to a Cloud Contact Center as a Service (CCaaS) platform offers significant advantages, including greater flexibility, scalability, improved customer experience, and potential cost savings. However, the success of this transition depends heavily on selecting the right vendor and ensuring alignment with your unique business requirements.
Here are five essential questions to ask any CCaaS vendor as you plan your migration:
Integration capabilities are key and may impact the effectiveness of your new cloud solution. Ensure that the proposed CCaaS platform easily integrates with or provides viable alternatives to your current CRM, workforce management solutions, business intelligence/reporting tools, and legacy applications. Smooth integrations are vital for maintaining operational efficiency and enhancing the customer and employee experience.
Every contact center has its own agent processes and customer interaction workflows. Verify that your CCaaS vendor allows customization of critical features like interactive voice response (IVR), agent dashboards, and reporting tools (to name just a few). Flexibility in customization ensures that the platform supports your business goals and enhances operational efficiency without disrupting established workflows. Also assess included AI-enabled features such as intelligent virtual agents (IVAs), real-time agent coaching, and customer sentiment analysis.
Data security and compliance with regulations like HIPAA, GDPR, or PCI are likely critical requirements for your organization. This can be especially true in industries that deal with sensitive customer or patient information. Confirm the vendor's commitment to comprehensive security protocols, including the ability to redact or mask Personally Identifiable Information (PII). Ask your vendor for clearly defined compliance certifications and whether they conduct regular security audits.
Uninterrupted service is critical for contact centers, and it’s essential to understand how the CCaaS vendor handles service disruptions, outages, and disaster scenarios. Ask about their redundancy measures, geographic data center distribution, automatic failover procedures, and guarantees outlined in their Service Level Agreements (SLAs).
It is impossible to overstate the importance of good change management and enablement. Transitioning to a cloud environment involves adapting to new technologies and processes. Determine the availability of the vendor’s training programs, materials, and support channels.
By proactively addressing these five key areas, your organization can significantly streamline your migration process and ensure long-term success in your new cloud-based environment. Selecting the right vendor based on these criteria will facilitate a smooth transition and empower your team to deliver exceptional customer experiences efficiently and reliably.
The HCL Commerce v9.1 release saw major features, functionality, and technology changes. This blog series will focus on each of these components separately. Some examples of these changes include HCL Commerce Search, which is powered by Elasticsearch, a modern storefront that uses Next.js, containerized cloud-native architecture, modern business user tooling, and support for new integrations and companion software.
Part 1 of this blog series will focus on the HCL Commerce Next.js-based Ruby storefront.
The Ruby storefront is an HCL Commerce-provided, Next.js-based B2B and B2C starter store that exploits the powerful features and capabilities of the HCL Commerce platform. It is a fully headless store that uses REST services to interact with the HCL Commerce logic framework, which drives the features and capabilities of the platform. The store uses server-side rendering (SSR), which helps drive improvements in initial page load times, Google Core Web Vitals, performance, and overall page optimization. The store also provides a generic data layer for Google Analytics (GA4) and has built-in SEO capabilities, which are crucial for digital marketing. The storefront has prebuilt components, is CDN optimized, and supports a mobile-first approach, giving business owners a faster time to market.
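To show the general shape of that headless, server-rendered pattern, here is a generic Next.js sketch; it is not HCL's actual storefront code, and the endpoint URL and response fields are assumptions. The page fetches commerce data from a REST service on the server, so the browser receives fully rendered HTML:

// pages/product/[slug].js: a minimal server-side rendered product page
export async function getServerSideProps({ params }) {
  // Hypothetical commerce REST endpoint standing in for the HCL Commerce services
  const res = await fetch(`https://commerce.example.com/api/products/${params.slug}`);
  const product = await res.json();
  return { props: { product } };
}

export default function ProductPage({ product }) {
  // This markup arrives pre-rendered, which helps initial load times and SEO crawlers
  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.shortDescription}</p>
    </main>
  );
}

Because the HTML arrives already populated, crawlers and first-time visitors do not wait on client-side data fetching, which is where the Core Web Vitals and SEO gains come from.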
The storefront utilizes a template-based layout for each page, such as the home page and the product detail page (PDP). Having separate layouts allows customers to render each page differently based on the business requirements. These layouts support e-marketing spots and segmentation to drive a more personalized experience in the targeted area of the layout. There is also support for category and product-specific pages, which allow business users more control. Our team has taken advantage of the template-based approach to help incrementally migrate existing customers and leverage the benefits of the Next.js Ruby storefront with a hybrid migration approach.
A complete migration to the Next.js Ruby storefront can be costly and time-consuming. As a result, the Perficient team has developed a solution that allows customers to migrate to the Next.js storefront using a hybrid approach. The solution enables the legacy Java Server Pages (JSP) based Aurora storefront pages to run in parallel with the new, modern Next.js Ruby storefront pages. Additionally, as of HCL Commerce 9.1.15, HCL has provided the ability to use Elasticsearch or SOLR as the back-end search engine, which functions seamlessly with the Next.js Ruby storefront. This hybrid approach can be a cost-effective solution that helps drive ROI for the pages where it is most needed.
HCL Commerce Next.js Ruby Storefront is a feature-packed headless storefront built using one of the latest and most popular technologies. The storefront can leverage either Elasticsearch or SOLR search as the back-end search engine. This serves as the foundation for efficient collaboration with our clients to migrate incrementally and cost-effectively from the legacy JSP Aurora store to the Next.js Ruby storefront.
To obtain further information from our award-winning team, please visit https://www.perficient.com/who-we-are/partners/hcl.
HCL Commerce V9.1 – Coexistence of the Headless Next.js Ruby & Aurora Storefronts
Microsoft 365 offers several types of groups, each designed for different collaboration and communication needs:
Of the groups above, we are interested in the Microsoft 365 Group, formerly known as the Office 365 Group. Let's start with the following:
Creating a Microsoft 365 Group can be done in several ways, depending on your role and the tools you have access to. Here are the main methods:
For more advanced users, you can use PowerShell to create a Microsoft 365 Group:
New-UnifiedGroup -DisplayName "Group Name" -Alias "groupalias" -EmailAddresses "groupalias@yourdomain.com"

To add members after the group is created, use the Add-UnifiedGroupLinks cmdlet (see Add-UnifiedGroupLinks (ExchangePowerShell) | Microsoft Learn):

Add-UnifiedGroupLinks -Identity "groupalias" -LinkType Members -Links "user@yourdomain.com"
Microsoft 365 Groups offer a variety of collaboration features designed to enhance teamwork and productivity. Here are some of the key features:
These features collectively provide a comprehensive suite of tools to support collaboration, communication, and project management within your organization.
Objective: Enable resource monitoring for AWS EC2 instances using the Dynatrace monitoring tool (OneAgent) to gain real-time insights into system performance, detect anomalies, and optimize resource utilization.
Dynatrace is a platform for observability and application performance monitoring (APM) that delivers real-time insights into application performance, infrastructure oversight, and analytics powered by AI. It assists teams in detecting, diagnosing, and resolving problems more quickly by providing comprehensive monitoring across logs, metrics, traces, and insights into user experience.
Dynatrace OneAgent is primarily a single binary file that comprises a collection of specialized services tailored to your monitoring setup. These services collect metrics related to various components of your hosts, including hardware specifications, operating systems, and application processes. The agent also has the capability to closely monitor specific technologies (such as Java, Node.js, and .NET) by embedding itself within these processes and analyzing them from the inside. This enables you to obtain code-level visibility into the services that your application depends on.
Log in to the Dynatrace portal and search for Deploy OneAgent.
Select the platform on which your application is running. In our case, it is Linux.
Create a token that is required for authentication.
After generating a token, you will receive a command to download and execute the installer on the EC2 instance.
After this, run the provided command to execute the installer.
Dynatrace OneAgent has now been installed on the EC2 instance.
Now we can monitor resource usage at both the application and infrastructure levels on the Dynatrace dashboard.
Enabling resource monitoring for AWS EC2 instances using Dynatrace provides comprehensive observability, allowing teams to detect performance issues, optimize resource utilization, and ensure application reliability. By leveraging Dynatrace OneAgent, organizations can automate monitoring, gain AI-driven insights, and enhance cloud efficiency. Implementing this solution not only improves operational visibility but also facilitates proactive troubleshooting, reduces downtime, and optimizes cloud costs.
In the first blog post of this three-part Solution Highlight series featuring a proven leader in defense-grade, high-assurance cyber security solutions, I will cover Oracle Revenue Management. My colleague, Mehmet Erisen, will share his views on Global Supply Chain Management, including manufacturing with OSP and intercompany order fulfillment across business units, featuring Oracle Supply Chain Management. We'll round out the series with a third and final blog post focused on Salesforce to Order Cloud integration.
About Our Client: A trailblazer in the cyber security space, our client needed the ability to automate its complex and manual revenue allocation processes.
Implemented Oracle Revenue Management – Managing Bundles and Stand-alone Selling Price (SSP)
Oracle Fusion ERP provides robust functionality for managing and automating the implementation of product bundles and determining the SSP for revenue recognition under ASC 606 and IFRS 15 standards. Key highlights include:
Oracle Revenue Management Cloud enables organizations to automate revenue recognition, reduce compliance risks, and gain real-time financial insights. This solution delivers value for companies with complex revenue streams, such as SaaS, manufacturing, and professional services.
This solution is particularly effective for companies looking to streamline revenue recognition while maintaining compliance and operational efficiency.
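To make the SSP mechanics concrete, here is a simplified, illustrative allocation under the relative SSP method prescribed by ASC 606 (the amounts are hypothetical): suppose a bundle sells for $900 and contains three items with stand-alone selling prices of $600, $300, and $100, a total of $1,000. Revenue is allocated in proportion to SSP, so the items are assigned $540, $270, and $90 respectively ($600/$1,000 × $900, and so on), and each allocated amount is then recognized according to its own performance obligation.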
Let me know if you’d like a deeper dive into any of these features!
As technology continues to advance, patients and care teams expect to seamlessly engage with tools that support better health and accelerate progress. These developments demand the rapid, secure, scalable, and compliant sharing of data.
By aligning enterprise and business goals with digital technology, healthcare organizations (HCOs) can activate strategies for transformative outcomes and improve experiences and efficiencies across the health journey.
Perficient is proud to be included in the IT Services and SI Services categories in the IDC Market Glance: Healthcare Provider Operational IT Solutions, 1Q25 report (doc #US52221325, March 2025). We believe our inclusion in this report's newly introduced "Services" segmentation underscores our expertise in leveraging AI-driven automation and advanced analytics, optimizing technology investments, and navigating evolving industry challenges.
IDC states, “This expansion reflects the industry’s shift toward outsourced expertise, scalable service models, and strategic partnerships to manage complex operational IT and infrastructure efficiently.”
IDC defines IT Services as, “managed IT services, ensuring system reliability, cybersecurity, and infrastructure optimization. These solutions support healthcare provider transformation initiatives, helpdesk management, network monitoring, and compliance with healthcare IT regulations.” The SI Services category is defined by IDC as, “system integration services that help deploy technologies and connect disparate systems, including EHRs, RCM platforms, ERP solutions, and third-party applications to enhance interoperability, efficiency, automation, and compliance with industry standards.”
We imagine, engineer, and optimize scalable, reliable technologies and data, partnering with healthcare leaders to better understand consumer expectations and strategically align digital investments with business priorities.
Our end-to-end professional services include:
We don’t just implement solutions; we create intelligent strategies that align technology with your key business priorities and organizational capabilities. Our approach goes beyond traditional data services. We create AI-ready intelligent ecosystems that breathe life into your data strategy and accelerate transformation. By combining technical excellence, global reach, and a client-centric approach, we’re able to drive business transformation, boost operational resilience, and enhance health outcomes.
Success in Action: Illuminating a Clear Path to Care With AI-Enabled Search
Whether you want to redefine workflows, personalize care pathways, or revolutionize proactive health management, Perficient can help you boost efficiencies and gain a competitive edge.
We combine strategy, industry best practices, and technology expertise to deliver award-winning results for leading health systems:
Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to learn more.
Over the past three years working with Marketing Cloud Personalization (formerly Interaction Studio), I’ve always been intrigued by the Mobile icon and its capabilities. A few months ago, I decided to take a hands-on approach by developing my own application to explore this functionality firsthand, testing its implementation and understanding its real-world impact. And that is what this blog is about.
The overall steps of the Marketing Cloud Personalization mobile integration go as follows:
That’s all… easy, right? Within this blog, we will review how to make the connection between MCP and the mobile app and how to create a first interaction (step 1 and part of step 6).
For this demo, I developed an iOS application using the Swift programming language. While I’m not yet an expert, I’ve been steadily learning how to navigate Xcode and implement functionality using Swift. This project has been a great opportunity to expand my skills in iOS development and better understand the tools and frameworks available within Apple’s ecosystem.
The iOS app I created is very simple (for now): just a label, a button, and an input field. The user types something in the input field, clicks the button, and the text is shown in the label.
So, we need to add the Evergage SDK to the app project. Download the Evergage iOS SDK (v1.4.1), unzip it, and open the static folder. There, Evergage.xcframework is the framework we are about to use. When you have the folder ready, copy it into your app. You should have something like this:
After you add the folder, build your app again with Command + B.
Now we need to validate that the framework is there, so go to Target -> General -> Frameworks, Libraries and Embedded Content. You should see something like this, and since I’m using the static framework, Do Not Embed is fine.
Validate that the Framework Search Path contains the path where the framework was copied/installed. This step may need to be done manually, since sometimes the path doesn’t appear. Build the app again to confirm that no errors appear.
To validate that this works, go to AppDelegate.swift and type import Evergage. If no errors appear, you are good to go.
Next, we have to create the Native App inside the Personalization dataset of your choice.
Hover over Mobile and click Add Native App.
Fill in the App Name and Bundle ID. For the Bundle ID, go to Target > General > Identity.
You will end up with something like this:
In AppDelegate.swift, we will do the equivalent of adding the JavaScript beacon on a web page.

First, we create a singleton instance of the Evergage class reference. This starts the Marketing Cloud Personalization iOS SDK; our tracking interactions should then be done inside UIViewController-inherited classes. Note that we change didFinishLaunchingWithOptions to willFinishLaunchingWithOptions. Inside the application function we do the following:

Set the evergage.userId using the evergage.anonymousId; if we already have an email or an ID for the user, we should pass it right away.

Enable usePushNotifications and useDesignMode. The last one helps us connect to the Personalization web console for action mapping screens.
//Other imports
import Evergage

@main
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication, willFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        //Create a singleton instance of Evergage
        let evergage = Evergage.sharedInstance()
        //Set the user ID as anonymous
        evergage.userId = evergage.anonymousId
        //Start the Evergage configuration with our dataset information
        evergage.start { (clientConfigurationBuilder) in
            clientConfigurationBuilder.account = "ACCOUNT_ID"
            clientConfigurationBuilder.dataset = "DATASET_ID"
            //If we want to use push notification campaigns
            clientConfigurationBuilder.usePushNotifications = true
            //Allow user-initiated gestures to connect to the Personalization web console for action mapping screens
            clientConfigurationBuilder.useDesignMode = true
        }
        // Override point for customization after application launch.
        return true
    }
}
If we launch the app at this very moment, we will get the following inside Marketing Cloud Personalization:
This is very good, and with that we are certain it’s working and sending the information to Marketing Cloud Personalization.
So, in order to track a screen, we can use the evergageScreen property. We use this property as part of the EVGScreen and EVGContext classes for tracking and personalization. This is possible when the app uses a UIViewController for each of the screens or pages we have.
class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        trackScreen()
    }

    func trackScreen() {
        evergageScreen?.trackAction("Main Screen")
    }
}
If we want to track the action of clicking a button, we can do something similar, for example:
@IBAction func handleClick(_ sender: UIButton) {
    labelText.text = inputField.text
    evergageScreen?.trackAction("Button clicked")
}
In this code, each time the user clicks the button, the handleClick function triggers: inputField.text is assigned to labelText.text, the trackAction function is called, and the action is sent to our dataset.
That wraps up the first part of this tutorial! We’ve covered the basics of adding the Personalization SDK to a mobile iOS application, creating a Mobile App within Personalization, and doing a very basic action tracking in a view. In Part 2, we’ll dive into tracking more complex actions like View Item and View Item Detail, which are part of the catalog object actions for tracking items.
Isn’t SFO an airport? It is the airport one would travel through if the destination were Oracle’s Redwood Shores campus. Widely known as the initialism for San Francisco International Airport, the answer would be correct if this question were posed in that context. In Oracle Fusion, however, SFO stands for Supply Chain Financial Orchestration. Based on what it does, we cannot call it an airport, but it sure is a control tower for financial transactions.
As companies are expanding their presence across countries and continents through mergers and acquisitions or natural growth, it becomes inevitable for the companies to transact across the borders and produce intercompany financial transactions.
Supply Chain Financial Orchestration (SFO) is where Oracle Fusion handles those transactions. The material may move one way, but for legal or financial reasons the financial flow could follow a different path.
A Typical Scenario
A Germany-based company sells to its EU customers from its Berlin office, but ships from its warehouses in New Delhi and Beijing.
Oracle Fusion SFO takes care of all those transactions: as they are processed in Cost Management, financial trade transactions are created, and corporations can see their internal margins, intercompany accounting, and intercompany invoices.
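For illustration (the numbers are hypothetical): say the Berlin entity sells an item to an EU customer for €120, the goods ship from the New Delhi warehouse at a cost of €70, and the agreed transfer price is €100. SFO creates the financial trade transactions behind the physical shipment: New Delhi invoices Berlin €100 and records a €30 internal margin, while Berlin recognizes €120 of customer revenue against €100 of intercompany cost for a €20 margin. That is exactly the internal-margin, intercompany-accounting, and intercompany-invoice visibility described above.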
Oh wait, the financial orchestration doesn’t have to be across countries only. What if a corporation wants to measure its manufacturing and sales operations profitability? Supply Chain Financial Orchestration is there for you.
In short, SFO is a tool that is part of the Supply Chain management offering that helps create intercompany trade transactions for various business cases.
Contact Mehmet Erisen at Perficient for more introspection of this functionality, and how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.
www.oracle.com
www.perficient.com
Perficient replaced disparate and outdated legacy systems with Oracle Fusion Cloud Manufacturing at Roeslein & Associates, a well-established manufacturing company. We implemented a scalable Fusion solution, including Project Driven Supply Chain (PDSC) and the full Financial and Supply Chain Management suites, to enable Roeslein to execute and extend its business processes globally.
The challenge in manufacturing was to set up standard manufacturing processes that could fulfill the highly customized demand originating from their customers. In addition, Perficient designed a Supply Chain Data Architecture to support the functionality of the solution.
Contact Mehmet Erisen at Perficient for more introspection of this phenomenal achievement. Congratulations to Roeslein & Associates and their entire staff!
TLS certificate lifetimes are being significantly reduced over the next few years as part of an industry-wide push toward greater security and automation. Here’s the phased timeline currently in place:
Now through March 15, 2026: Maximum lifetime is 398 days
Starting March 15, 2026: Reduced to 200 days
Starting March 15, 2027: Further reduced to 100 days
Starting March 15, 2029: Reduced again to just 47 days
For teams managing Sitecore implementations, this is more than a policy shift—it introduces operational urgency. As certificates begin expiring more frequently, any reliance on manual tracking or last-minute renewals could result in costly downtime or broken integrations.
If your Sitecore environment includes secure endpoints, custom domains, or external integrations, now is the time to assess your certificate strategy and move toward automation.
Sitecore projects often involve:
Multiple environments (development, staging, production) with different certificates
Custom domains or subdomains used for CDNs, APIs, headless apps, or marketing campaigns
Third-party integrations that require secure connections
Marketing and personalization features that rely on seamless uptime
A single expired certificate can lead to downtime, loss of customer trust, or failed integrations—any of which could severely impact your digital experience delivery.
Increased risk of missed renewals if teams rely on manual tracking
Broken environments due to expired certs in Azure, IIS, or Kubernetes configurations
Delayed deployments when certificates must be re-issued last minute
SEO and trust damage if browsers start flagging your site as insecure
To stay ahead of the TLS certificate lifecycle changes, here are concrete steps you should take:
Audit all environments and domains using certificates
Include internal services, custom endpoints, and non-production domains
Use a centralized tracking tool (e.g., Azure Key Vault, HashiCorp Vault, or a certificate management platform)
Wherever possible, switch to automated certificate issuance and renewal
Use services like:
Azure App Service Managed Certificates
Let’s Encrypt with automation scripts
ACME protocol integrations for Kubernetes
For Azure-hosted Sitecore instances, leverage Key Vault and App Gateway integrations
Assign clear ownership of certificate management per environment or domain
Document who is responsible for renewals and updates
Add certificate health checks to your DevOps dashboards
Validate certificate validity before deployments
Fail builds if certificates are nearing expiration (see the sketch after this list)
Include certificate management tasks as part of environment provisioning
Hold knowledge-sharing sessions with developers, infrastructure engineers, and marketers
Make sure everyone understands the impact of expired certificates on the Sitecore experience
Simulate certificate expiry in non-production environments
Monitor behavior in Sitecore XP and XM environments, including CD and CM roles
Validate external systems (e.g., CDNs, integrations, identity providers) against cert failures
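As a concrete starting point for the build-time check mentioned above, here is a minimal Node.js sketch; the host names and the 30-day threshold are assumptions. It inspects each endpoint's certificate and exits non-zero, failing the build, when a certificate is close to expiry:

// check-certs.js: fail the pipeline if any TLS certificate is near expiry
const tls = require('tls');

const HOSTS = ['www.example-sitecore-site.com', 'cm.example-sitecore-site.com'];
const THRESHOLD_DAYS = 30;

function checkCertificate(host) {
  return new Promise((resolve, reject) => {
    const socket = tls.connect(443, host, { servername: host }, () => {
      const cert = socket.getPeerCertificate();
      const daysLeft = (new Date(cert.valid_to) - Date.now()) / 86400000; // ms per day
      socket.end();
      if (daysLeft < THRESHOLD_DAYS) {
        reject(new Error(`${host}: certificate expires in ${Math.floor(daysLeft)} days`));
      } else {
        resolve(`${host}: ${Math.floor(daysLeft)} days remaining`);
      }
    });
    socket.on('error', reject);
  });
}

Promise.all(HOSTS.map(checkCertificate))
  .then((results) => results.forEach((line) => console.log(line)))
  .catch((err) => {
    console.error(err.message);
    process.exit(1); // a non-zero exit code fails the CI build
  });

With the 47-day maximum coming in 2029, a check like this keeps a soon-to-expire certificate from reaching production unnoticed.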
TLS certificate management is no longer a “set it and forget it” task. With shorter lifetimes becoming the norm, proactive planning is essential to avoid downtime and ensure secure, uninterrupted experiences for your users.
Start by auditing your current certificates and work toward automating renewals. Make certificate monitoring part of your DevOps practice, and ensure your Sitecore teams are aware of the upcoming changes.
Action Items for This Week:
Identify all TLS certificates in your Sitecore environments
Document renewal dates and responsible owners
Begin automating renewals for at least one domain
Review Azure and Sitecore documentation for certificate integration options
Before mastering JavaScript, it’s crucial to understand how it thinks behind the scenes. Concepts like scope, hoisting, and the temporal dead zone might sound tricky at first, but they form the backbone of how your code behaves.
In this blog, we’ll break down these key ideas in the simplest way possible—so you can write cleaner code, avoid unexpected bugs, and truly understand what’s happening when your script runs.
Think of scope like a boundary or container that controls where you can use a variable in your code.
In JavaScript, a variable is only available in the part of the code where it was created. If you try to use it outside that area, it won’t work—that’s because of scope.
This helps in two big ways:
JavaScript mainly uses two types of scope:
1. Global Scope – Available everywhere in your code.
2. Local Scope – Available only inside a specific function or block.
Global Scope
When you start writing JavaScript code, you’re already in the global scope—this is like the outermost area of your code where variables can live.
If you create a variable outside of any function or block, it’s considered global, which means it can be used anywhere in your code.
var a = 5; // Global variable

function add() {
  return a + 10; // Using the global variable inside a function
}

console.log(window.a); // 5
In this example, a is declared outside of any function, so it’s globally available—even inside add().
A quick note:
let name = "xyz"; function changeName() { name = "abc"; // Changing the value of the global variable } changeName(); console.log(name); // abc
In this example, we didn’t create a new variable—we just changed the value of the existing one.
Important:
If you redeclare a global variable inside a function (using let, const, or var again), JavaScript treats it as a new variable in a new scope—not the same one. We’ll cover that in more detail later.
Local Scope
In JavaScript, local scope means a variable is only accessible in a certain part of the code—usually inside a function or a block.
There are two types of local scope:
1. Functional Scope
Whenever you create a function, it creates its own private area for variables. If you declare a variable inside a function, it only exists inside that function.
let firstName = "Shilpa"; // Global function changeName() { let lastName = "Syal"; // Local to this function console.log (`${firstName} ${lastName}`); } changeName(); console.log (lastName); //Error! Not available outside the function
You can even use the same variable name in different functions without any issue:
function mathMarks() {
  let marks = 80;
  console.log(marks);
}

function englishMarks() {
  let marks = 85;
  console.log(marks);
}
Here, both marks variables are separate because they live in different function scopes.
2. Block Scope
Thanks to let and const, you can now create variables that only exist inside a block (like an if, for, or {}).
function getMarks() {
  let marks = 60;
  if (marks > 50) {
    const points = 10;
    console.log(marks + points); // Works here
  }
  console.log(points); // Uncaught ReferenceError: points is not defined
}
Since the points variable is declared inside the if block using the const keyword, it is not accessible outside that block, as shown above. Now try the above example with the var keyword, i.e., declare the points variable with var, and spot the difference; a sketch follows below.
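Here is a sketch of the same function using var; the function name is just for illustration. Because var is function-scoped rather than block-scoped, points stays visible after the if block ends:

function getMarksWithVar() {
  let marks = 60;
  if (marks > 50) {
    var points = 10; // hoisted to the top of the function, not the block
  }
  console.log(marks + points); // 70: no ReferenceError this time
}

getMarksWithVar();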
LEXICAL SCOPING & NESTED SCOPE:
When you create a function (outer function) that contains another function (inner function), then the inner function has access to the outer function’s variables and methods. This is known as Lexical Scoping.
function outerFunction() {
  let outerVar = "I'm outside";

  function innerFunction() {
    console.log(outerVar); // Can access outerVar
  }

  innerFunction();
}
In other terms, variables & methods defined in parent function are automatically available to its child functions. But it doesn’t work the other way around—the outer function can’t access the inner function’s variables.
VARIABLE SCOPE OR VARIABLE SHADOWING:
You can declare variables with the same name in different scopes. If there’s a variable in the global scope and you create a variable with the same name in a function, you will not get any error. In this case, the local variable takes priority over the global variable. This is known as variable shadowing, as the inner-scope variable temporarily shadows the outer-scope variable with the same name.
If the local variable and the global variable have the same name, changing the value of one does not affect the value of the other.
let name = "xyz" function getName() { let name = "abc" // Redeclaring the name variable console.log (name) ; //abc } getName(); console.log (name) ; //xyz
To access a variable, the JS engine first looks in the scope that is currently executing; if it doesn’t find the variable there, it looks at the closest parent scope, and that lookup continues up the chain until the engine reaches the global scope. If the global scope doesn’t have the variable either, a reference error is thrown, as the variable doesn’t exist anywhere in the scope chain.
let bonus = 500;

function getSalary() {
  if (true) {
    return 10000 + bonus; // Looks up and finds bonus in the outer scope
  }
}

console.log(getSalary()); // 10500
Key Takeaways: Scoping Made Simple
Global Scope: Variables declared outside any function are global and can be used anywhere in your code.
Local Scope: Variables declared inside a function exist only inside that function and disappear once the function finishes.
Global Variables Last Longer: They stay alive as long as your program is running.
Local Variables Are Temporary: They’re created when the function runs and removed once it ends.
Lexical Scope: Inner functions can access variables from outer functions, but not the other way around.
Block Scope with let and const: You can create variables that exist only inside {} blocks like if, for, etc.
Same Name, No Clash: Variables with the same name in different scopes won’t affect each other—they live in separate “worlds.”
To understand Hoisting in JS, it’s essential to know how execution context works. Execution context is an environment where JavaScript code is executed.
It has two main phases:
1. Creation Phase: During this phase, JS allocates memory for (i.e., hoists) variables, functions, and objects. Basically, hoisting happens here.
2. Execution Phase: During this phase, code is executed line by line.
When JS code runs, JavaScript hoists all the variables and functions, i.e., assigns memory space for them during the creation phase. var variables are given the special value undefined, while function declarations are hoisted with their entire definition.
Here are the key takeaways from hoisting; let’s explore some examples to illustrate how it works in different scenarios:
1. Function declarations are hoisted with their entire body, so a function can be called before it is defined:

foo(); // Output: "Hello, world!"

function foo() {
  console.log("Hello, world!");
}
2. Variables declared with var are hoisted and initialized with undefined:

console.log(x); // Output: undefined
var x = 5;
This code seems straightforward, but it’s interpreted as:
var x;
console.log(x); // Output: undefined
x = 5;
3. let, const – Variables declared with let and const are hoisted in the local scope or script scope but stay in the TDZ. These variables enter the Temporal Dead Zone (TDZ) until their declaration is encountered. Accessing them in the TDZ results in a ReferenceError.
console.log(x); // Throws ReferenceError: Cannot access 'x' before initialization
let x = 5;
In JavaScript, all variable declarations—whether made using var, let, or const—are hoisted, meaning the memory for them is set aside during the compilation phase, before the code actually runs. However, the behaviour of hoisting differs based on how the variable is declared.
For variables declared with let and const, although they are hoisted, they are not initialized immediately like var variables. Instead, they remain in an uninitialized state and are placed in a separate memory space. During this phase, any attempt to access them will result in a Reference Error.
This period—from the start of the block until the variable is initialized—is known as the Temporal Dead Zone (TDZ). It’s called a “dead zone” because the variable exists in memory but cannot be accessed until it has been explicitly declared and assigned a value in the code.
console.log(b); // undefined: var is hoisted and initialized with undefined
console.log(a); // ReferenceError: Cannot access 'a' before initialization (TDZ)

var b = 100;
let a = 10;
Important: The Temporal Dead Zone helps prevent the use of variables before they are properly declared and initialized, making code more predictable and reducing bugs.
Conclusion
JavaScript hoisting and scoping are foundational concepts that can feel tricky at first, but once you understand them, they make your code more structured and predictable. Hoisting helps explain why some variables and functions work even before they’re declared, while scoping defines where your variables live and how accessible they are. By keeping these concepts in mind and practicing regularly, you’ll be able to write cleaner, more reliable JavaScript. The more you experiment with them, the more confident you’ll become as a developer. Keep learning, keep building, and everything will start to click. Happy coding!