Back-End Development Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/innovation-product-development/development/back-end-development/

Microservices: The Emerging Complexity Driven by Trends and Alternatives to Over‑Design
https://blogs.perficient.com/2025/12/31/microservices-the-emerging-complexity-driven-by-trends-and-alternatives-to-over-design/ (December 31, 2025)

The adoption of microservice‑based architectures has grown exponentially over the past decade, often driven more by industry trends than by a careful evaluation of system requirements. This phenomenon has generated unnecessarily complex implementations—like using a bazooka to kill an ant. Distributed architectures without solid foundations in domain capabilities, workloads, operational independence, or real scalability needs have become a common pattern in the software industry. In many cases, organizations migrate without having a mature discipline in observability, traceability, automation, domain‑driven design, or an operational model capable of supporting highly distributed systems; as a consequence, they end up with distributed monoliths that require coordinated deployments and suffer cascading failures, losing the benefits originally promised by microservices (Iyer, 2025; Fröller, 2025).

Over‑Design

The primary issue in microservices is not rooted in their architectural essence, but in the over‑design that emerges when attempting to implement such architecture without having a clear roadmap of the application’s domains or of the contextual boundaries imposed by business rules. The decomposition produces highly granular, entity‑oriented services that often result in circular dependencies, duplicated business logic, excessive events without meaningful semantics, and distributed flows that are difficult to debug. Instead of achieving autonomy and independent scalability, organizations create a distributed monolith with operational complexity multiplied by the number of deployed services. A practical criterion to avoid this outcome is to postpone decomposition until stable boundaries and non‑functional requirements are fully understood, even adopting a monolith‑first approach before splitting (Fowler, 2015; Danielyan, 2025).

Minimal API and Modular Monolith as Alternatives to Reduce Complexity

In these scenarios, it is essential to explore alternatives that allow companies to design simpler microservices without sacrificing architectural clarity or separation of concerns. One such alternative is the use of Minimal APIs to reduce complexity in the presentation layer: this approach removes ceremony (controllers, conventions, annotations) and accelerates startup while reducing container footprint. It is especially useful for utility services, CRUD operations, and limited API surfaces (Anderson & Dykstra, 2024; Chauhan, 2024; Nag, 2025).

Another effective alternative is the Modular Monolith. A well‑modularized monolith enables isolating functional domains within internal modules that have clear boundaries and controlled interaction rules, simplifying deployment, reducing internal latency, and avoiding the explosion of operational complexity. Additionally, it facilitates a gradual migration toward microservices only when objective reasons exist (differentiated scaling needs, dedicated teams, different paces of domain evolution) (Bächler, 2025; Bauer, n.d.).
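
To make “clear boundaries and controlled interaction rules” concrete, here is a minimal sketch of one way to express a module boundary inside a single deployable, using plain Java packages and a narrow public facade; the package and type names are illustrative assumptions, not part of any particular framework.

// Illustrative module boundary inside a modular monolith: only the facade is public,
// so other modules (e.g., ordering) cannot reach the billing internals directly.
package billing;

// Public contract that other modules are allowed to call.
public interface BillingFacade {
    String createInvoice(String orderId);
}

// Package-private implementation: invisible outside the billing package, so internal
// changes never leak across the module boundary.
class DefaultBillingFacade implements BillingFacade {
    @Override
    public String createInvoice(String orderId) {
        // Domain logic stays inside the module; only the returned invoice id crosses it.
        return "INV-" + orderId;
    }
}

Stronger enforcement can be layered on later (for example with JPMS modules or architecture tests), but the core idea is the same: modules interact only through explicit, narrow contracts, which keeps a later extraction into services feasible.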

Improving the API Gateway and the Use of Event‑Driven Architectures (EDA)

The API Gateway is another critical component for managing external complexity: it centralizes security policies, versioning, rate limiting, and response transformation/aggregation, hiding internal topology and reducing client cognitive load. Patterns such as Backend‑for‑Frontend (BFF) and aggregation help decrease network trips and prevent each public service from duplicating cross‑cutting concerns (Microsoft, n.d.-b; AST Consulting, 2025).

A key principle for reducing complexity is to avoid decomposition by entities and instead guide service boundaries using business capabilities and bounded contexts. Domain‑Driven Design (DDD) provides a methodological compass to define coherent semantic boundaries; mapping bounded contexts to services (not necessarily in a 1:1 manner) reduces implicit coupling, prevents domain model ambiguity, and clarifies service responsibilities (Microsoft, n.d.-a; Polishchuk, 2025).

Finally, the use of Event‑Driven Architectures (EDA) should be applied judiciously. Although EDA enhances scalability and decoupling, poor implementation significantly increases debugging effort, introduces hidden dependencies, and complicates traceability. Mitigating these risks requires discipline in event design/versioning, the outbox pattern, idempotency, and robust telemetry (correlation IDs, DLQs), in addition to evaluating when orchestration (Sagas) is more appropriate than choreography (Three Dots Labs, n.d.; Moukbel, 2025).
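
As a small illustration of that discipline, the sketch below shows an idempotent consumer that deduplicates on an event ID and carries a correlation ID into its logs. The event shape and the in-memory set are simplifying assumptions; in practice the processed-ID check would live in a durable store, and repeatedly failing events would be retried and then routed to a dead-letter queue.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of an idempotent event consumer; the event shape and the store are
// illustrative assumptions, not a specific broker's API.
public class PaymentEventConsumer {

    record PaymentEvent(String eventId, String correlationId, String orderId, long amountCents) {}

    // Stand-in for a durable store of processed event IDs (e.g., a database table).
    private final Set<String> processedEventIds = ConcurrentHashMap.newKeySet();

    public void onEvent(PaymentEvent event) {
        // Idempotency guard: redeliveries and replays are acknowledged but not reprocessed.
        if (!processedEventIds.add(event.eventId())) {
            return;
        }
        // Propagate the correlation ID into logs/telemetry so the distributed flow stays traceable.
        System.out.printf("correlationId=%s processing order %s%n",
                event.correlationId(), event.orderId());
        // ... business logic; after repeated failures the event would go to a DLQ.
    }
}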

Conclusion

The complexity associated with microservices arises not from the architecture itself, but from misguided adoption driven by trends. The key to reducing this complexity is prioritizing cohesion, clarity, and gradual evolution: Minimal APIs for small services, a Modular Monolith as a solid foundation, decomposition by real business capabilities and bounded contexts, a well‑defined gateway, and a responsible approach to events. Under these principles, microservices stop being a trend and become an architectural mechanism that delivers real value (Fowler, 2015; Anderson & Dykstra, 2024).

References

  • Anderson, R., & Dykstra, T. (2024, July 29). Tutorial: Create a Minimal API with ASP.NET Core. Microsoft Learn. https://learn.microsoft.com/en-us/aspnet/core/tutorials/min-web-api?view=aspnetcore-10.0
  • AST Consulting. (2025, June 12). API Gateway in Microservices: Top 5 Patterns and Best Practices Guide. https://astconsulting.in/microservices/api-gateway-in-microservices-patterns
  • Bächler, S. (2025, January 23). Modular Monolith: The Better Alternative to Microservices. ti&m. https://www.ti8m.com/en/blog/monolith
  • Bauer, R. A. (n.d.). On Modular Monoliths. https://www.raphaelbauer.com/posts/on-modular-monoliths/
  • Chauhan, P. (2024, September 30). Deep Dive into Minimal APIs in ASP.NET Core 8. https://www.prafulchauhan.com/blogs/deep-dive-into-minimal-apis-in-asp-net-core-8
  • Danielyan, M. (2025, February 4). When to Choose Monolith Over Microservices. https://mikadanielyan.com/blog/when-to-choose-monolith-over-microservices
  • Fowler, M. (2015, June 3). Monolith First. https://martinfowler.com/bliki/MonolithFirst.html
  • Fröller, J. (2025, October 30). Many Microservice Architectures Are Just Distributed Monoliths. MerginIT Blog. https://merginit.com/blog/31102025-microservices-antipattern-distributed-monolit
  • Iyer, A. (2025, June 3). Why 90% of Microservices Still Ship Like Monoliths. The New Stack. https://thenewstack.io/why-90-of-microservices-still-ship-like-monoliths/
  • Microsoft. (n.d.-a). Domain analysis for microservices. Azure Architecture Center. https://learn.microsoft.com/en-us/azure/architecture/microservices/model/domain-analysis
  • Microsoft. (n.d.-b). API gateways. Azure Architecture Center. https://learn.microsoft.com/en-us/azure/architecture/microservices/design/gateway
  • Moukbel, T. (2025). Event-Driven Architecture: Pitfalls and Best Practices. Undercode Testing. https://undercodetesting.com/event-driven-architecture-pitfalls-and-best-practices/
  • Nag, A. (2025, July 29). Why Minimal APIs in .NET 8 Are Perfect for Microservices Architecture? embarkingonvoyage.com. https://embarkingonvoyage.com/blog/technologies/why-minimal-apis-in-net-8-are-perfect-for-microservices-architecture/
  • Polishchuk. (2025, December 12). Design Microservices: Using DDD Bounded Contexts. bool.dev. https://bool.dev/blog/detail/ddd-bounded-contexts
  • Three Dots Labs. (n.d.). Event-Driven Architecture: The Hard Parts. https://threedots.tech/episode/event-driven-architecture/
Bulgaria’s 2026 Euro Adoption: What the End of the Lev Means for Markets
https://blogs.perficient.com/2025/12/22/bulgarias-2026-euro-adoption-what-the-end-of-the-lev-means-for-markets/ (December 22, 2025)

Moments of currency change are where fortunes are made and lost. In January 2026, Bulgaria will enter one of those moments. The country will adopt the euro and officially retire the Bulgarian lev, marking a major euro adoption milestone and reshaping how investors, banks, and global firms manage currency risk in the region. The shift represents one of the most significant macroeconomic transitions in Bulgaria’s modern history and is already drawing attention across FX markets.

To understand how dramatically foreign exchange movements can shift value, consider one of the most famous examples in modern financial history. In September 1992, investor George Soros bet against the British pound, anticipating that the UK’s exchange rate policy would collapse. The resulting exchange rate crisis, now known as Black Wednesday, became a defining moment in forex trading and demonstrated how quickly policy decisions can trigger massive market dislocations.

By selling roughly $10 billion worth of pounds, his Quantum Fund earned ~$1 billion in profit when the currency was forced to devalue. The trade earned Soros the nickname “the man who broke the Bank of England” and remains a lasting example of how quickly confidence and capital flows can move entire currency systems.

GBP/USD exchange rate from May 1992 to April 1993, highlighting the dramatic plunge on Black Wednesday, when George Soros famously shorted the pound, forcing the UK out of the ERM and triggering one of the most significant currency crises in modern history.

To be clear, Bulgaria is not in crisis. The Soros example simply underscores how consequential currency decisions can be. Even when they unfold calmly and by design, currency transitions reshape the texture of daily life. The significance of Bulgaria’s transition becomes clearer when you consider what the lev has long represented. Safety. Families relied on it through political uncertainty and economic swings, saved it for holidays, passed it down during milestones, and trusted it in moments when little else felt predictable. Over time, the lev became a source of stability as Bulgaria navigated decades of change and gradually aligned itself with the European Union.

Its retirement feels both symbolic and historic. But for global markets, currency traders, banks, and companies engaged in cross border business, the transition is not just symbolic. It introduces real operational changes that require early attention. This article explains what is happening, why it matters, and how organizations can prepare.

Some quick facts help frame the scale of this shift.

Map of Bulgaria

Bulgaria has a population of roughly 6.5 million.

The country’s GDP is about 90 billion U.S. dollars (World Bank, 2024)

Its largest trade partners are EU member states, Turkey, and China.

Why Bulgaria Is Adopting the Euro

Although the move from the Lev to the Euro is monumental, many Bulgarians also see it as a natural progression. When Bulgaria joined the European Union in 2007, Euro adoption was always part of the long-term plan. Adopting the Euro gives Bulgaria a stronger foundation for investment, more predictable trade relationships, and smoother participation in Europe’s financial systems. It is the natural next step in a journey the country has been moving toward slowly, intentionally, and with growing confidence. That measured approach fostered public and institutional trust, leading European authorities to approve Bulgaria’s entry into the Eurozone on January 1, 2026 (European Commission, 2023; European Central Bank, 2023).

How Euro Adoption Affects Currency Markets

Bulgaria’s economy includes manufacturing, agriculture, energy, and service sectors. Its exports include refined petroleum, machinery, copper products, and apparel. It imports machinery, fuels, vehicles, and pharmaceuticals (OECD, 2024). The Euro supports smoother trade relationships within these sectors and reduces barriers for European partners.

Once Bulgaria switches to the Euro, the Lev will quietly disappear from global currency screens. Traders will no longer see familiar pairs like USD to BGN or GBP to BGN. Anything involving Bulgaria will now flow through euro-based pairs instead. In practical terms, the Lev simply stops being part of the conversation.

For people working on trading desks or in treasury teams, this creates a shift in how risk is measured day to day. Hedging strategies built around the Lev will transition to euro-based approaches. Models that once accounted for Lev-specific volatility will have to be rewritten. Automated trading programs that reference BGN pricing will need to be updated or retired. Even the market data providers that feed information into these systems will phase out Lev pricing entirely.

And while Bulgaria may be a smaller player in the global economy, the retirement of a national currency is never insignificant. It ripples through the internal workings of trading floors, risk management teams, and the systems that support them. It is a reminder that even quiet changes in one part of the world can require thoughtful adjustments across the financial landscape.

Combined with industry-standard year-end code freezes, this is why Perficient has seen, and helped, clients wind down their Lev trading weeks before year-end.

The Infrastructure Work Behind Adopting the Euro

Adopting the Euro is not just a change people feel sentimental about. Behind the scenes, it touches almost every system that moves money. Every financial institution uses internal currency tables to keep track of existing currencies, conversion rules, and payment routing. When a currency is retired, every system that touches money must be updated to reflect the change.

This includes:

  • Core banking and treasury platforms
  • Trading systems
  • Accounting and ERP software
  • Payment networks, including SWIFT and ISO 20022
  • Internal data warehouses and regulatory reporting systems

Why Global Firms Should Pay Attention

If the Lev remains active anywhere after the transition, payments can fail, transactions can be misrouted, and reconciliation issues can occur. The Bank for International Settlements notes that currency changes require “significant operational coordination,” because risk moves across systems faster than many institutions expect. 

Beyond the technical updates, the disappearance of the Lev also carries strategic implications for multinational firms. Any organization that operates across borders, whether through supply chains, treasury centers, or shared service hubs, relies on consistent currency identifiers to keep financial data aligned. If even one system, vendor, or regional partner continues using the old code, firms can face cascading issues such as misaligned ledgers, failed hedging positions, delayed settlements, and compliance flags triggered by mismatched reporting. In a world where financial operations are deeply interconnected, a seemingly local currency change can ripple outward and affect global liquidity management and operational continuity.

Many firms have already started their transition work well in advance of the official date in order to minimize risk. In practice, this means reviewing currency tables, updating payment logic, testing cross-border workflows, and making sure SWIFT and ISO 20022 messages recognize the new structure. 

Trade Finance Will Feel the Change

For people working in finance, this shift will change the work they do every day. Tools like Letters of Credit and Banker’s Acceptances are the mechanisms that keep international trade moving, and they depend on accurate currency terms. If any of these agreements are written to settle in Lev, they will need to be updated before January 2026.

That means revising contracts, invoices, shipping documents, and long-term payment schedules. Preparing early gives exporters, importers, and the teams supporting them the chance to keep business running smoothly through the transition.

What Euro Adoption Means for Businesses

Switching to the Euro unlocks several practical benefits that go beyond finance departments.

  • Lower currency conversion costs
  • More consistent pricing for long-term agreements
  • Faster cross-border payments within the European Union
  • Improved financial reporting and reduced foreign exchange risk
  • Increased investor confidence in a more stable currency environment

Because so much of Bulgaria’s trade already occurs with Eurozone countries, using the Euro simplifies business operations and strengthens economic integration.

How Organizations Can Prepare

The most important steps for institutions include:

  1. Auditing systems and documents for references to BGN
  2. Updating currency tables and payment rules
  3. Revising Letters of Credit and other agreements that list the Lev
  4. Communicating the transition timeline to partners and clients
  5. Testing updated systems well before January 1, 2026

Early preparation ensures a smooth transition when Bulgaria officially adopts the Euro. Operationally, make sure you are prepared to accept Lev payments through December 31, 2025, and, given settlement timeframes, to reconcile and settle Lev transactions into 2026.
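
As a small illustration of what updating currency tables and payment logic implies downstream, the hedged sketch below converts a Lev amount to Euro at the fixed conversion rate of 1.95583 leva per euro (the long-standing peg). It is only a sketch: the rounding and dual-display rules your systems must apply should come from the official changeover guidance, not from this example.

import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch of a BGN -> EUR redenomination helper. Divide by the full, unrounded
// conversion rate, then round to the euro cent; confirm the exact rules against
// the official guidance before relying on this in production systems.
public final class LevToEuroConverter {

    private static final BigDecimal BGN_PER_EUR = new BigDecimal("1.95583");

    public static BigDecimal toEuro(BigDecimal amountInLev) {
        return amountInLev.divide(BGN_PER_EUR, 2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(toEuro(new BigDecimal("100.00"))); // prints 51.13
    }
}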

Final Thoughts

The Bulgarian Lev has accompanied the country through a century of profound change. Its retirement marks the end of an era and the beginning of a new chapter in Bulgaria’s economic story. For the global financial community, Bulgaria’s adoption of the Euro is not only symbolic but operationally significant.

Handled thoughtfully, the transition strengthens financial infrastructure, reduces friction in global business, and supports a more unified European economy.

References 

Bank for International Settlements. (2024). Foreign exchange market developments and global liquidity trends. https://www.bis.org

Eichengreen, B. (1993). European monetary unification. Journal of Economic Literature, 31(3), 1321–1357.

European Central Bank. (2023). Convergence report. https://www.ecb.europa.eu

European Commission. (2023). Economic and monetary union: Euro adoption process. https://ec.europa.eu

Henriques, D. B. (2011). The billionaire was not always so bold. The New York Times.

Organisation for Economic Co-operation and Development. (2024). Economic surveys: Bulgaria. https://www.oecd.org

World Bank. (2024). Bulgaria: Country data and economic indicators. https://data.worldbank.org/country/bulgaria

 

Is PHP Dead? A 2025 Reality Check
https://blogs.perficient.com/2025/12/08/is-php-dead-a-2025-reality-check/ (December 8, 2025)

For years, developers have debated whether PHP is on its way out. With newer languages gaining traction, the question persists: Is PHP dead? The reality is more complex. PHP remains a cornerstone of web development, but its role has shifted as competitors emerge.

PHP by the Numbers

  • 74% of websites with a known server-side language still run on PHP as of July 2025.
  • WordPress, Drupal, Magento, and Facebook continue to rely heavily on PHP.
  • Packagist, the PHP package repository, now hosts over 400,000 packages, showing strong community engagement.

These statistics alone prove that PHP remains a cornerstone of web development.

Why PHP Is Still Relevant

  • Continuous Updates: PHP 8.4 was recently released, introducing new language features such as property hooks and asymmetric visibility, along with performance improvements.
  • Framework Ecosystem: Popular frameworks like Laravel and Symfony keep PHP modern, offering elegant syntax and robust tooling.
  • CMS Dominance: WordPress, which powers over 40% of all websites, is built on PHP. As long as WordPress thrives, PHP will remain indispensable.
  • Adaptability: PHP has shown resilience by evolving with trends such as cloud-native development, AI integration, and microservices.

The Competition Factor

  • It’s true that JavaScript (Node.js), Python, and Go have gained traction for modern web apps. They often appeal to startups and developers seeking cutting-edge solutions.
  • However, PHP’s low barrier to entry, massive ecosystem, and proven scalability make it hard to replace entirely.

How the main competitors compare with PHP:

  • Python is excellent for AI, data science, and web frameworks like Django/Flask, but versus PHP it is less dominant in CMS/e-commerce and has a smaller hosting ecosystem.
  • Node.js (JavaScript) offers non-blocking I/O and is great for real-time apps, but it requires more setup and has fewer turnkey CMS options.
  • Ruby on Rails has elegant syntax and enables rapid prototyping, but its popularity is declining and its community is smaller than PHP's.
  • Java provides enterprise-grade scalability, but it brings higher complexity and slower development cycles for small projects.

The Future of PHP

Looking ahead, PHP is expected to:

  • Embrace async programming for better scalability.
  • Integrate more seamlessly with AI-driven applications.
  • Continue powering enterprise-level CMS and e-commerce platforms.

Rather than dying, PHP is quietly evolving to meet the demands of modern web development.

Conclusion

PHP is not dead—it’s alive, evolving, and still dominant. While newer languages may capture the spotlight, PHP’s widespread adoption, active community, and adaptability ensure it remains a vital part of the web’s backbone.

So, the next time someone asks “Is PHP dead?”, the answer is simple: No, it’s still kicking—and powering most of the internet.

Creators in Coding, Copycats in Class: The Double-Edged Sword of Artificial Intelligence
https://blogs.perficient.com/2025/12/03/creators-in-coding-copycats-in-class-the-double-edged-sword-of-artificial-intelligence/ (December 3, 2025)

“Powerful technologies require equally powerful ethical guidance.” (Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014).

The ethics of using artificial intelligence depend on how we apply its capabilities—either to enhance learning or to prevent irresponsible practices that may compromise academic integrity. In this blog, I share reflections, experiences, and insights about the impact of AI in our environment, analyzing its role as a creative tool in the hands of developers and as a challenge within the academic context.

Between industry and the classroom

As a Senior Developer, my professional trajectory has led me to delve deeply into the fascinating discipline of software architecture. Currently, I work as a Backend Developer specializing in Microsoft technologies, facing daily the challenges of building robust, scalable, and well-structured systems in the business world.

Alongside my role in the industry, I am privileged to serve as a university professor, teaching four courses. Three of them are fundamental parts of the software development lifecycle: Software Analysis and Design, Software Architecture, and Programming Techniques. This dual perspective—as both a professional and a teacher—has allowed me to observe the rapid changes that technology is generating both in daily development practice and in the formation of future engineers.

Exploring AI as an Accelerator in Software Development

One of the greatest challenges for those studying the software development lifecycle is transforming ideas and diagrams into functional, well-structured projects. I always encourage my students to use Artificial Intelligence as a tool for acceleration, not as a substitute.

For example, in the Software Analysis and Design course, we demonstrate how a BPMN 2.0 process diagram can serve as a starting point for modeling a system. We also work with class diagrams that reflect compositions and various design patterns. AI can intervene in this process in several ways:

  • Code Generation from Models: With AI-based tools, it’s possible to automatically turn a well-built class diagram into the source code foundation needed to start a project, respecting the relationships and patterns defined during modeling.
  • Rapid Project Architecture Setup: Using AI assistants, we can streamline the initial setup of a project by selecting the technology stack, creating folder structures, base files, and configurations according to best practices.
  • Early Validation and Correction: AI can suggest improvements to proposed models, detect inconsistencies, foresee integration issues, and help adapt the design context even before coding begins.

This approach allows students to dedicate more time to understanding the logic behind each component and design principle, instead of spending hours on repetitive setup and basic coding tasks. The conscious and critical use of artificial intelligence strengthens their learning, provides them with more time to innovate, and helps prepare them for real-world industry challenges.

But Not Everything Is Perfect: The Challenges in Programming Techniques

However, not everything is as positive as it seems. In “Programming Techniques,” a course that represents students’ first real contact with application development, the impact of AI is different compared to more advanced subjects. In the past, the repetitive process of writing code by hand—creating a simple constructor public Person(), a method public void printFullName(), or practicing encapsulation in Java with methods like public void setName(String name) and public String getName()—kept the fundamental programming concepts fresh and clear while coding.
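
For readers outside the classroom, this is the kind of small, hand-written class being described; nothing in it is novel, which is exactly the point:

// A typical first-year exercise: constructor, encapsulated fields, getters/setters,
// and a simple behavior method.
public class Person {
    private String name;
    private String lastName;

    public Person() {
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public void printFullName() {
        System.out.println(name + " " + lastName);
    }
}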

This repetition was not just mechanical; it reinforced their understanding of concepts like object construction, data encapsulation, and procedural logic. It also played a crucial role in developing a solid foundation that made it easier to understand more complex topics, such as design patterns, in future courses.

Nowadays, with the widespread availability and use of AI-based tools and code generators, students tend to skip these fundamental steps. Instead of internalizing these concepts through practice, they quickly generate code snippets without fully understanding their structure or purpose. As a result, the pillars of programming—such as abstraction, encapsulation, inheritance, and polymorphism—are not deeply absorbed, which can lead to confusion and mistakes later on.

Although AI offers the promise of accelerating development and reducing manual labor, it is important to remember that certain repetition and manual coding are essential for establishing a solid understanding of fundamental principles. Without this foundation, it becomes difficult for students to recognize bad practices, avoid common errors, and truly appreciate the architecture and design of robust software systems.

Reflection and Ethical Challenges in Using AI

Recently, I explained the concept of reflection in microservices to my Software Architecture students. To illustrate this, I used the following example: when implementing the Abstract Factory design pattern within a microservices architecture, the Reflection technique can be used to dynamically instantiate concrete classes at runtime. This allows the factory to decide which object to create based on external parameters, such as a message type or specific configuration received from another service. I consider this concept fundamental if we aim to design an architecture suitable for business models that require this level of flexibility.
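
A minimal sketch of that idea follows, with hypothetical notification classes; the factory resolves a concrete class name from an external parameter and instantiates it reflectively. Note that a constructor mismatch in such code only surfaces at runtime, which is what made the classroom exercise described next so revealing.

// Sketch of an Abstract Factory that uses reflection to choose the concrete class at
// runtime from an external parameter (e.g., a message type from another service).
// The package and class names are illustrative assumptions.
public class ReflectiveNotificationFactory {

    public interface Notification {
        void send(String payload);
    }

    public Notification create(String messageType) throws ReflectiveOperationException {
        // Resolve e.g. "EMAIL" -> "com.example.notifications.EmailNotification".
        String className = "com.example.notifications."
                + messageType.charAt(0) + messageType.substring(1).toLowerCase()
                + "Notification";

        Class<?> clazz = Class.forName(className);
        // A constructor signature mismatch here fails at runtime, not at compile time.
        return (Notification) clazz.getDeclaredConstructor().newInstance();
    }
}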

However, during a classroom exercise where I provided a base code, I asked the students to correct an error that I had deliberately injected. The error consisted of an additional parameter in a constructor—a detail that did not cause compilation failures, but at runtime, it caused 2 out of 5 microservices that consumed the abstract factory via reflection to fail. From their perspective, this exercise may have seemed unnecessary, which led many to ask AI to fix the error.

As expected, the AI efficiently eliminated the error but overlooked a fundamental acceptance criterion: that parameter was necessary for the correct functioning of the solution. The task was not to remove the parameter but to add it in the Factory classes where it was missing. Out of 36 students, only 3 were able to explain and justify the changes they made. The rest did not even know what modifications the AI had implemented.

This experience highlights the double-edged nature of artificial intelligence in learning: it can provide quick solutions, but if the context or the criteria behind a problem are not understood, the correction can be superficial and jeopardize both the quality and the deep understanding of the code.

I haven’t limited this exercise to architecture examples alone. I have also conducted mock interviews, asking basic programming concepts. Surprisingly, even among final-year students who are already doing their internships, the success rate is alarmingly low: approximately 65% to 70% of the questions are answered incorrectly, which would automatically disqualify them in a real technical interview.

Conclusion

Artificial intelligence has become increasingly integrated into academia, yet its use does not always reflect a genuine desire to learn. For many students, AI has turned into a tool for simply getting through academic commitments, rather than an ally that fosters knowledge, creativity, and critical thinking. This trend presents clear risks: a loss of deep understanding, unreflective automation of tasks, and a lack of internalization of fundamental concepts—all crucial for professional growth in technological fields.

Various authors have analyzed the impact of AI on educational processes and emphasize the importance of promoting its ethical and constructive use. As Luckin et al. (2016) suggest, the key lies in integrating artificial intelligence as support for skill development rather than as a shortcut to avoid intellectual effort. Similarly, Selwyn (2019) explores the ethical and pedagogical challenges that arise when technology becomes a quick fix instead of a resource for deep learning.

References:
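
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence Unleashed: An Argument for AI in Education. Pearson.

Selwyn, N. (2019). Should Robots Replace Teachers? AI and the Future of Education. Polity Press.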

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 1
https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/ (October 3, 2025)

Introduction:

Custom code in Talend offers a powerful way to make batch processing more efficient by allowing developers to implement specialized logic that is not available through Talend’s standard components. This can involve data transformations, custom code tailored to the use case, and integration with flat files according to specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Understand Batch Processing:

Batch processing is a method of processing high-volume, repetitive data within Talend jobs. It allows users to process large amounts of data when computing resources are available, with little or no user interaction.

Through batch processing, users gather and retain data, subsequently processing it during a designated period referred to as a “batch window.” This method enhances efficiency by establishing processing priorities and executing data tasks in a timeframe that is optimal.

Here, the Talend job takes the total row count from the source file, loads the data from the flat file, processes it in batches (with the batch size provided through a context variable), and then writes the data into smaller flat files. This implementation makes it possible to process enormous amounts of data more precisely and quickly than other implementations.

Batch processing is a method of executing a series of jobs sequentially without user interaction, typically used for handling large volumes of data efficiently. Talend, a prominent and extensively employed ETL (Extract, Transform, Load) tool, utilizes batch processing to facilitate the integration, transformation, and loading of data into data warehouses and various other target systems.

Talend Components:

Key components for batch processing as mention below:

  • tFileInputDelimited, tFileOutputDelimited: For reading & writing data from/to files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Used for data transformation; it maps input data to output data and supports data filtering, complex data manipulation, type casting, and joins across multiple input sources.
  • tJavaRow: Can be used as an intermediate component, giving access to the input flow so the data can be transformed with custom Java code.
  • tJava: Has no input or output data flow and can be used independently to integrate custom Java code.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.

Workflow with example:

To process a bulk of data in Talend, we can implement batch processing to handle flat file data efficiently with minimal execution time. We could read the flat file data and write it into chunks of target flat files without batch processing, but that data flow takes considerably longer to execute. With batch processing implemented through custom code, the job writes the entire source file into chunks of files at the target location in far less time.

Talend job design

Solution:

  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count, divide the result by the batch size, and round to the nearest whole number; this gives the total number of batches (chunks).

    Calculate the batch size from the total row count

  • Now use the tFileInputDelimited component to read the source file content. In the tMap component, utilize the Talend sequence function to generate row numbers for your data mapping and transformation tasks. Then load all of the data into the tHashOutput component, which stores the data in a cache.
  • Iterate the loop based on the calculated whole number using the tLoop component.
  • Retrieve all the data from the tHashInput component.
  • Filter the dataset retrieved from the tHashInput component based on the rowNo column in the schema using the tFilterRow component.

    Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range will be 1 to 100; for the third iteration it will be 201 to 300. In general, the start of the range is [(iteration - 1) * batch size] + 1 and the end is [iteration * batch size]; for iteration 3 that gives [(3 - 1) * 100] + 1 = 201 and [3 * 100] = 300, so the dataset range for the third iteration is 201 to 300 (see the Java sketch after this list).
  • Finally, extract the rows whose rowNo falls within that range and write them into a chunk of the output target file using the tFileOutputDelimited component.
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows for complex data transformation, and offers unique match, first match, and all matches options for looking up data within the tMap component.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
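
For reference, the batch math described in the list above can be expressed in a few lines of plain Java (the same expressions could sit in a tJava component or a tFilterRow condition). The row counts are illustrative, and this sketch rounds the chunk count up so a final partial batch is still written.

// Sketch of the batch math: chunk count from the total row count, then the rowNo
// range for each iteration. Variable names are illustrative, not Talend context variables.
public class BatchRangeDemo {
    public static void main(String[] args) {
        int totalRowCount = 1034;   // from tFileRowCount
        int headerRows = 1;
        int batchSize = 100;        // from a context variable

        int dataRows = totalRowCount - headerRows;
        // Rounded up here so the final partial chunk is still written.
        int totalChunks = (int) Math.ceil((double) dataRows / batchSize);

        for (int iteration = 1; iteration <= totalChunks; iteration++) {
            int startRow = (iteration - 1) * batchSize + 1;          // e.g. iteration 3 -> 201
            int endRow = Math.min(iteration * batchSize, dataRows);  // e.g. iteration 3 -> 300
            System.out.printf("Iteration %d: rowNo %d to %d%n", iteration, startRow, endRow);
        }
    }
}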

 

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With batch processing, the job can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 2

Top 5 Drupal AI Modules to Transform Your Workflow
https://blogs.perficient.com/2025/09/29/top-5-drupal-ai-modules-to-transform-your-workflow/ (September 29, 2025)

The AI Revolution is in Drupal CMS 

The way we create, optimize, and deliver content has fundamentally changed. Artificial Intelligence is no longer a futuristic concept; it’s a practical, indispensable tool for content teams. For years, Drupal has been the gold standard for structured, enterprise-level content management. Now, with the rapid maturation of the community’s Artificial Intelligence Initiative, Drupal is emerging as the premier platform for an Intelligent CMS. 

This post is for every content editor, site builder, and digital marketer who spends too much time on repetitive tasks like writing alt text, crafting meta descriptions, or translating copy. We’re moving the AI power from external tools directly into your Drupal admin screen. 

We will explore five essential Drupal modules that leverage AI to supercharge your content workflow, making your team faster, your content better, and your website more effective. This is about making Drupal work smarter, not just harder. 

The collective effort to bring this intelligence to Drupal is being driven by the community, and you can see the foundational work, including the overview of many related projects, right here at the Drupal Artificial Intelligence Initiative. 

 

  1. AI CKEditor Integration: The Content Co-Pilot

This functionality is typically provided by a suite of modules, with the core framework being the AI (Artificial Intelligence) module and its submodules like AI CKEditor. It integrates large language models (LLMs) like those from OpenAI or Anthropic directly into your content editor. 

Role in the CMS 

This module places an AI assistant directly inside the CKEditor 5 toolbar, the primary rich-text editor in Drupal. It turns the editor from a passive text field into an active, helpful partner. It knows the context of your page and is ready to assist without ever requiring you to leave the edit screen. 

How It’s Useful 

  • For Content Editors: It eliminates the dreaded “blank page syndrome.” Highlight a bulleted list and ask the AI to “turn this into a formal paragraph” or “expand this summary into a 500-word article.” You can instantly check spelling and grammar, adjust the tone of voice (e.g., from professional to friendly), and summarize long blocks of text for teasers or email excerpts. It means spending less time writing the first draft and more time editing and refining the final, human-approved version. 
  • For Site Builders: It reduces the need for editors to jump between Drupal and external AI tools, streamlining the entire content creation workflow and keeping your team focused within the secure environment of the CMS. 

 

  2. AI Image Alt Text: The SEO Automator

AI Image Alt Text is a specialized module that performs one critical task exceptionally well: using computer vision to describe images for accessibility and SEO. 

Role in the CMS 

This module hooks into the Drupal Media Library workflow. The moment an editor uploads a new image, the module sends that image to a Vision AI service (like Google Vision or an equivalent LLM) for analysis. The AI identifies objects, actions, and scenes, and then generates a descriptive text which is automatically populated into the image’s Alternative Text (Alt Text) field. 

How It’s Useful 

  • For Accessibility: Alt text is crucial for WCAG compliance. Screen readers use this text to describe images to visually impaired users. This module ensures that every image, regardless of how busy the editor is, has a meaningful description, making your site more inclusive right from the start. 
  • For SEO & Editors: Alt text is a ranking signal for search engines. It also saves the editor the most tedious part of their job. Instead of manually typing a description like “Woman sitting at a desk typing on a laptop with a cup of coffee,” the AI provides a high-quality, descriptive draft instantly, which the editor can quickly approve or slightly refine. It’s a huge time-saver and compliance booster. 

 

  3. AI Translation: The Multilingual Enabler

This feature is often a submodule within the main AI (Artificial Intelligence) framework, sometimes leveraging a dedicated integration like the AI Translate submodule, or integrating with the Translation Management Tool (TMGMT). 

Role in the CMS 

Drupal is one of the world’s most powerful platforms for building multilingual websites. This module builds upon that strength by injecting AI as a Translation Provider. Instead of waiting for a human translator for the first pass, this module allows content to be translated into dozens of languages with the click of a button. 

How It’s Useful 

  • For Global Content Teams: Imagine launching a product page simultaneously across five markets. This tool performs the initial, high-quality, machine-generated translation and saves it as a draft in the corresponding language node. The local editor then only needs to perform post-editing (reviewing and culturally adapting the text), which is significantly faster and cheaper than translating from scratch. 
  • For Site Owners: It drastically cuts the time-to-market for multilingual content and ensures translation consistency across technical terms. It leverages the AI’s power for speed while retaining the essential human oversight for cultural accuracy. 

 

  4. AI Automators: The Smart Curator

AI Automators (a powerful submodule of the main AI project) allows you to set up rules that automatically populate or modify fields based on content entered in other fields. 

Role in the CMS 

This is where the magic of “smart” content happens. An Automator is a background worker that monitors when a piece of content is saved. You can configure it to perform chained actions using an LLM. For instance, when an editor publishes a new blog post: 

  1. Read the content of the Body field. 
  2. Use a prompt to generate five relevant keywords/topics. 
  3. Automatically populate the Taxonomy/Tags field with those terms. 
  4. Use another prompt to generate a concise post for X (formerly Twitter). 
  5. Populate a new Social Media Post field with that text. 

How It’s Useful 

  • For Content Strategists: It enforces content standards and completeness. Every piece of content is automatically tagged and optimized, reducing the chance of human error and improving content discoverability through precise categorization. It ensures your SEO and content strategy is executed flawlessly on every save. 
  • For Site Builders: It brings the power of Event-Condition-Action (ECA) workflows into the AI space. It’s a no-code way to build complex, intelligent workflows that ensure data integrity and maximize the usefulness of content metadata. 

 

  5. AI Agents: The Operational Assistant

AI Agents, typically used in conjunction with the main AI framework, is a powerful new tool that uses natural language to execute administrative and site-building tasks. 

Role in the CMS

An AI Agent is like a virtual assistant for your Drupal back-end. Instead of navigating through multiple complex configuration forms to, say, create a new field on a content type, you simply tell the Agent what you want it to do in plain English. The Agent interprets your request, translates it into the necessary Drupal API calls, and executes the changes. The module comes with various built-in agents (like a Field Type Agent or a Content Type Agent). 

How It’s Useful 

  • For Site Builders and Non-Technical Admins: This is a revolutionary step toward conversational configuration. You can issue a command like: “Please create a new Content Type called ‘Product Review’ and add a new text field named ‘Reviewer Name’.” The agent handles the creation process instantly. This dramatically reduces the learning curve and time needed for common site-building tasks. 
  • For Automation: Agents can be chained together or triggered by other systems to perform complex, multi-step actions on the CMS structure itself. Need to update the taxonomy on 50 terms? A dedicated agent can handle the large-scale configuration change based on a high-level instruction, making system maintenance far more efficient. It turns administrative management into a conversation. 

 

Conclusion:

The integration of AI into Drupal is one of the most exciting developments in the platform’s history. It is a powerful affirmation of Drupal’s strength as a structured content hub. These modules (the AI CKEditor, AI Image Alt Text, AI Translation, AI Automators, and now the transformative AI Agents) are not here to replace your team. They are here to empower them.

By automating the mundane, repetitive, and technical aspects of content management and even site configuration, these tools free up your content creators and site builders to focus on what humans do best: strategy, creativity, and high-level decision-making. The future of content management in Drupal is intelligent, efficient, and, most importantly, human-powered. It’s time to equip your team with these new essentials and watch your digital experiences flourish. 

ChatGPT vs Microsoft Copilot: Solving Node & Sitecore Issues
https://blogs.perficient.com/2025/09/17/chatpgt-vs-microsoft-copilot/ (September 17, 2025)

In today’s world of AI-powered development tools, ChatGPT and Microsoft Copilot are often compared side by side. Both promise to make coding easier, debugging faster, and problem-solving more efficient. But when it comes to solving real-world enterprise issues, the difference in their effectiveness becomes clear.

Recently, I faced a practical challenge while working with Sitecore 10.2.0 and Sitecore SXA 11.3.0, which presented a perfect case study for comparing the two AI assistants.

The Context: Node.js & Sitecore Compatibility

I was troubleshooting an issue with Sitecore SXA where certain commands (npm run build, sxa r Main, and sxa w) weren’t behaving as expected. Initially, my environment was running on Node.js v14.17.1, but I upgraded to v20.12.2. After the upgrade, I started suspecting a compatibility issue between Node.js and Sitecore’s front-end build setup.

Naturally, I decided to put both Microsoft Copilot and ChatGPT to the test to see which one handled things better.

My Experience with Microsoft Copilot

When I first used Copilot, I gave it a very specific and clear prompt:

I am facing an issue with Sitecore SXA 11.3.0 on Sitecore 10.2.0 using Node.js v20.12.2. The gulp tasks are not running properly. Is this a compatibility issue and what should I do?

Copilot’s Response

  • Copilot generated a generic suggestion about checking the gulp configuration.
  • It repeated standard troubleshooting steps such as “try reinstalling dependencies,” “check your package.json,” and “make sure Node is installed correctly.”
  • Despite rephrasing the prompt multiple times, it failed to recognize the known compatibility issue between Sitecore SXA’s front-end tooling and newer Node versions.

Takeaway: Copilot provided a starting point, but the guidance lacked the technical depth and contextual relevance required to move the solution forward. It felt more like a general suggestion than a targeted response to the specific challenge at hand.

My Experience with ChatGPT

I then tried the same prompt in ChatGPT.

ChatGPT’s Response

  • Immediately identified that Sitecore SXA 11.3.0 running on Sitecore 10.2.0 has known compatibility issues with Node.js 20+.
  • It suggested that I should switch to Node.js v18.20.7 because it’s stable and works well with Sitecore.
  • Recommended checking SXA version compatibility matrix to confirm the supported Node versions.
  • Also guided me on how to use Node Version Manager (NVM) to switch between multiple Node versions without affecting other projects.

This response was not only accurate but also actionable. By following the steps, I was able to resolve the issue and get the build running smoothly again.

Takeaway: ChatGPT felt like talking to a teammate who understands how Sitecore and Node.js really work. In contrast, Copilot seemed more like a suggestion tool: it offered helpful prompts but didn’t fully comprehend the broader context or the specific challenge I was addressing.

Key Differences I Observed

A side-by-side look at what I evaluated:

  • Understanding the problem: Copilot gave basic answers and missed deeper context, while ChatGPT understood the issue well and gave thoughtful replies.
  • Sitecore knowledge: Copilot showed limited understanding, especially with SXA, while ChatGPT was familiar with SXA and Sitecore and provided valuable insights.
  • Node.js compatibility: Copilot missed the Node.js 20+ issue, while ChatGPT spotted the problem and suggested the right fix.
  • Suggested solutions: Copilot repeated generic advice, while ChatGPT gave clear, specific steps that actually helped.
  • Ease of use: Copilot is good for quick code snippets, while ChatGPT is great for solving tricky problems step by step.

Takeaways for Developers

  1. Copilot is great for boilerplate code and inline suggestions – if you want quick syntax help, it works well.
  2. ChatGPT shines in debugging and architectural guidance – especially when working with enterprise systems like Sitecore or giving code suggestions.
  3. When you’re stuck on environment or compatibility issues, ChatGPT can save hours by pointing you in the right direction.
  4. Best workflow: Use Copilot for code-writing speed, and ChatGPT for solving bigger technical challenges.

Final Thoughts

Both Microsoft Copilot and ChatGPT are powerful AI tools, but they serve different purposes.

  • Copilot functions like a code suggestion tool integrated within your IDE.
  • ChatGPT feels like a senior consultant who understands the ecosystem and gives you actionable advice.

When working on complex platforms like Sitecore 10.2.0 with SXA 11.3.0, and specific Node.js compatibility issues, ChatGPT clearly comes out ahead.

Drupal 11’s AI Features: What They Actually Mean for Your Team
https://blogs.perficient.com/2025/09/04/drupal-11s-ai-features-what-they-actually-mean-for-your-team/ (September 4, 2025)

Drupal 11’s AI Features: What They Actually Mean for Your Team

If you’ve been following the Drupal community lately, you’ve probably heard about the excitement with AI in Drupal 11 and the new Drupal AI Initiative. With over $100,000 in funding and 290+ AI modules already available, this will be a game changer.

But here’s the thing, AI in Drupal isn’t about replacing your team. It’s about making everyone more effective at what they already do best. Let’s talk through some of these new capabilities and what they mean for different teams in your organization.

Content Teams: Finally, An Assistant That Actually Helps

Creating quality content quickly has always been a challenge, but Drupal 11’s AI features tackle this head-on. The AI CKEditor integration gives content creators real-time assistance right in the editing interface, things like spelling corrections, translations, and contextual suggestions as you type.

The AI Content module is where things get interesting. It can automatically adjust your content’s tone for different audiences, summarize long content, and even suggest relevant taxonomy terms. For marketing teams juggling multiple campaigns, this means maintaining brand consistency without the usual back-and-forth reviews.

One feature that’s already saving teams hours is the AI Image Alt Text module. Instead of manually writing alt text for accessibility compliance, it generates descriptions automatically. The AI Translate feature is another game-changer for organizations with global reach—one-click multilingual content creation that actually understands context.

The bottom line? Your content team can focus on strategy and creativity instead of getting bogged down in routine tasks.

Developers: Natural Language Site Building

Here’s where Drupal 11 gets really exciting for a dev team. The AI Agents module introduces something we haven’t seen before: text-to-action capabilities. Developers can now modify Drupal configurations, create content types, and manage taxonomies just by describing what they need in plain English.

Instead of clicking through admin interfaces, you can literally tell Drupal what you want: “Create a content type for product reviews with fields for rating, pros, cons, and reviewer information.” The system understands and executes these commands.

The AI module ecosystem supports over 21 major providers, including OpenAI, Claude, AWS Bedrock, Google Vertex, and more. This means you’re not locked into any single AI provider and can choose the best model for specific tasks. The AI Explorer gives you a testing ground to experiment with prompts before pushing anything live.

For complex workflows, AI Automators let you chain multiple AI systems together. Think automated content transformation, field population, and business logic handling with minimal custom code.

The other great aspect of Drupal AI is that Drupal’s open source backbone allows you to extend, add to, and build upon these agents in any way your dev team sees fit.

Marketing Teams: Data-Driven Campaign Planning

Marketing teams might be the biggest winners here. The AI Content Strategy module analyzes your existing content and provides recommendations for what to create next based on actual data, not guesswork. It identifies gaps in your content strategy and suggests targeted content based on audience behavior and industry trends.

The AI Search functionality means visitors can find content quickly: no more keyword guessing games. The integrated chatbot framework provides intelligent customer service that can access your site’s content to give accurate responses.

For SEO, the AI SEO module generates reports with actionable recommendations by reviewing content and metadata automatically. This reduces the need for separate SEO tools while giving insights right where you can act on them.

Why This Matters Right Now

The Drupal AI Initiative represents something more than just new features. With dedicated teams from leading agencies and serious funding behind it, this is Drupal positioning itself as the go-to platform for AI-powered content management.

For IT executives evaluating CMS options, Drupal 11’s approach is a great fit. You maintain complete control over your data and AI interactions while getting enterprise-grade governance with approval workflows and audit trails. It’s AI augmentation rather than AI replacement.

The practical benefits are clear: faster campaign launches, consistent brand voice across all content, and teams freed from manual tasks to focus on strategic work. In today’s competitive landscape, that kind of operational efficiency can make the difference between leading your market and playing catch-up.

The Reality Check

We all know no technology is perfect. The success of these AI features, especially within the open-source community, depends heavily on implementation and team adoption. You’ll need to invest time in training and process development to see real benefits. Like any new technology, there will be a learning curve as your team figures out the best ways to leverage these features.

Based on what we are seeing from groups that adopted these AI features early, they are reporting solid ROI in the form of improved team efficiency, faster marketing turnaround, and reduced SEO churn.

If you’re considering how Drupal 11’s AI features might fit your organization, it’s worth having a conversation with an experienced implementation partner like Perficient. We can help you navigate the options and develop an AI strategy that makes sense for your specific situation.

]]>
https://blogs.perficient.com/2025/09/04/drupal-11s-ai-features-what-they-actually-mean-for-your-team/feed/ 2 386893
Invoke the Mapbox Geocoding API to Populate the Location Autocomplete Functionality https://blogs.perficient.com/2025/08/21/invoke-the-mapbox-geocoding-api-to-populate-the-location-autocomplete-functionality/ https://blogs.perficient.com/2025/08/21/invoke-the-mapbox-geocoding-api-to-populate-the-location-autocomplete-functionality/#respond Thu, 21 Aug 2025 08:01:53 +0000 https://blogs.perficient.com/?p=381495

While working on one of my projects, I needed to implement an autocomplete box using Mapbox Geocoding APIs in a React/Next.js application. The goal was to filter a list of hospitals based on the selected location. The location results from the API include coordinates, which I compared with the coordinates of the hospitals in my list.

The API returns various properties, including coordinates, under the properties section (as shown in the image below). These coordinates (latitude and longitude) can be used to filter the hospital list by matching them with the selected location.

Mapbox result properties

The API requires an access token, which can be obtained by signing up on the Mapbox platform. You can refer to the Geocoding API documentation for more details. The documentation provides a variety of APIs that can be used depending on your specific requirements.

Below are some example requests taken from that documentation.

# A basic forward geocoding request
# Find Los Angeles

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Los%20Angeles&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Find a town called 'Chester' in a specific region
# Add the proximity parameter with local coordinates
# This ensures the town of Chester, New Jersey is in the results

curl "https://api.mapbox.com/search/geocode/v6/forward?q=chester&proximity=-74.70850,40.78375&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Specify types=country to search only for countries named Georgia
# Results will exclude the American state of Georgia

curl "https://api.mapbox.com/search/geocode/v6/forward?q=georgia&types=country&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Limit the results to two results using the limit option
# Even though there are many possible matches
# for "Washington", this query will only return two results.

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Washington&limit=2&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Search for the Place feature "Kaaleng" in the Ilemi Triangle.
# Specifying the cn worldview will return the country value South Sudan.
# Leaving out the worldview parameter would default to the us worldview and return the country value Kenya.

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Kaaleng&worldview=cn&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

The implementation leverages React hooks along with state management for handling component behavior and data flow.

How to Create an Autocomplete Component in React

  1. Create a React component.
  2. Sign up and apply the access token and API URL to the constants.
  3. Create a type to bind the structure of the API response results.
  4. Use the useEffect hook to invoke the API.
  5. Map the fetched results to the defined type.
  6. Apply CSS to style the component and make the autocomplete feature visually appealing.
#constants.ts

export const APIConstants = {
  accessToken: 'YOUR_MAPBOX_ACCESS_TOKEN',
  geoCodeSearchForwardApiUrl: 'https://api.mapbox.com/search/geocode/v6/forward',
  searchWordCount: 3,
};
#LocationResultProps.ts

type Suggestions = {
  properties: {
    feature_type: string;
    full_address: string;
    name: string;
    name_preferred: string;
    coordinates: {
      longitude: number;
      latitude: number;
    };
  };
};
export type LocationResults = {
  features: Array<Suggestions>;
};
#Styles.ts

export const autoComplete = {
  container: {
    width: '250px',
    margin: '20px auto',
  },
  input: {
    width: '100%',
    padding: '10px',
    fontSize: '16px',
    border: '1px solid #ccc',
    borderRadius: '4px',
  },
  dropdown: {
    top: '42px',
    left: '0',
    right: '0',
    backgroundColor: '#fff',
    border: '1px solid #ccc',
    borderTop: 'none',
    maxHeight: '150px',
    listStyleType: 'none',
    padding: '0',
    margin: '0',
    zIndex: 1000,
  },
  item: {
    padding: '5px',
    cursor: 'pointer',
    borderBottom: '1px solid #eee',
  },
};

#LocationSearchInput.tsx

import React, { useEffect, useState } from 'react';
import { APIConstants } from 'lib/constants';
import { autoComplete } from '../Styles';
import { LocationResults } from 'lib/LocationResultProps';

export const Default = (): JSX.Element => {
  const apiUrlParam: string[][] = [
    //['country', 'us%2Cpr'],
    ['types', 'region%2Cpostcode%2Clocality%2Cplace%2Cdistrict%2Ccountry'],
    ['language', 'en'],
    //['worldview', 'us'],
  ];

  const [inputValue, setInputValue] = useState<string>('');
  const [results, setResults] = useState<LocationResults>();
  const [submitted, setSubmitted] = useState<boolean>(false);

  // When the input changes, reset the "submitted" flag.
  const handleChange = (value: string) => {
    setSubmitted(false);
    setInputValue(value);
  };
  const handleSubmit = (value: string) => {
    setSubmitted(true);
    setInputValue(value);
  };

  // Fetch results when the input value changes
  useEffect(() => {
    if (inputValue.length < APIConstants?.searchWordCount) {
      setResults(undefined);
      return;
    }
    if (submitted) {
      return;
    }
    const queryInputParam = [
      ['q', inputValue],
      ['access_token', APIConstants?.accessToken ?? ''],
    ];

    const fetchData = async () => {
      const queryString = apiUrlParam
        .concat(queryInputParam)
        .map((inner) => inner.join('='))
        .join('&');
      const url = APIConstants?.geoCodeSearchForwardApiUrl + '?' + queryString;

      try {
        const response: LocationResults = await (await fetch(url)).json();
        setResults(response);
        console.log(response);
      } catch (err: unknown) {
        console.error('Error obtaining location results for autocomplete', err);
      }
    };

    fetchData();
  }, [inputValue]);

  return (
    <div>
      <div style={autoComplete.container}>
        <input
          style={autoComplete.input}
          onChange={(e) => handleChange(e.target?.value)}
          value={inputValue}
          placeholder="Find Location"
        />

        {inputValue && !submitted && results?.features && results.features.length > 0 && (
          <ul style={autoComplete.dropdown}>
            {results.features.map((x, index) => (
              <li key={index} style={autoComplete.item}>
                <span onClick={() => handleSubmit(x?.properties?.full_address)}>
                  {x?.properties?.full_address}
                </span>
              </li>
            ))}
          </ul>
        )}
      </div>
    </div>
  );
};

Finally, we can search for a location using a zip code, state, or country.


Additionally, the reverse geocoding API can be used in much the same way, requiring only minor adjustments to the parameters and the API URL. The location autocomplete box offers a wide range of use cases. It can be integrated into user forms such as registration or contact forms, where exact location coordinates or a full address need to be captured upon selection. Each location result includes various properties. Based on the user’s input, whether it’s a city, ZIP code, or state, the autocomplete displays matching results.
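
For reference, here is a minimal sketch of a reverse geocoding call. It assumes the v6 reverse endpoint and reuses the constants and types from the snippets above; verify the exact endpoint and parameter names against the Mapbox Geocoding documentation before relying on it.

import { APIConstants } from 'lib/constants';
import { LocationResults } from 'lib/LocationResultProps';

// Assumed v6 reverse endpoint; confirm against the Mapbox Geocoding docs
const geoCodeSearchReverseApiUrl = 'https://api.mapbox.com/search/geocode/v6/reverse';

// Look up place details for a pair of coordinates
export const fetchReverseGeocode = async (
  longitude: number,
  latitude: number
): Promise<LocationResults> => {
  const queryString = new URLSearchParams({
    longitude: longitude.toString(),
    latitude: latitude.toString(),
    access_token: APIConstants.accessToken,
  }).toString();

  const response = await fetch(`${geoCodeSearchReverseApiUrl}?${queryString}`);
  if (!response.ok) {
    throw new Error(`Reverse geocoding failed with status ${response.status}`);
  }
  return (await response.json()) as LocationResults;
};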

 

]]>
https://blogs.perficient.com/2025/08/21/invoke-the-mapbox-geocoding-api-to-populate-the-location-autocomplete-functionality/feed/ 0 381495
AI: Security Threat to Personal Data? https://blogs.perficient.com/2025/08/18/ai-security-threat-to-personal-data/ https://blogs.perficient.com/2025/08/18/ai-security-threat-to-personal-data/#respond Mon, 18 Aug 2025 07:33:26 +0000 https://blogs.perficient.com/?p=385942

In recent years, AI chatbots like ChatGPT have gone from fun tools for answering questions to serious helpers in workplaces, education, and even personal decision-making. With ChatGPT-5 now being the latest and most advanced version, it’s no surprise that people are asking a critical question:

“Is my personal data safe when I use ChatGPT-5?”

First, What Is ChatGPT-5?

ChatGPT-5 is an AI language model created by OpenAI. You can think of it like a super-smart digital assistant that can:

  • Answer questions across a wide range of topics
  • Draft emails, essays, and creative content
  • Write and debug code
  • Assist with research and brainstorming
  • Support productivity and learning

It learns from patterns in data, but here’s an important point – it doesn’t “remember” your conversations unless the developer has built a special memory feature and you’ve agreed to it.

How Your Data Is Used

When you chat with ChatGPT-5, your messages are processed to generate a response. Depending on the app or platform you use, your conversations may be:

  • Temporarily stored to improve the AI’s performance
  • Reviewed by humans (in rare cases) to train and fine-tune the system
  • Deleted or anonymized after a specific period, depending on the service’s privacy policy

This is why reading the privacy policy is not just boring legal stuff – it’s how you find out precisely what happens to your data.

Real Security Risks to Be Aware Of

The concerns about ChatGPT-5 (and similar AI tools) are less about it being “evil” and more about how your data could be exposed if not appropriately handled.

Here are the main risks:

1. Accidental Sharing of Sensitive Information

Many users unknowingly type personal details – such as their full name, home address, phone number, passwords, or banking information – into AI chat windows. While the chatbot itself may not misuse this data, it is still transmitted over the internet and may be temporarily stored by the platform. If the platform suffers a data breach or if the information is accessed by unauthorized personnel, your sensitive data could be exposed or exploited.

Best Practice: Treat AI chats like public forums – never share confidential or personally identifiable information.

2. Data Retention by Third-Party Platforms

AI chatbots are often integrated into third-party platforms, such as browser extensions, productivity tools, or mobile apps. These integrations may collect and store your chat data on their own servers, sometimes without clearly informing you. Unlike official platforms with strict privacy policies, third-party services may lack robust security measures or transparency.

Risk Example: A browser extension that logs your AI chats could be hacked, exposing all stored conversations.

Best Practice: Use only trusted, official apps and review their privacy policies before granting access.

3. Misuse of Login Credentials

In rare but serious cases, malicious AI integrations or compromised platforms could capture login credentials you enter during a conversation. If you share usernames, passwords, or OTPs (one-time passwords), these could be used to access your accounts and perform unauthorized actions – such as placing orders, transferring money, or changing account settings.

Real-World Consequence: You might wake up to find that someone used your credentials to order expensive items or access private services.

Best Practice: Never enter login details into any AI chat, and always use two-factor authentication (2FA) for added protection.

4. Phishing & Targeted Attacks

If chat logs containing personal information are accessed by cybercriminals, they can use that data to craft highly convincing phishing emails or social engineering attacks. For example, knowing your name, location, or recent purchases allows attackers to impersonate trusted services and trick you into clicking malicious links or revealing more sensitive data.

Best Practice: Be cautious of unsolicited messages and verify the sender before responding or clicking links.

5. Overtrusting AI Responses

AI chatbots are trained on vast datasets, but they can still generate inaccurate, outdated, or misleading information. Relying on AI responses without verifying facts can lead to poor decisions, especially in areas like health, finance, or legal advice.

Risk Example: Acting on incorrect medical advice or sharing false information publicly could have serious consequences.

Best Practice: Always cross-check AI-generated content with reputable sources before taking action or sharing it.

How to Protect Yourself

Here are simple steps you can take:

  • Never share sensitive login credentials or card details inside a chat.
  • Stick to official apps and platforms to reduce the risk of malicious AI clones.
  • Use 2-factor authentication (2FA) for all accounts, so even stolen passwords can’t be used easily.
  • Check permissions before connecting ChatGPT-5 to any service – don’t allow unnecessary access.
  • Regularly clear chat history if your platform stores conversations.

Final Thoughts

ChatGPT-5 is a tool, and like any tool, it can be used for good or misused. The AI itself isn’t plotting to steal your logins or credentials, but if you use it carelessly or through untrusted apps, your data could be at risk.

Golden rule: Enjoy the benefits of AI, but treat it like a stranger online – don’t overshare, and keep control of your personal data.

]]>
https://blogs.perficient.com/2025/08/18/ai-security-threat-to-personal-data/feed/ 0 385942
Smart Failure Handling in HCL Commerce with Circuit Breakers https://blogs.perficient.com/2025/08/15/smart-failure-handling-in-hcl-commerce-with-circuit-breakers/ https://blogs.perficient.com/2025/08/15/smart-failure-handling-in-hcl-commerce-with-circuit-breakers/#respond Fri, 15 Aug 2025 05:48:28 +0000 https://blogs.perficient.com/?p=386135

In modern enterprise systems, stability and fault tolerance are not optional; they are essential. One proven approach to ensure robustness is the Circuit Breaker pattern, widely used in API development to prevent cascading failures. HCL Commerce takes this principle further by embedding circuit breakers into its HCL Cache to effectively manage Redis failures.

 What Is a Circuit Breaker?
The Circuit Breaker is a design pattern commonly used in API development to stop continuous requests to a service that is currently failing, thereby protecting the system from further issues. It helps maintain system stability by detecting failures and stopping the flow of requests until the issue is resolved.

The circuit breaker typically operates in three main (or “normal”) states, which are part of the standard Circuit Breaker pattern.

Normal States:

  1. CLOSED:
  • At the start, the circuit breaker allows all outbound requests to external services without restrictions.
  • It monitors the success and failure of these calls.
  2. OPEN:
  • The circuit breaker rejects all external calls.
  • This state is triggered when the failure threshold is reached (e.g., 50% failure rate).
  • It remains in this state for a specified duration (e.g., 60 seconds).
  3. HALF_OPEN:
  • After the wait duration in the OPEN state, the circuit breaker transitions to HALF_OPEN.
  • It allows a limited number of calls to check if the external service has recovered.
  • If these calls succeed (e.g., receive a 200 status), the circuit breaker transitions back to CLOSED.
  • If the error rate continues to be high, the circuit breaker reverts to the OPEN state.
Circuit breaker pattern with normal states

Special States:

  1. FORCED_OPEN:
  • The circuit breaker is manually set to reject all external calls.
  • No calls are allowed, regardless of the external service’s status.
  2. DISABLED:
  • The circuit breaker is manually set to allow all external calls.
  • It does not monitor or track the success or failure of these calls.
Circuit breaker pattern with special states
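
To make these state transitions concrete, here is a minimal, generic sketch of the pattern in TypeScript. It is illustrative only, not HCL Cache’s implementation, and the thresholds are arbitrary placeholders.

type State = 'CLOSED' | 'OPEN' | 'HALF_OPEN';

class CircuitBreaker {
  private state: State = 'CLOSED';
  private consecutiveFailures = 0;
  private openedAt = 0;

  constructor(
    private readonly failureThreshold = 20,    // consecutive failures before opening
    private readonly retryWaitTimeMs = 60_000  // how long to stay OPEN before a trial call
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === 'OPEN') {
      if (Date.now() - this.openedAt < this.retryWaitTimeMs) {
        throw new Error('Circuit is OPEN: call rejected');
      }
      this.state = 'HALF_OPEN'; // wait elapsed, allow a trial call
    }
    try {
      const result = await fn();
      this.consecutiveFailures = 0;
      this.state = 'CLOSED';    // trial (or normal) call succeeded
      return result;
    } catch (err) {
      this.consecutiveFailures += 1;
      if (this.state === 'HALF_OPEN' || this.consecutiveFailures >= this.failureThreshold) {
        this.state = 'OPEN';    // enter (or re-enter) outage mode
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}

A caller would wrap each remote request in something like breaker.call(() => fetchFromRemote(key)) and treat the “Circuit is OPEN” error as a signal to fall back to local data.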

Circuit Breaker in HCL Cache (for Redis)

In HCL Commerce, the HCL Cache layer interacts with Redis for remote caching. But what if Redis becomes unavailable or slow? HCL Cache uses circuit breakers to detect issues and temporarily stop calls to Redis, thus protecting the rest of the system from being affected.

Behavior Overview:

  • If 20 consecutive failures occur in 10 seconds, the Redis connection is cut off.
  • The circuit remains open for 60 seconds.
  • After that wait, the circuit enters a HALF_OPEN state, where it sends limited test requests to evaluate whether the external service has recovered.
  • If even 2 of these test calls fail, the circuit reopens for another 60 seconds.

Configuration Snapshot

To manage Redis outages effectively, HCL Commerce provides fine-grained configuration settings for both Redis client behavior and circuit breaker logic. These settings are defined in the Cache YAML file, allowing teams to tailor fault-handling based on their system’s performance and resilience needs.

 Redis Request Timeout Configuration

Slow Redis responses are not treated as failures unless they exceed the defined timeout threshold. The Redis client in HCL Cache supports timeout and retry configurations to control how persistent the system should be before declaring a failure:

timeout: 3000           # Max time (in ms) to wait for a Redis response
retryAttempts: 3        # Number of retry attempts on failure
retryInterval: 1500     # Delay (in ms) between retry attempts

With the above configuration, the system will spend up to 16.5 seconds (3000 + 3 × (3000 + 1500)) trying to get a response before returning a failure. While these settings offer robustness, overly long retries can result in delayed user responses or log flooding, so tuning is essential.
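
As a quick sanity check on that arithmetic, here is the calculation spelled out using the sample values above (not values from any specific deployment):

// Worst-case wait before a Redis call is declared a failure:
// initial attempt + retryAttempts × (timeout + retryInterval)
const timeout = 3000;       // ms per attempt
const retryAttempts = 3;
const retryInterval = 1500; // ms between attempts

const worstCaseMs = timeout + retryAttempts * (timeout + retryInterval);
console.log(worstCaseMs);   // 16500 ms, i.e., 16.5 seconds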

Circuit Breaker Configuration

Circuit breakers are configured under the redis.circuitBreaker section of the Cache YAML file. Here’s an example configuration:

redis:
  circuitBreaker:
    scope: auto
    retryWaitTimeMs: 60000
    minimumFailureTimeMs: 10000
    minimumConsecutiveFailures: 20
    minimumConsecutiveFailuresResumeOutage: 2 
cacheConfigs:
  defaultCacheConfig:
    localCache:
      enabled: true
      maxTimeToLiveWithRemoteOutage: 300

Explanation of Key Fields:

  • scope: auto: Automatically determines whether the circuit breaker operates at the client or cache/shard level, depending on the topology.
  • retryWaitTimeMs (Default: 60000): Time to wait before attempting Redis connections after the circuit breaker is triggered.
  • minimumFailureTimeMs (Default: 10000): Minimum duration during which consecutive failures must occur before opening the circuit.
  • minimumConsecutiveFailures (Default: 20): Number of continuous failures required to trigger outage mode.
  • minimumConsecutiveFailuresResumeOutage (Default: 2): Number of failures after retrying that will put the system back into outage mode.
  • maxTimeToLiveWithRemoteOutage: During Redis outages, local cache entries use this TTL value (in seconds) to serve data without invalidation messages.

Real-world Analogy

Imagine you have a web service that fetches data from an external API. Here’s how the circuit breaker would work:

  1. CLOSED: The service makes calls to the API and monitors the responses.
  2. OPEN: If the API fails too often (e.g., 50% of the time), the circuit breaker stops making calls for 60 seconds.
  3. HALF_OPEN: After 60 seconds, the circuit breaker allows a few calls to the API to see if it’s working again.
  4. CLOSED: If the API responds successfully, the circuit breaker resumes normal operation.
  5. OPEN: If the API still fails, the circuit breaker stops making calls again and waits.

Final Thought

By combining the classic circuit breaker pattern with HCL Cache’s advanced configuration, HCL Commerce ensures graceful degradation during Redis outages. It’s not just about availability—it’s about intelligent fault recovery.

For more detailed information, you can refer to the official documentation here:
🔗 HCL Commerce Circuit Breakers – Official Docs

]]>
https://blogs.perficient.com/2025/08/15/smart-failure-handling-in-hcl-commerce-with-circuit-breakers/feed/ 0 386135
Mastering GitHub Copilot in VS Code https://blogs.perficient.com/2025/08/12/mastering-github-copilot-in-vs-code/ https://blogs.perficient.com/2025/08/12/mastering-github-copilot-in-vs-code/#respond Tue, 12 Aug 2025 07:55:43 +0000 https://blogs.perficient.com/?p=385832

Ready to go from “meh” to “whoa” with your AI coding assistant? Here’s how to get started.

You’ve installed GitHub Copilot. Now what?

Here’s how to actually get it to work for you – not just with you.

In the blog Using GitHub Copilot in VS Code, we have already seen how to use GitHub Copilot in VS Code.

1. Write for Copilot, Not Just Yourself

Copilot is like a teammate who’s really fast at coding but only understands what you clearly explain.

Start with Intention:

Use descriptive comments or function names to guide Copilot.

// Fetch user data from API and cache it locally
function fetchUserData() {

Copilot will often generate useful logic based on that. It works best when you think one step ahead.
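
For instance, a completion in the spirit of that comment might look something like the sketch below. This is a representative example, not verbatim Copilot output, and the API URL is a placeholder.

// Simple in-memory cache so repeated calls for the same user skip the network
const userCache = new Map<string, unknown>();

// Fetch user data from API and cache it locally
async function fetchUserData(userId: string): Promise<unknown> {
  if (userCache.has(userId)) {
    return userCache.get(userId);
  }
  const response = await fetch(`https://api.example.com/users/${userId}`); // placeholder URL
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data = await response.json();
  userCache.set(userId, data);
  return data;
}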

2. Break Problems Into Small Pieces

Copilot shines when your code is modular.

Instead of writing:

function processEverything() {
  // 50 lines of logic
}

Break it down:

// Validate form input
function validateInput(data) {

}

// Submit form to backend
function submitForm(data) {

}

This way, you get smarter, more accurate completions.

3. Use Keyboard Shortcuts to Stay in Flow

Speed = flow. These shortcuts help you ride Copilot without breaking rhythm:

Action                 Shortcut (Windows)   Shortcut (Mac)
Accept Suggestion      Tab                  Tab
Next Suggestion        Alt + ]              Option + ]
Previous Suggestion    Alt + [              Option + [
Dismiss Suggestion     Esc                  Esc
Open Copilot Panel     Ctrl + Enter         Cmd + Enter

Power Tip: Hold Tab to preview full suggestion before accepting it.

4. Experiment With Different Prompts

Don’t settle for the first suggestion. Try giving Copilot:

  • Function names like: generateInvoicePDF()
  • Comments like: // Merge two sorted arrays
  • Descriptions like: // Validate email format

Copilot might generate multiple versions. Pick or tweak the one that fits best.
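
As an illustration, prompting with the comment // Merge two sorted arrays will often produce something along these lines (a representative sketch; actual suggestions vary):

// Merge two sorted arrays into a single sorted array
function mergeSortedArrays(a: number[], b: number[]): number[] {
  const merged: number[] = [];
  let i = 0;
  let j = 0;
  // Walk both arrays, always taking the smaller head element
  while (i < a.length && j < b.length) {
    if (a[i] <= b[j]) {
      merged.push(a[i++]);
    } else {
      merged.push(b[j++]);
    }
  }
  // Append whatever remains in either array
  return merged.concat(a.slice(i), b.slice(j));
}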

5. Review & Refactor – Always

Copilot is smart, but not perfect.

  • Always read the output. Don’t blindly accept.
  • Add your own edge case handling and error checks.
  • Use tools like ESLint or TypeScript for safety.

Think of Copilot as your fast-thinking intern. You still need to double-check their work.

6. Use It Across File Types

Copilot isn’t just for JS or Python. Try it in:

  • HTML/CSS → Suggest complete sections
  • SQL → Generate queries from comments
  • Markdown → Draft docs and README files
  • Dockerfiles, .env, YAML, Regex patterns

Write a comment like # Dockerfile for Node.js app – and watch the magic.

7. Pair It With Unit Tests

Use Copilot to write your test cases too:

// Test case for addTwoNumbers function
describe('addTwoNumbers', () => {

It will generate a full Jest test block. Use this to write tests faster – especially for legacy code.
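
A generated block typically fleshes out to something like this (a sketch assuming a simple addTwoNumbers(a, b) helper; your actual suggestions will differ):

// Test case for addTwoNumbers function (addTwoNumbers is a hypothetical helper)
describe('addTwoNumbers', () => {
  it('adds two positive numbers', () => {
    expect(addTwoNumbers(2, 3)).toBe(5);
  });

  it('handles negative numbers', () => {
    expect(addTwoNumbers(-2, 3)).toBe(1);
  });

  it('returns 0 when both inputs are 0', () => {
    expect(addTwoNumbers(0, 0)).toBe(0);
  });
});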

8. Learn From Copilot (Not Just Use It)

Treat Copilot suggestions as learning opportunities:

  • Ask: “Why did it suggest that?”
  • Compare with your original approach
  • Check docs or MDN if you see unfamiliar code

It’s like having a senior dev whispering best practices in your ear.

9. Use Copilot Chat (If Available)

If you have access to GitHub Copilot Chat, try it. Ask questions like:

  • What does this error mean?
  • Explain this function
  • Suggest improvements for this code

It works like a Stack Overflow built into your IDE.

Quick Recap

Tip                        Benefit
Write clear comments       Better suggestions
Break logic into chunks    Modular, reusable code
Use shortcuts              Stay in flow
Cycle suggestions          Explore better options
Review output              Avoid bugs
Test case generation       Faster TDD
Learn as you go            Level up coding skills

Final Thoughts: Practice With Purpose

To truly master Copilot:

  • Build small projects and let Copilot help
  • Refactor old code using Copilot suggestions
  • Try documenting your code with its help

You’ll slowly build trust – and skill.

]]>
https://blogs.perficient.com/2025/08/12/mastering-github-copilot-in-vs-code/feed/ 0 385832