Bruno: The Developer-Friendly Alternative to Postman

#1. Introduction

If you’re knee-deep in building apps, you already know APIs are the backbone of everything. Testing them? That’s where the real magic happens. For years, we’ve relied on tools like Postman and Insomnia to send requests, debug issues, and keep things running smoothly. But lately, there’s a buzz about something new: Bruno. It’s popping up everywhere, and developers are starting to make the switch. Why? Let’s dive in.

What Exactly is Bruno?

Picture this: an API client that’s open-source, super speedy, and puts your privacy first. Bruno isn’t some bloated app that shoves your stuff into the cloud. Instead, it keeps everything right on your local machine. Your API collections, requests, all of it? Safe and sound where you control it, no cloud drama required.

Bruno is built for developers who want:

  • Simplicity without compromise
  • High performance without unnecessary extras
  • Complete freedom with open-source flexibility

It’s like the minimalist toolbox you’ve been waiting for.

Why’s Bruno Suddenly Everywhere?

Bruno solves the pain points that frustrate us with other API tools:

  • Privacy First: No forced cloud uploads; your collections stay local. No hidden syncing; your data stays completely under your control.
  • Fast and Lightweight: Loads quickly and handles requests without lag. Perfect for quick tests on the go.
  • Open-Source Freedom: No fees, no lock-in. Collections are Git-friendly and saved as plain text for easy version control.
  • No Extra Bloat: Focused on what matters, API testing, without unnecessary features.

Bottom line: Bruno fits the way we work today: collaborative, secure, and efficient. It’s not trying to do everything; it’s just good at API testing.

#2. Key Features

Bruno keeps it real with features that matter. Here are the highlights:

  1. Totally Open-Source
  • No sneaky costs or paywalls.
  • Peek under the hood anytime—the code’s all there.
  • A bunch of devs are pitching in on GitHub, making it better every day. Wanna join? Hit up their repo and contribute.
  2. Privacy from the Ground Up
  • Everything lives locally.
  • No accounts, no cloud pushes—your requests don’t leave your laptop.
  • Ideal if you’re handling sensitive APIs and don’t want Big Tool Company snooping.
  • Bonus: Those plain text files play nicely with Git, so team handoffs are seamless.
  3. Light as a Feather, Fast as Lightning
  • Clean UI, no extra bells and whistles slowing you down.
  • Starts up in a flash and zips through responses.
  • Great for solo endpoint tweaks or juggling big workflows without your machine groaning.

Getting Bruno Up and Running

Installing Bruno is simple. It works on Windows, macOS, and Linux; just choose your platform and you’re good to go.

#3. Quick Install Guide

Windows:

  1. Head to Bruno’s GitHub Releases page.
  2. Grab the latest .exe file.
  3. Run it and follow the prompts.
  4. Boom—find it in your Start Menu.

macOS:

  1. Download the .dmg from Releases.
  2. Drag it to Applications.
  3. Fire it up and get testing.

Linux:

  1. Snag the .AppImage or .deb from Releases.
  2. For AppImage: chmod +x Bruno.AppImage then ./Bruno.AppImage.
  3. For .deb: sudo dpkg -i bruno.deb and sudo apt-get install -f.

GUI or CLI? Your Call

  • GUI: Feels like Postman but cleaner. Visual, easy for building requests on the fly.
  • CLI: For the terminal lovers. Automate tests, hook into CI/CD, or run collections like

          bruno run collection.bru --env dev

#4. Build Your First Collection in Minutes

Bruno makes organizing APIs feel effortless. Here’s a no-sweat walkthrough.

Step 1: Fire It Up

Launch Bruno. You’ll see a simple welcome screen begging for a new collection.

Step 2: New Collection Time

  1. Hit “New Collection.”
  2. Name it (say, “My API Playground”).
  3. Pick a folder—it’s all plain text, so Git loves it.

Step 3: Add a Request

  1. Inside the collection, click “New Request.”
  2. Pick your method (GET, POST, etc.).
  3. Drop in the URL, like https://jsonplaceholder.typicode.com/posts.

Step 4: Headers and Body Magic

  • Toss in headers: Content-Type: application/json.
  • For POSTs, add a body like:

JSON

{
  "title": "Bruno Blog",
  "body": "Testing Bruno API Client",
  "userId": 1
}

Step 5: Hit Send

Click it and watch the response pop: status, timing, pretty JSON—all right there.

Step 6: Save and Sort

Save the request, make folders for envs or APIs, and use vars for switching setups.

#5. Bruno vs. Postman: Head-to-Head

Postman’s the OG, but Bruno’s the scrappy challenger winning hearts. Let’s compare.

  1. Speed
  • Bruno: Lean and mean—quick loads, low resource hog.
  • Postman: Packed with features, but it can feel sluggish on big projects. Edge: Bruno
  2. Privacy
  • Bruno: Local only, no cloud creep.
  • Postman: Syncs to their servers—handy for teams, sketchy for secrets. Edge: Bruno
  3. Price Tag
  • Bruno: Free forever, open-source vibes.
  • Postman: Free basics, but teams and extras? Pay up. Edge: Bruno

 

Feature      | Bruno            | Postman
Open Source  | ✅ Yes           | ❌ No
Cloud Sync   | ❌ No            | ✅ Yes
Performance  | ✅ Lightweight   | ❌ Heavy
Privacy      | ✅ Local Storage | ❌ Cloud-Based
Cost         | ✅ Free          | ❌ Paid Plans

#6. Level Up with Advanced Tricks

Environment Variables

Swap envs easy-peasy:

  • Make files for dev/staging/prod.
  • Use {{baseUrl}} in requests.
  • Example:

{
  "baseUrl": "https://api.dev.example.com",
  "token": "your-dev-token"
}

 

Scripting Smarts

Add pre/post scripts for:

  • Dynamic auth: request.headers["Authorization"] = "Bearer " + env.token;
  • Response checks or automations.

#7. Community & Contribution

Bruno is community-driven: development happens in the open on GitHub, where anyone can report issues, suggest features, or contribute code. If you want to shape where the tool goes next, the repo is the place to start.

#8. Conclusion

Bruno isn’t just another API testing tool; it’s designed for developers who want simplicity and control. With local-first privacy, fast performance, open-source flexibility, and built-in Git support, Bruno delivers everything you need without unnecessary complexity.
If you’re tired of heavy, cloud-based clients, it’s time to switch. Download Bruno from the GitHub Releases page and experience the difference.

 

Microservices: The Emerging Complexity Driven by Trends and Alternatives to Over‑Design

The adoption of microservice‑based architectures has grown exponentially over the past decade, often driven more by industry trends than by a careful evaluation of system requirements. This phenomenon has generated unnecessarily complex implementations—like using a bazooka to kill an ant. Distributed architectures without solid foundations in domain capabilities, workloads, operational independence, or real scalability needs have become a common pattern in the software industry. In many cases, organizations migrate without having a mature discipline in observability, traceability, automation, domain‑driven design, or an operational model capable of supporting highly distributed systems; as a consequence, they end up with distributed monoliths that require coordinated deployments and suffer cascading failures, losing the benefits originally promised by microservices (Iyer, 2025; Fröller, 2025).

Over‑Design

The primary issue in microservices is not rooted in their architectural essence, but in the over‑design that emerges when attempting to implement such architecture without having a clear roadmap of the application’s domains or of the contextual boundaries imposed by business rules. The decomposition produces highly granular, entity‑oriented services that often result in circular dependencies, duplicated business logic, excessive events without meaningful semantics, and distributed flows that are difficult to debug. Instead of achieving autonomy and independent scalability, organizations create a distributed monolith with operational complexity multiplied by the number of deployed services. A practical criterion to avoid this outcome is to postpone decomposition until stable boundaries and non‑functional requirements are fully understood, even adopting a monolith‑first approach before splitting (Fowler, 2015; Danielyan, 2025).

Minimal API and Modular Monolith as Alternatives to Reduce Complexity

In these scenarios, it is essential to explore alternatives that allow companies to design simpler microservices without sacrificing architectural clarity or separation of concerns. One such alternative is the use of Minimal APIs to reduce complexity in the presentation layer: this approach removes ceremony (controllers, conventions, annotations) and accelerates startup while reducing container footprint. It is especially useful for utility services, CRUD operations, and limited API surfaces (Anderson & Dykstra, 2024; Chauhan, 2024; Nag, 2025).
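
The article's examples above refer to ASP.NET Core Minimal APIs. Purely to illustrate the same "minimal ceremony" idea in another stack (this is not the stack the article discusses), a small Python service built with FastAPI keeps a utility or CRUD-style API surface in a single file:

from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health() -> dict:
    # No controllers, conventions, or extra annotations beyond the route decorator.
    return {"status": "ok"}

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> dict:
    # Hypothetical endpoint; a real service would call a repository here.
    return {"id": order_id, "status": "created"}

The point is the small, explicit API surface rather than the framework; the ASP.NET Core Minimal API equivalent keeps the same shape with app.MapGet.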

Another effective alternative is the Modular Monolith. A well‑modularized monolith enables isolating functional domains within internal modules that have clear boundaries and controlled interaction rules, simplifying deployment, reducing internal latency, and avoiding the explosion of operational complexity. Additionally, it facilitates a gradual migration toward microservices only when objective reasons exist (differentiated scaling needs, dedicated teams, different paces of domain evolution) (Bächler, 2025; Bauer, n.d.).

Improving the API Gateway and the Use of Event‑Driven Architectures (EDA)

The API Gateway is another critical component for managing external complexity: it centralizes security policies, versioning, rate limiting, and response transformation/aggregation, hiding internal topology and reducing client cognitive load. Patterns such as Backend‑for‑Frontend (BFF) and aggregation help decrease network trips and prevent each public service from duplicating cross‑cutting concerns (Microsoft, n.d.-b; AST Consulting, 2025).

A key principle for reducing complexity is to avoid decomposition by entities and instead guide service boundaries using business capabilities and bounded contexts. Domain‑Driven Design (DDD) provides a methodological compass to define coherent semantic boundaries; mapping bounded contexts to services (not necessarily in a 1:1 manner) reduces implicit coupling, prevents domain model ambiguity, and clarifies service responsibilities (Microsoft, n.d.-a; Polishchuk, 2025).

Finally, the use of Event‑Driven Architectures (EDA) should be applied judiciously. Although EDA enhances scalability and decoupling, poor implementation significantly increases debugging effort, introduces hidden dependencies, and complicates traceability. Mitigating these risks requires discipline in event design/versioning, the outbox pattern, idempotency, and robust telemetry (correlation IDs, DLQs), in addition to evaluating when orchestration (Sagas) is more appropriate than choreography (Three Dots Labs, n.d.; Moukbel, 2025).
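
To make those risks concrete, here is a minimal, broker-agnostic Python sketch of two of the disciplines mentioned above: idempotent handling and correlation IDs. The event shape and the in-memory store are assumptions for illustration; a real system would use a durable store and the outbox pattern on the producer side.

import logging
import uuid

logger = logging.getLogger("orders")
processed_event_ids = set()  # illustration only; production needs a durable store

def handle_order_created(event: dict) -> None:
    """Process an 'order created' event at most once and keep a correlation id in the logs."""
    event_id = event["id"]
    correlation_id = event.get("correlation_id") or str(uuid.uuid4())

    if event_id in processed_event_ids:
        logger.info("skipping duplicate event %s (correlation=%s)", event_id, correlation_id)
        return

    # ...apply the business change here (update read models, call downstream services)...
    processed_event_ids.add(event_id)
    logger.info("processed event %s (correlation=%s)", event_id, correlation_id)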

Conclusion

The complexity associated with microservices arises not from the architecture itself, but from misguided adoption driven by trends. The key to reducing this complexity is prioritizing cohesion, clarity, and gradual evolution: Minimal APIs for small services, a Modular Monolith as a solid foundation, decomposition by real business capabilities and bounded contexts, a well‑defined gateway, and a responsible approach to events. Under these principles, microservices stop being a trend and become an architectural mechanism that delivers real value (Fowler, 2015; Anderson & Dykstra, 2024).

References

  • Anderson, R., & Dykstra, T. (2024, July 29). Tutorial: Create a Minimal API with ASP.NET Core. Microsoft Learn. https://learn.microsoft.com/en-us/aspnet/core/tutorials/min-web-api?view=aspnetcore-10.0
  • AST Consulting. (2025, June 12). API Gateway in Microservices: Top 5 Patterns and Best Practices Guide. https://astconsulting.in/microservices/api-gateway-in-microservices-patterns
  • Bächler, S. (2025, January 23). Modular Monolith: The Better Alternative to Microservices. ti&m. https://www.ti8m.com/en/blog/monolith
  • Bauer, R. A. (n.d.). On Modular Monoliths. https://www.raphaelbauer.com/posts/on-modular-monoliths/
  • Chauhan, P. (2024, September 30). Deep Dive into Minimal APIs in ASP.NET Core 8. https://www.prafulchauhan.com/blogs/deep-dive-into-minimal-apis-in-asp-net-core-8
  • Danielyan, M. (2025, February 4). When to Choose Monolith Over Microservices. https://mikadanielyan.com/blog/when-to-choose-monolith-over-microservices
  • Fowler, M. (2015, June 3). Monolith First. https://martinfowler.com/bliki/MonolithFirst.html
  • Fröller, J. (2025, October 30). Many Microservice Architectures Are Just Distributed Monoliths. MerginIT Blog. https://merginit.com/blog/31102025-microservices-antipattern-distributed-monolit
  • Iyer, A. (2025, June 3). Why 90% of Microservices Still Ship Like Monoliths. The New Stack. https://thenewstack.io/why-90-of-microservices-still-ship-like-monoliths/
  • Microsoft. (n.d.-a). Domain analysis for microservices. Azure Architecture Center. https://learn.microsoft.com/en-us/azure/architecture/microservices/model/domain-analysis
  • Microsoft. (n.d.-b). API gateways. Azure Architecture Center. https://learn.microsoft.com/en-us/azure/architecture/microservices/design/gateway
  • Moukbel, T. (2025). Event-Driven Architecture: Pitfalls and Best Practices. Undercode Testing. https://undercodetesting.com/event-driven-architecture-pitfalls-and-best-practices/
  • Nag, A. (2025, July 29). Why Minimal APIs in .NET 8 Are Perfect for Microservices Architecture? embarkingonvoyage.com. https://embarkingonvoyage.com/blog/technologies/why-minimal-apis-in-net-8-are-perfect-for-microservices-architecture/
  • Polishchuk. (2025, December 12). Design Microservices: Using DDD Bounded Contexts. bool.dev. https://bool.dev/blog/detail/ddd-bounded-contexts
  • Three Dots Labs. (n.d.). Event-Driven Architecture: The Hard Parts. https://threedots.tech/episode/event-driven-architecture/
Beyond the Version Bump: Lessons from Upgrading React Native 0.72.7 → 0.82

Introduction

When I started investigating the React Native upgrade from 0.72.7 to 0.82, my initial goal was simple: check breaking changes and library compatibility. But very quickly, I realized this upgrade was not just a version bump. It was deeply tied to React Native’s New Architecture, especially the Fabric UI engine and TurboModules. This blog shares what I discovered, what changed internally, and why this upgrade matters in real-world apps, not just release notes.

Why I Started Digging Deeper  

At first glance:  

  • The app was already stable 
  • Performance was “acceptable” 
  • Most screens worked fine

Why should we even care about Fabric and TurboModules while upgrading?

The answer became clear when I compared how React Native worked internally in 0.72.7 vs 0.82. 

The Reality in React Native 0.72.7 (Old Architecture) 

In 0.72.7, even though the New Architecture existed, most apps were still effectively running on the old bridge model. 

What I Observed 

  • UI updates were asynchronous 
  • JS → Native communication relied on serialized messages 
  • Native modules were eagerly loaded 
  • Startup time increased as the app grew 

Performance issues appeared under: 

  • Heavy animations 
  • Large FlatLists 
  • Complex navigation stacks 

None of these were “bugs”; they were architectural limitations.

What Changed in React Native 0.82 

By the time I reached 0.82, it was clear that Fabric and TurboModules were no longer optional concepts; they were becoming the default future. The upgrade forced me to understand why React Native was redesigned internally.

My Understanding of Fabric UI Engine (After Investigation) 

Fabric is not just a rendering upgrade; it fundamentally changes how UI updates happen.

What Changed Compared to 0.72.7

Synchronous UI Updates

Earlier: 

  • UI updates waited for the JS bridge 

With Fabric: 

  • UI updates can happen synchronously 
  • JS and Native talk directly through JSI 
  • Result: noticeably smoother interactions 

This became obvious in: 

  • Gesture-heavy screens 
  • Navigation transitions 
  • Scroll performance 

Shared C++ Core

While upgrading, I noticed Fabric uses a shared C++ layer between: 

  • JavaScript 
  • iOS 
  • Android 

This reduces: 

  • Data duplication 
  • Platform inconsistencies 
  • Edge-case UI bugs 

From a maintenance point of view, this is huge. 

Better Support for Concurrent Rendering

Fabric is built with modern React features in mind. 

That means: 

  • Rendering can be interrupted 
  • High-priority UI updates are not blocked 
  • Heavy JS work doesn’t freeze the UI 

In practical terms: 

The app feels more responsive, even when doing more.

TurboModules: The Bigger Surprise for Me

I initially thought TurboModules were just an optimization. After digging into the upgrade docs and native code, I realized they solve multiple real pain points I had faced earlier.

What I Faced in 0.72.7 

  • All native modules are loaded at startup 
  • App launch time increased as features grew 
  • Debugging JS ↔ Native mismatches was painful 
  • Weak type safety caused runtime crashes 

What TurboModules Changed:

Lazy Loading by Default

With TurboModules: 

  • Native modules load only when accessed 
  • Startup time improves automatically 
  • Memory usage drops 

This alone makes a big difference in large apps. 

Direct JS ↔ Native Calls (No Bridge)

TurboModules use JSI instead of the old bridge. 

That means: 

  • No JSON serialization 
  • No async message queue 
  • Direct function calls 

From a performance perspective, this is a game-changer. 

Stronger Type Safety

Using codegen, the interface between JS and Native becomes: 

  • Explicit 
  • Predictable 
  • Compile-time safe 

Fabric + TurboModules Together (The Real Upgrade) 

What I realized during this migration is: 

Fabric and TurboModules don’t shine individually they shine together.  

Area               | React Native 0.72.7 | React Native 0.82
UI Rendering       | Async bridge        | Synchronous (Fabric)
Native Calls       | Serialized          | Direct (JSI)
Startup Time       | Slower              | Faster
Animations         | Jank under load     | Smooth
Native Integration | Fragile             | Strong & typed
Scalability        | Limited             | Production-ready

My Final Take After the Upgrade Investigation 

Upgrading from 0.72.7 to 0.82 made one thing very clear to me: 

This is not about chasing versions. This is about adopting a new foundation. 

 Fabric and TurboModules: 

  • Remove long-standing architectural bottlenecks 
  • Make React Native feel closer to truly native apps 
  • Prepare apps for future React features 
  • Reduce hidden performance debt 

If someone asks me now: 

“Is the New Architecture worth it?” 

My answer is simple: 

If you care about performance, scalability, and long-term maintenance: yes, absolutely.

How to Secure Applications During Modernization on AWS

Why Do We Need to Secure Our Applications?  

Cloud environments are very dynamic and interconnected. A single misconfiguration or exposed API key can lead to:  

  • Data breaches 
  • Compliance violations 
  • Costly downtime 

Attackers often target application-level weaknesses, not just infrastructure gaps. If any application handles sensitive data, financial transactions, or user credentials, security is critical. 

Common Mistakes Made When Building Applications

  • Hardcoding API keys and credentials 
  • Ignoring dependency vulnerabilities 
  • Skipping encryption/decryption for sensitive data 

Essential Security Best Practices

1. Identity and Access Management (IAM)

  • Create dedicated IAM roles for your Lambda functions, EC2 instances, or ECS tasks instead of hardcoding access keys in your application. 
  • We must regularly review who has permissions using the IAM Access Analyzer. 
  • We must avoid using the root account for day-to-day operations, or for any operations as a developer.

[Screenshots: creating a dedicated IAM role in the AWS console]

2. Don’t Store/Share Secrets in Your Code

Your appsettings.json is not the right place for secrets such as API keys or database passwords.

  • We must use AWS Secrets Manager or Parameter Store to keep secrets safe.
  • Fetch keys at runtime by using the AWS SDK for .NET or the AWSSDK.Extensions.NETCore.Setup configuration provider (see the sketch after the screenshots below).

[Screenshots: creating a secret in Secrets Manager and reading it at runtime]
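
The blog's examples target the AWS SDK for .NET; purely to illustrate the runtime-fetch pattern in a compact way, here is a Python sketch using boto3. The secret name and region are hypothetical.

import json
import boto3

def get_database_credentials(secret_name: str, region: str = "us-east-1") -> dict:
    """Fetch a secret at runtime instead of shipping it in configuration files."""
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

# Example (hypothetical secret name):
# creds = get_database_credentials("prod/orders-db")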

3. Always Encrypt Data 

Encrypting sensitive data, both in transit and at rest, is one of the most important best practices.

  • Enable HTTPS by default for all your endpoints.  
  • Use AWS Certificate Manager (ACM) to issue and manage SSL/TLS certificates. 
  • In your application, make sure that all traffic is redirected to HTTPS by adding app.UseHttpsRedirection(); 
  • Use AWS KMS to encrypt your S3 buckets, RDS databases, and EBS volumes.
  • If you’re using SQL Server on RDS, enable Transparent Data Encryption (TDE). 

 Encrypt & Decrypt API Key with KMS 

[Screenshots: encrypting and decrypting an API key with KMS]
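
As a rough sketch of the same encrypt/decrypt flow outside the console, the snippet below uses boto3 with a hypothetical KMS key alias; the AWS SDK for .NET exposes equivalent KMS operations.

import boto3

kms = boto3.client("kms", region_name="us-east-1")
KEY_ID = "alias/app-data-key"  # hypothetical key alias

def encrypt_api_key(plaintext: str) -> bytes:
    """Encrypt a small secret (such as an API key) under the KMS key."""
    return kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext.encode())["CiphertextBlob"]

def decrypt_api_key(ciphertext: bytes) -> str:
    """Decrypt a ciphertext blob previously produced by encrypt_api_key."""
    return kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"].decode()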

4. Build a Secure Network Foundation

  • Use VPCs with private subnets for backend services.
  • Control traffic with Security Groups and Network ACLs.
  • Use VPC Endpoints to keep traffic within AWS’s private network.
  • Use AWS WAF to protect your APIs, and enable AWS Shield to guard against DDoS attacks. 

[Screenshots: security group and VPC configuration]

5. Keep Your Code and Dependencies Clean

Even the best infrastructure can’t save a vulnerable codebase. 

  • Update your .NET SDK and NuGet packages regularly. 
  • Use Amazon Inspector for runtime and AWS environment security, and tools like Dependabot for development-time dependency security, to find vulnerabilities early.
  • Add code review analysis tools (like SonarQube) in your CI/CD pipeline. 

[Screenshot: Amazon Inspector findings]

6. Log Everything and Watch

  • Enable Amazon CloudWatch for central logging, and use AWS X-Ray to trace requests through the application.
  • Turn on CloudTrail to track every API call across your account. 
  • Enable GuardDuty for continuous threat detection. 

 

Getting Started with Python for Automation

Automation has become a core part of modern work, allowing teams to reduce repetitive tasks, save time, and improve accuracy. Whether it’s generating weekly reports, organizing files, processing large amounts of data, or interacting with web applications, automation helps individuals and companies operate more efficiently. Among all programming languages used for automation, Python is one of the most widely adopted because of its simplicity and flexibility. 

Why Python Is Perfect for Automation 

Python is known for having a clean and readable syntax, which makes it easy for beginners to start writing scripts without needing deep programming knowledge. The language is simple enough for non-developers, yet powerful enough for complex automation tasks. Another major advantage is the availability of thousands of libraries. These libraries allow Python to handle file operations, manage Excel sheets, interact with APIs, scrape websites, schedule tasks, and even control web browsers – all with minimal code. Because of this, Python becomes a single tool capable of automating almost any repetitive digital task. 

What You Can Automate with Python 

Python can automate everyday tasks that would otherwise require significant manual effort. Simple tasks like renaming multiple files, organizing folders, or converting file formats can be completed instantly using small scripts. It is also commonly used for automating Excel-based workflows, such as cleaning datasets, merging sheets, generating monthly summaries, or transforming data between formats. Python is equally powerful for web-related automation: collecting data from websites, making API calls, sending automated emails, downloading content, and filling out online forms. For more advanced uses, Python can also automate browser testing, server monitoring, and deployment processes. 

Setting Up Your Python Automation Environment 

Getting started is straightforward. After installing Python, you can use an editor like VS Code or PyCharm to write your scripts. Libraries required for automation can be installed using a single command, making setup simple. Once you have your environment ready, writing your first script usually takes only a few minutes. For example, a short script can rename files in a folder, send an email, or run a function at a specific time of the day. Python’s structure is beginner-friendly, so even basic programming knowledge is enough to start automating everyday tasks. 

Examples of Simple Automation 

A typical example is a script that automatically renames files. Instead of renaming hundreds of files one by one, Python can loop through the folder and rename them instantly. Another example is an automated email script that can send daily reminders or reports. Python can also schedule tasks so that your code runs every morning, every hour, or at any time you choose. These examples show how even small scripts can add real value to your workflow by reducing repetitive manual tasks. 
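
As a minimal sketch of the file-renaming example described above (the folder name and naming scheme are assumptions), a few lines of Python can rename every CSV report in a folder:

from pathlib import Path

def rename_reports(folder: str, prefix: str = "report") -> None:
    """Give every .csv file in the folder a consistent, numbered name."""
    for index, path in enumerate(sorted(Path(folder).glob("*.csv")), start=1):
        path.rename(path.with_name(f"{prefix}_{index:03d}.csv"))

if __name__ == "__main__":
    rename_reports("downloads")  # hypothetical folder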

Best Practices When Building Automation 

As you begin writing automation scripts, it helps to keep the code organized and reliable. Using virtual environments ensures that your project libraries remain clean. Adding error-handling prevents scripts from stopping unexpectedly. Logging enables you to track what your script does and when it executes. Once your automation is ready, you can run it automatically using tools like Task Scheduler on Windows or cron on Linux, so the script works in the background without your involvement. 
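
A small, generic sketch of the error-handling and logging habits mentioned above (the wrapper and file name are illustrative, not a required pattern):

import logging

logging.basicConfig(
    filename="automation.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def run_safely(task, *args, **kwargs):
    """Run one automation step, log the outcome, and keep the overall job alive on failure."""
    try:
        result = task(*args, **kwargs)
        logging.info("%s succeeded", task.__name__)
        return result
    except Exception:
        logging.exception("%s failed", task.__name__)
        return None

A scheduler such as Task Scheduler or cron can then invoke the wrapped job in the background without your involvement.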

How Companies Use Python Automation 

Python automation is widely used across industries. IT teams rely on it to monitor servers, restart services, and handle deployment tasks. Business teams use it to generate reports, clean data, update dashboards, and manage document workflows. Marketing teams use automation for scraping competitor information, scheduling social media posts, or tracking engagement. For developers, Python helps with testing, error checking, and system integration via APIs. Across all these areas, automation improves efficiency and reduces human error. 

Conclusion 

Python is an excellent starting point for anyone who wants to begin automating daily tasks. Its simplicity, combined with its powerful ecosystem of libraries, makes it accessible to beginners and useful for professionals. Even basic automation scripts can save hours of work, and as you grow more comfortable, you can automate more complex processes involving data, web interactions, and system management. Learning Python for automation not only makes your work easier but also adds valuable skills for professional growth. 

 

Creators in Coding, Copycats in Class: The Double-Edged Sword of Artificial Intelligence

“Powerful technologies require equally powerful ethical guidance.” (Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014).

The ethics of using artificial intelligence depend on how we apply its capabilities—either to enhance learning or to prevent irresponsible practices that may compromise academic integrity. In this blog, I share reflections, experiences, and insights about the impact of AI in our environment, analyzing its role as a creative tool in the hands of developers and as a challenge within the academic context.

Between industry and the classroom

As a Senior Developer, my professional trajectory has led me to delve deeply into the fascinating discipline of software architecture. Currently, I work as a Backend Developer specializing in Microsoft technologies, facing daily the challenges of building robust, scalable, and well-structured systems in the business world.

Alongside my role in the industry, I am privileged to serve as a university professor, teaching four courses. Three of them are fundamental parts of the software development lifecycle: Software Analysis and Design, Software Architecture, and Programming Techniques. This dual perspective—as both a professional and a teacher—has allowed me to observe the rapid changes that technology is generating both in daily development practice and in the formation of future engineers.

Exploring AI as an Accelerator in Software Development

One of the greatest challenges for those studying the software development lifecycle is transforming ideas and diagrams into functional, well-structured projects. I always encourage my students to use Artificial Intelligence as a tool for acceleration, not as a substitute.

For example, in the Software Analysis and Design course, we demonstrate how a BPMN 2.0 process diagram can serve as a starting point for modeling a system. We also work with class diagrams that reflect compositions and various design patterns. AI can intervene in this process in several ways:

  • Code Generation from Models: With AI-based tools, it’s possible to automatically turn a well-built class diagram into the source code foundation needed to start a project, respecting the relationships and patterns defined during modeling.
  • Rapid Project Architecture Setup: Using AI assistants, we can streamline the initial setup of a project by selecting the technology stack, creating folder structures, base files, and configurations according to best practices.
  • Early Validation and Correction: AI can suggest improvements to proposed models, detect inconsistencies, foresee integration issues, and help adapt the design context even before coding begins.

This approach allows students to dedicate more time to understanding the logic behind each component and design principle, instead of spending hours on repetitive setup and basic coding tasks. The conscious and critical use of artificial intelligence strengthens their learning, provides them with more time to innovate, and helps prepare them for real-world industry challenges.

But Not Everything Is Perfect: The Challenges in Programming Techniques

However, not everything is as positive as it seems. In “Programming Techniques,” a course that represents students’ first real contact with application development, the impact of AI is different compared to more advanced subjects. In the past, the repetitive process of writing code—such as creating a simple constructor public Person(), a function public void printFullName(), or practicing encapsulation in Java with methods like public void setName(String name) and public String getName()—kept the fundamental programming concepts fresh and clear while coding.

This repetition was not just mechanical; it reinforced their understanding of concepts like object construction, data encapsulation, and procedural logic. It also played a crucial role in developing a solid foundation that made it easier to understand more complex topics, such as design patterns, in future courses.

Nowadays, with the widespread availability and use of AI-based tools and code generators, students tend to skip these fundamental steps. Instead of internalizing these concepts through practice, they quickly generate code snippets without fully understanding their structure or purpose. As a result, the pillars of programming—such as abstraction, encapsulation, inheritance, and polymorphism—are not deeply absorbed, which can lead to confusion and mistakes later on.

Although AI offers the promise of accelerating development and reducing manual labor, it is important to remember that certain repetition and manual coding are essential for establishing a solid understanding of fundamental principles. Without this foundation, it becomes difficult for students to recognize bad practices, avoid common errors, and truly appreciate the architecture and design of robust software systems.

Reflection and Ethical Challenges in Using AI

Recently, I explained the concept of reflection in microservices to my Software Architecture students. To illustrate this, I used the following example: when implementing the Abstract Factory design pattern within a microservices architecture, the Reflection technique can be used to dynamically instantiate concrete classes at runtime. This allows the factory to decide which object to create based on external parameters, such as a message type or specific configuration received from another service. I consider this concept fundamental if we aim to design an architecture suitable for business models that require this level of flexibility.
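
The classroom example is built on Java/.NET microservices; purely to illustrate the reflection idea in a compact, language-neutral way, here is a Python sketch where the concrete class is resolved at runtime from a message type (the module and class names are hypothetical):

import importlib

class NotificationFactory:
    """Abstract-factory-style creation where the concrete class is chosen at runtime."""

    def create(self, message_type: str):
        # e.g. "email" -> notifications.email.EmailNotification (hypothetical module layout)
        module = importlib.import_module(f"notifications.{message_type}")
        concrete_class = getattr(module, f"{message_type.capitalize()}Notification")
        return concrete_class()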

However, during a classroom exercise where I provided base code, I asked the students to correct an error that I had deliberately injected. The error consisted of an additional parameter in a constructor—a detail that did not cause compilation failures, but at runtime, it caused 2 out of 5 microservices that consumed the abstract factory via reflection to fail. From their perspective, this exercise may have seemed unnecessary, which led many to ask AI to fix the error.

As expected, the AI efficiently eliminated the error but overlooked a fundamental acceptance criterion: that parameter was necessary for the correct functioning of the solution. The task was not to remove the parameter but to add it in the Factory classes where it was missing. Out of 36 students, only 3 were able to explain and justify the changes they made. The rest did not even know what modifications the AI had implemented.

This experience highlights the double-edged nature of artificial intelligence in learning: it can provide quick solutions, but if the context or the criteria behind a problem are not understood, the correction can be superficial and jeopardize both the quality and the deep understanding of the code.

I haven’t limited this exercise to architecture examples alone. I have also conducted mock interviews, asking about basic programming concepts. Surprisingly, even among final-year students who are already doing their internships, the success rate is alarmingly low: approximately 65% to 70% of the questions are answered incorrectly, which would automatically disqualify them in a real technical interview.

Conclusion

Artificial intelligence has become increasingly integrated into academia, yet its use does not always reflect a genuine desire to learn. For many students, AI has turned into a tool for simply getting through academic commitments, rather than an ally that fosters knowledge, creativity, and critical thinking. This trend presents clear risks: a loss of deep understanding, unreflective automation of tasks, and a lack of internalization of fundamental concepts—all crucial for professional growth in technological fields.

Various authors have analyzed the impact of AI on educational processes and emphasize the importance of promoting its ethical and constructive use. As Luckin et al. (2016) suggest, the key lies in integrating artificial intelligence as support for skill development rather than as a shortcut to avoid intellectual effort. Similarly, Selwyn (2019) explores the ethical and pedagogical challenges that arise when technology becomes a quick fix instead of a resource for deep learning.

References:

Aligning Your Requirements with the Sitecore Ecosystem

In my previous blogs, I outlined key considerations for planning a Sitecore migration and shared strategies for executing it effectively. The next critical step is to understand how your business and technical requirements align with the broader Sitecore ecosystem.
Before providing careful recommendations to a customer, it’s essential to map your goals (content management, personalization, multi-site delivery, analytics, and future scalability) onto Sitecore’s composable and cloud-native offerings. This ensures that migration and implementation decisions are not only feasible but optimized for long-term value.
To revisit the foundational steps and execution strategies, check out these two helpful resources:
•  Planning Sitecore Migration: Things to Consider
•  Executing a Sitecore Migration: Development, Performance, and Beyond

Sitecore is not just a CMS; it’s a comprehensive digital experience platform.
Before making recommendations to a customer, it’s crucial to clearly define what is truly needed and to have a deep understanding of how powerful Sitecore is. Its Digital Experience Platform (DXP) capabilities (personalization, marketing automation, and analytics), combined with cloud-native SaaS delivery, enable organizations to scale efficiently, innovate rapidly, and deliver highly engaging digital experiences.
By carefully aligning customer requirements with these capabilities, you can design solutions that not only meet technical and business needs but also maximize ROI, streamline operations, and deliver long-term value.

In this blog, I’ll summarize Sitecore’s Digital Experience Platform (DXP) offerings to explore how each can be effectively utilized to meet evolving business and technical needs.

1. Sitecore XM Cloud

Sitecore Experience Manager Cloud (XM Cloud) is a cloud-native, SaaS, hybrid headless CMS designed to help businesses create and deliver personalized, multi-channel digital experiences across websites and applications. It combines the flexibility of modern headless architecture with robust authoring tools, enabling teams to strike a balance between developer agility and marketer control.

Key Capabilities

  • Cloud-native: XM Cloud is built for the cloud, providing a secure, reliable, scalable, and enterprise-ready system. Its architecture ensures high availability and global reach without the complexity of traditional on-premises systems.
  • SaaS Delivery: Sitecore hosts, maintains, and updates XM Cloud regularly. Organizations benefit from automatic updates, new features, and security enhancements without the need for costly installations or manual upgrades. This ensures that teams always work with the latest technologies while reducing operational overhead.
  • Hybrid Headless: XM Cloud separates content and presentation, enabling developers to build custom front-end experiences using modern frameworks, while marketers utilize visual editing tools like the Page Builder to make real-time changes. This allows routine updates to be handled without developer intervention, maintaining speed and agility.
  • Developer Productivity: Developers can model content with data templates, design reusable components, and assign content through data sources. Sitecore offers SDKs like the Content SDK for building personalized Next.js apps, the ASP.NET Core SDK for .NET integrations, and the Cloud SDK for extending DXP capabilities into Content SDK and JSS applications connected to XM Cloud. Starter kits are provided for setting up the code base.
  • Global Content Delivery: With Experience Edge, XM Cloud provides scalable GraphQL endpoints to deliver content rapidly across geographies, ensuring consistent user experiences worldwide.
  • Extensibility & AI Integration: XM Cloud integrates with apps from the Sitecore Marketplace and leverages Sitecore Stream for advanced AI-powered content generation and optimization. This accelerates content creation while maintaining brand consistency.
  • Continuous Updates & Security: XM Cloud includes multiple interfaces, such as Portal, Deploy, Page Builder, Explorer, Forms, and Analytics, which are regularly updated. The Deploy app is used to deploy projects to XM Cloud.

XM Cloud is ideal for organizations seeking a scalable, flexible, and future-proof content platform, allowing teams to focus on delivering compelling digital experiences rather than managing infrastructure.

2. Experience Platform (XP)

Sitecore Experience Platform (XP) is like an all-in-one powerhouse—it’s a complete box packed with everything you need for delivering personalized, data-driven digital experiences. While Experience Management (XM) handles content delivery, XP adds layers of personalization, marketing automation, and deep analytics, ensuring every interaction is contextually relevant and optimized for each visitor.

Key Capabilities

  • Content Creation & Management: The Content Editor and Experience Editor allow marketers and content authors to create, structure, and manage website content efficiently, supporting collaboration across teams.
  • Digital Marketing Tools: Built-in marketing tools enable the creation and management of campaigns, automating triggers and workflows to deliver personalized experiences across multiple channels.
  • Experience Analytics: XP provides detailed insights into website performance, visitor behavior, and campaign effectiveness. This includes metrics like page performance, conversions, and user engagement patterns.
  • Experience Optimization: Using analytics data, XP allows you to refine content and campaigns to achieve better results. A/B testing and multivariate testing help determine the most effective variations.
  • Path Analyzer: This tool enables you to analyze how visitors navigate through your site, helping you identify bottlenecks, drop-offs, and opportunities to enhance the user experience.

By combining these capabilities, XP bridges content and marketing intelligence, enabling teams to deliver data-driven, personalized experiences while continuously refining and improving digital engagement.

3. Sitecore Content Hub

Sitecore Content Hub unifies content planning, creation, curation, and asset management into a single platform, enabling teams to collaborate efficiently and maintain control across the entire content lifecycle and digital channels.

Key Capabilities

  • Digital Asset Management (DAM): Content Hub organizes and manages images, videos, documents, and other digital assets. Assets can be tagged, annotated, searched, and shared efficiently, supporting teams in building engaging experiences without losing control over asset usage or consistency.
  • Campaign & Content Planning: Teams can plan campaigns, manage editorial calendars, and assign tasks to ensure smooth collaboration between marketing, creative, and operational teams. Structured workflows enforce version control, approvals, and accountability, ensuring that content moves systematically to the end user.
  • AI-Powered Enhancements: Advanced AI capabilities accelerate content operations. These intelligent features reduce manual effort, increase productivity, and help teams maintain brand consistency at scale.
  • Microservice Architecture & Integration & Multi-Channel Delivery: Content Hub is built on a microservice-based architecture, allowing flexible integration with external systems, headless CMS, and cloud development pipelines. Developers can extend capabilities or connect Content Hub to other platforms without disrupting core operations. Content Hub ensures that teams can deliver consistent, high-quality experiences across websites, social media, commerce, and other digital channels.

Sitecore Content Hub empowers organizations to manage content as a strategic asset, streamlining operations, enabling global collaboration, and providing the technical flexibility developers need to build integrated, scalable solutions.

4. Sitecore Customer Data Platform (CDP)

Sitecore Customer Data Platform (CDP) enables organizations to collect customer data across all digital channels, providing a single, unified view of every user. By centralizing behavioral and transactional data, CDP allows businesses to deliver personalized experiences and data-driven marketing at scale.

Key Capabilities

  • Real-Time Data Collection: The Stream API captures live behavioral and transactional data from your applications and sends it to Sitecore CDP in real time. This ensures that customer profiles are always up-to-date and that personalization can be applied dynamically as users interact with your digital properties.
  • Batch Data Upload: For larger datasets, including guest data or offline orders, the Batch API efficiently uploads bulk information into CDP, keeping your customer data repository comprehensive and synchronized.
  • CRUD Operations: Sitecore CDP offers REST APIs for retrieving, creating, updating, and deleting customer data. This enables developers to integrate external systems, enrich profiles, or synchronize data between multiple platforms with ease.
  • Data Lake Export: With the Data Lake Export Service, all organizational data can be accessed from Amazon S3, allowing it to be downloaded locally or transferred to another S3 bucket for analysis, reporting, or integration with external systems.
  • SDK Integrations (Cloud SDK & Engage SDK): Developers can leverage Sitecore’s Cloud SDK and Engage SDK to streamline data collection, manage user information, and integrate CDP capabilities directly into applications. These SDKs simplify the process of connecting applications to XM Cloud and other services to CDP, enabling real-time engagement and seamless data synchronization.

Sitecore CDP captures behavioral and transactional interactions across channels, creating a unified, real-time profile for each customer. These profiles can be used for advanced segmentation, targeting, and personalization, which in turn informs marketing strategies and customer engagement initiatives.
By integrating CDP with other components of the Sitecore ecosystem—such as DXP, XM Cloud, and Content Hub—organizations can efficiently orchestrate personalized, data-driven experiences across websites, apps, and other digital touchpoints.

5. Sitecore Personalize

Sitecore Personalize enables organizations to deliver seamless, consistent, and highly relevant experiences across websites, mobile apps, and other digital channels. By leveraging real-time customer data, predictive insights, and AI-driven decisioning, it ensures that the right content, offers, and messages get delivered to the target customer/audience.

Key Capabilities

  • Personalized Experiences: Deliver tailored content and offers based on real-time user behavior, predictive analytics, and unified customer profiles. Personalization can be applied across web interactions, server-side experiences, and triggered channels, such as email or SMS, ensuring every interaction is timely and relevant.
  • Testing and Optimization: Conduct A/B/n tests and evaluate which variations perform best based on actual customer behavior. This enables continuous optimization of content, campaigns, and personalization strategies.
  • Performance Analytics: Track user interactions and measure campaign outcomes to gain actionable insights. Analytics support data-driven refinement of personalization, ensuring experiences remain effective and relevant.
  • Experiences and Experiments: Helps to create a tailored experience for each user depending on interaction and any other relevant user data.
  • AI-Driven Assistance: The built-in Code Assistant can turn natural language prompts into JavaScript, allowing developers to quickly create custom conditions, session traits, and programmable personalization scenarios without writing code from scratch.

By combining real-time data from CDP, content from XM Cloud and Content Hub, and AI-driven decisioning, Sitecore Personalize allows organizations to orchestrate truly unified, intelligent, and adaptive customer experiences. This empowers marketers and developers to respond dynamically to signals, test strategies, and deliver interactions that drive engagement and value, along with a unique experience for users.

6. Sitecore Send

Sitecore Send is a cloud-based email marketing platform that enables organizations to create, manage, and optimize email campaigns. By combining automation, advanced analytics, and AI-driven capabilities, marketing teams can design, execute, and optimize email campaigns efficiently without relying heavily on IT support.

Key Capabilities

  • Campaign Creation & Management: Sitecore Send offers a no-code campaign editor that enables users to design campaigns through drag-and-drop and pre-built templates. Marketers can create campaigns quickly, trigger messages automatically, and also perform batch sends.
  • A/B Testing & Optimization: Campaigns can be A/B tested to determine which version resonates best with the target audience, helping improve open rates, click-through rates, and overall engagement.
  • AI-Powered Insights: Built-in AI capabilities help optimize send times, segment audiences, and predict engagement trends, ensuring messages are timely, relevant, and impactful.
  • API Integration: The Sitecore Send API enables developers to integrate email marketing functionality directly into applications. It supports tasks such as:
    • Creating and managing email lists
    • Sending campaigns programmatically
    • Retrieving real-time analytics
    • Automating repetitive tasks

This API-driven approach allows teams to streamline operations, accelerate campaign delivery, and leverage programmatic control over their marketing initiatives.

Sitecore Send integrates seamlessly with the broader Sitecore ecosystem, using real-time data from CDP and leveraging content from XM Cloud or Content Hub. Combined with personalization capabilities, it ensures that email communications are targeted, dynamic, and aligned with overall customer experience strategies.
By centralizing email marketing and providing programmatic access, Sitecore Send empowers organizations to deliver scalable, data-driven campaigns while maintaining full control over creative execution and performance tracking.

7. Sitecore Search

Sitecore Search is a headless search and discovery platform that delivers fast, relevant, and personalized results across content and products. It enables organizations to create predictive, AI-powered, intent-driven experiences that drive engagement, conversions, and deeper customer insights.

Key Capabilities

  • Personalized Search & Recommendations: Uses visitor interaction tracking and AI/ML algorithms to deliver tailored search results and product/content recommendations in real time.
  • Headless Architecture: Decouples search and discovery from presentation, enabling seamless integration across websites, apps, and other digital channels.
  • Analytics & Optimization: Provides rich insights into visitor behavior, search performance, and business impact, allowing continuous improvement of search relevance and engagement.
  • AI & Machine Learning Core: Sophisticated algorithms analyze large datasets (including visitor location, preferences, interactions, and purchase history) to deliver predictive, personalized experiences.

With Sitecore Search, organizations can provide highly relevant, omnichannel experiences powered by AI-driven insights and advanced analytics.

8. Sitecore Discover

Sitecore Discover is an AI-driven product search offering similar to Sitecore Search, but more product- and commerce-centric. It enables merchandisers and marketers to deliver personalized shopping experiences across websites and apps. By tracking user interactions, it generates targeted recommendations using AI recipes, such as similar products and items bought together, which helps increase engagement and conversions. Merchandisers can configure pages and widgets via the Customer Engagement Console (CEC) to create tailored, data-driven experiences without developer intervention.

Search vs. Discover

  • Sitecore Search: Broad content/product discovery, developer-driven, AI/ML-powered relevance, ideal for general omnichannel search. Optimized for content and product discovery.
  • Sitecore Discover: Commerce-focused product recommendations, merchandiser-controlled, AI-driven personalization for buying experiences. Optimized for commerce personalization and merchandising.

9. Sitecore Connect

Sitecore Connect is an integration tool that enables seamless connections between Sitecore products and other applications in your ecosystem, creating end-to-end, connected experiences for websites and users.

Key Capabilities

  • Architecture: Built around recipes and connectors, Sitecore Connect offers a flexible and scalable framework for integrations.
  • Recipes: Automated workflows that define triggers (events occurring in applications) and actions (tasks executed when specific events occur), enabling process automation across systems.
  • Connectors: Manage connectivity and interactivity between applications, enabling seamless data exchange and coordinated workflows without requiring complex custom coding.

With Sitecore Connect, organizations can orchestrate cross-system processes, synchronize data, and deliver seamless experiences across digital touchpoints, all while reducing manual effort and integration complexity.

10. OrderCloud

OrderCloud is a cloud-based, API-first, headless commerce and marketplace platform designed for B2B, B2C, and B2X scenarios. It provides a flexible, scalable, and fully customizable eCommerce architecture that supports complex business models and distributed operations.

Key Capabilities

  • Headless & API-First: Acts as the backbone of commerce operations, allowing businesses to build and connect multiple experiences such as buyer storefronts, supplier portals, or admin dashboards—on top of a single commerce platform.
  • Customizable Commerce Solutions: Supports large and complex workflows beyond traditional shopping carts, enabling tailored solutions for distributed organizations.
  • Marketplace & Supply Chain Support: Facilitates selling across extended networks, including suppliers, franchises, and partners, while centralizing order management and commerce operations.

OrderCloud empowers organizations to scale commerce operations, extend digital selling capabilities, and create fully customized eCommerce experiences, all while leveraging a modern, API-first headless architecture.

Final Thoughts

Sitecore’s composable DXP products and its suite of SDKs empower organizations to build scalable, personalized, and future-ready digital experiences. By understanding how each component fits into your architecture and aligns with your business goals, you can make informed decisions that drive long-term value. Whether you’re modernizing legacy systems or starting fresh in the cloud, aligning your strategy with Sitecore’s capabilities ensures a smoother migration and a more impactful digital transformation.

Seamless Integration of DocuSign with Appian: A Step-by-Step Guide https://blogs.perficient.com/2025/11/05/seamless-integration-of-docusign-with-appian-a-step-by-step-guide/ https://blogs.perficient.com/2025/11/05/seamless-integration-of-docusign-with-appian-a-step-by-step-guide/#respond Wed, 05 Nov 2025 09:13:16 +0000 https://blogs.perficient.com/?p=388176

Introduction

In today’s digital-first business landscape, streamlining document workflows is essential for operational efficiency and compliance. DocuSign, a global leader in electronic signatures, offers secure and legally binding digital signing capabilities. When integrated with Appian, a powerful low-code automation platform, organizations can automate approval processes, reduce manual effort, and enhance document governance.

This guide walks you through the process of integrating DocuSign as a Connected System within Appian, enabling seamless eSignature workflows across your enterprise applications.

 

Why DocuSign?

DocuSign empowers organizations to manage agreements digitally with features that ensure security, compliance, and scalability.

Key Capabilities:

  • Legally Binding eSignatures compliant with ESIGN Act (U.S.), eIDAS (EU), and ISO 27001.
  • Workflow Automation for multi-step approval processes.
  • Audit Trails for full visibility into document activity.
  • Reusable Templates for standardized agreements.
  • Enterprise-Grade Security with encryption and access controls.
  • Pre-built Integrations with platforms like CRM, ERP, and BPM—including Appian.

Integration Overview

Appian’s native support for DocuSign as a Connected System simplifies integration, allowing developers to:

  • Send documents for signature
  • Track document status
  • Retrieve signed documents
  • Manage signers and templates

Prerequisites

Before starting, ensure you have:

  1. Appian Environment with admin access
  2. DocuSign Developer or Production Account
  3. API Credentials: Integration Key, Client Secret, and RSA Key

Step-by-Step Integration

Step 1: Register Your App in DocuSign

  1. Log in to the DocuSign Developer Portal
  2. Navigate to Apps and Keys → Add App
  3. Generate:
    • Integration Key
    • Secret Key
    • RSA Key
  4. Add your Appian environment’s Redirect URI:

https://<your-appian-environment>/suite/rest/authentication/callback

  5. Enable GET and POST methods and save changes.

Step 2: Configure OAuth in Appian

  1. In Appian’s Admin Console, go to Authentication → Web API Authentication
  2. Add DocuSign credentials under Appian OAuth 2.0 Clients
  3. Ensure all integration details match those from DocuSign

Step 3: Create DocuSign Connected System

  1. Open Appian Designer → Connected Systems
  2. Create a new system:
    • Type: DocuSign
    • Authentication: Authorization Code Grant
    • Client ID: DocuSign Integration Key
    • Client Secret: DocuSign Secret Key
    • Base URL:
      • Development: https://account-d.docusign.com
      • Production: https://account.docusign.com
  3. Click Test Connection to validate setup


Step 4: Build Integration Logic

  1. Go to Integrations → New Integration
  2. Select the DocuSign Connected System
  3. Configure actions (a sketch of the underlying REST call follows this list):
    • Send envelope
    • Check envelope status
    • Retrieve signed documents
  4. Save and test the integration
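
For context, each of these actions ultimately corresponds to a call against DocuSign's eSignature REST API, which the Connected System makes for you. The plain-Java sketch below is for illustration only and is not Appian code: the demo base URI, account ID, access token, signer details, and document content are all placeholders under the assumption of a developer sandbox.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EnvelopeSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: the Connected System supplies these in a real Appian integration
        String baseUri = "https://demo.docusign.net/restapi"; // developer sandbox base URI (assumption)
        String accountId = "<account-id>";
        String accessToken = "<oauth-access-token>";

        // A one-document, one-signer envelope, sent immediately (status = "sent")
        String documentBase64 = Base64.getEncoder()
                .encodeToString("Sample agreement text".getBytes(StandardCharsets.UTF_8));
        String body = """
            {
              "emailSubject": "Please sign this agreement",
              "status": "sent",
              "documents": [
                {"documentBase64": "%s", "name": "Agreement.txt", "fileExtension": "txt", "documentId": "1"}
              ],
              "recipients": {
                "signers": [
                  {"email": "signer@example.com", "name": "Jane Signer", "recipientId": "1"}
                ]
              }
            }
            """.formatted(documentBase64); // text blocks require Java 15+

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUri + "/v2.1/accounts/" + accountId + "/envelopes"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body()); // envelopeId is in the JSON response
    }
}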


Step 5: Embed Integration in Your Appian Application

  1. Add integration logic to Appian interfaces and process models
  2. Use forms to trigger DocuSign actions
  3. Monitor API usage and logs for performance and troubleshooting

Integration Opportunities

🔹 Legal Document Processing

Automate the signing of SLAs, MOUs, and compliance forms using DocuSign within Appian workflows. Ensure secure access, maintain version control, and simplify recurring agreements with reusable templates.

🔹 Finance Approvals

Digitize approvals for budgets, expenses, and disclosures. Route documents to multiple signers with conditional logic and securely store signed records for audit readiness.

🔹 Healthcare Consent Forms

Send consent forms electronically before appointments. Automatically link signed forms to patient records while ensuring HIPAA-compliant data handling.

Conclusion

Integrating DocuSign with Appian enables organizations to digitize and automate document workflows with minimal development effort. This powerful combination enhances compliance, accelerates approvals, and improves user experience across business processes.

For further details, refer to:

Spring Boot + OpenAI : A Developer’s Guide to Generative AI Integration https://blogs.perficient.com/2025/10/27/spring-boot-openai-a-developers-guide-to-generative-ai-integration/ https://blogs.perficient.com/2025/10/27/spring-boot-openai-a-developers-guide-to-generative-ai-integration/#respond Mon, 27 Oct 2025 08:02:27 +0000 https://blogs.perficient.com/?p=387157

Introduction

In this blog, we’ll explore how to connect OpenAI’s API with a Spring Boot application, step by step.

We’ll cover the setup process and walk through the implementation with a practical example.

By integrating OpenAI with Spring Boot, you can create solutions that are not only powerful but also scalable and reliable.

Prerequisites

  • Java 17+
  • Maven
  • Spring Boot (3.x recommended)
  • OpenAI API Key (get it from platform.openai.com)
  • Basic knowledge of REST APIs

OpenAI’s platform documentation helps developers understand how to prompt models to generate meaningful text. It’s basically a cheat sheet for communicating with the AI so that well-written prompts return smart, useful answers.

Implementation in Spring Boot

To integrate OpenAI’s GPT-4o-mini model into a Spring Boot application, we analyzed the structure of a typical curl request and response provided by OpenAI.

API docs reference:

https://platform.openai.com/docs/overview

https://docs.spring.io/spring-boot/index.html

Curl Request

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "assistant", "content": "Hello"},
      {"role": "user", "content": "Hi"}
    ]
  }'

Note-

“role”: “user” – Represents the end-user interacting with the assistant

“role”: “assistant” – Represents the assistant’s response.

The response generated by the model looks like this:

{
  "id": "chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT",
  "object": "chat.completion",
  "created": 1741569952,
  "model": "gpt-4o-mini-2025-04-14",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 19,
    "completion_tokens": 10,
    "total_tokens": 29,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default"
}

 

Controller Class:

In the snippet below, we will explore a simple Spring Boot controller that interacts with OpenAI’s API. When an end user sends a prompt to the endpoint (e.g. /bot/chat?prompt=what is spring boot), the controller reads the model name and API URL from the application.properties file. It then creates a request from the supplied prompt and sends it to OpenAI via a REST call (RestTemplate). After verifying the request, OpenAI sends back a response.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
@RequestMapping("/bot")
public class GenAiController {

    // Model name and API URL are read from application.properties
    @Value("${openai.model}")
    private String model;

    @Value("${openai.api.url}")
    private String apiURL;

    @Autowired
    private RestTemplate template;

    @GetMapping("/chat")
    public String chat(@RequestParam("prompt") String prompt) {
        // Build the chat-completion request from the user's prompt
        GenAiRequest request = new GenAiRequest(model, prompt);
        System.out.println("Request: " + request);
        // The pre-configured RestTemplate adds the Authorization header (see the configuration class below)
        GenAIResponse genAIResponse = template.postForObject(apiURL, request, GenAIResponse.class);
        return genAIResponse.getChoices().get(0).getMessage().getContent();
    }
}

 

Configuration Class:

Annotated with @Configuration, this class defines beans and settings for the application context. It pulls the OpenAI API key from the properties file and creates a customized RestTemplate configured to include the Authorization: Bearer <API_KEY> header in all requests. This setup ensures that every call to OpenAI’s API is authenticated without manually adding headers to each request.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class OpenAIAPIConfiguration {

    @Value("${openai.api.key}")
    private String openaiApiKey;

    @Bean
    public RestTemplate template() {
        RestTemplate restTemplate = new RestTemplate();
        // Interceptor adds the Authorization: Bearer header to every outgoing request
        restTemplate.getInterceptors().add((request, body, execution) -> {
            request.getHeaders().add("Authorization", "Bearer " + openaiApiKey);
            return execution.execute(request, body);
        });
        return restTemplate;
    }
}

Required getters and setters for request and response classes:

Based on the curl request and response structure, we generated the corresponding request and response Java classes, with appropriate getters and setters for the selected attributes that represent the request and response objects. These classes help turn JSON data into objects we can use in code, and turn our code’s data back into JSON when interacting with the OpenAI API. We implemented a bot using the gpt-4o-mini model, integrated it with a REST controller, and handled authentication via the API key.

//Request
@Data
public class GenAiRequest {

    private String model;
    private List<GenAIMessage> messages;

    public List<GenAIMessage> getMessages() {
        return messages;
    }

    public GenAiRequest(String model, String prompt) {
        this.model = model;
        this.messages = new ArrayList<>();
        this.messages.add(new GenAIMessage("user",prompt));
    }
}

@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIMessage {

    private String role;
    private String content;   
    
    public String getContent() {
        return content;
    }
    public void setContent(String content) {
        this.content = content;
    }
}

//Response
@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIResponse {

    private List<Choice> choices;

    public List<Choice> getChoices() {
        return choices;
    }

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public static class Choice {

        private int index;
        private GenAIMessage message;
        public GenAIMessage getMessage() {
            return message;
        }
        public void setMessage(GenAIMessage message) {
            this.message = message;
        }

    }

}

 

Essential Configuration for OpenAI Integration in Spring Boot

To connect your Spring Boot application with OpenAI’s API, you need to define a few key properties in your application.properties or application.yml file:

  • server.port: Specifies the port on which your Spring Boot application will run. You can set it to any available port like 8080, 9090, etc. (The default port for a Spring Boot application is 8080)
  • openai.model: Defines the OpenAI model to be used. In this case, gpt-4o-mini is selected for lightweight and efficient responses.
  • openai.api.key: Your secret API key from OpenAI. This is used to authenticate requests. Make sure to keep it secure and never expose it publicly.
  • openai.api.url: The endpoint URL for OpenAI’s chat completion API. (This is where your application sends prompts and receives responses)
server.port=<add server port>
openai.model=gpt-4o-mini
openai.api.key=XXXXXXXXXXXXXXXXXXXXXXXXXXXX
openai.api.url=https://api.openai.com/v1/chat/completions
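
Because the API key should never be committed to source control, one option (a minimal sketch, assuming you export an environment variable named OPENAI_API_KEY) is to let Spring’s property placeholder resolution pick it up at runtime:

# The key is resolved from the OPENAI_API_KEY environment variable at startup
openai.api.key=${OPENAI_API_KEY}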

 

Postman Collection:

GET API: http://localhost:<port>/bot/chat?prompt=What is spring boot used for ?

Content-Type: application/json
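
If you prefer to exercise the endpoint from code rather than Postman, a minimal client sketch (assuming the application is running on the default port 8080) could look like this:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ChatClientSketch {
    public static void main(String[] args) throws Exception {
        // URL-encode the prompt so spaces and punctuation survive the query string
        String prompt = URLEncoder.encode("What is spring boot used for?", StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/bot/chat?prompt=" + prompt))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // the model's answer as plain text
    }
}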


Usage of Spring Boot + OpenAI Integration

  • AI-Powered Chatbots: Build intelligent assistants for customer support, internal helpdesks, or onboarding systems.
  • Content Generation Tools: Automate blog writing, email drafting, product descriptions, or documentation, and generate personalized content based on user input.
  • Code Assistance & Review: Create tools that help developers write, refactor, or review code using AI, and integrate with IDEs or CI/CD pipelines for smart suggestions.
  • Data Analysis & Insights: Use AI to interpret data, generate summaries, and answer questions about datasets; combine with Spring Boot APIs to serve insights to dashboards or reports.
  • Search Enhancement: Implement semantic search or question-answering systems over documents or databases; use embeddings and GPT to improve relevance and accuracy.
  • Learning & Training Platforms: Provide personalized tutoring, quizzes, and explanations using AI, and adapt content based on user performance and feedback.
  • Email & Communication Automation: Draft, summarize, or translate emails and messages, and integrate with enterprise communication tools.
  • Custom usages: In a business-to-business context, usage can be customized according to specific client requirements.
Perficient Wins Silver w3 Award for AI Utility Integration https://blogs.perficient.com/2025/10/24/perficient-awarded-w3-award-for-ai-integration/ https://blogs.perficient.com/2025/10/24/perficient-awarded-w3-award-for-ai-integration/#respond Fri, 24 Oct 2025 15:49:49 +0000 https://blogs.perficient.com/?p=387677

We’re proud to announce that we’ve been honored with a Silver w3 Award in the Emerging Tech Features – AI Utility Integration category for our work with a top 20 U.S. utility provider. This recognition from the Academy of Interactive and Visual Arts (AIVA) celebrates our commitment to delivering cutting-edge, AI-powered solutions that drive real-world impact in the energy and utilities sector.

“Winning this w3 Award speaks to our pragmatism–striking the right balance between automation capabilities and delivering true business outcomes through purposeful AI adoption,” said Mwandama Mutanuka, Managing Director of Perficient’s Intelligent Automation practice. “Our approach focuses on understanding the true cost of ownership, evaluating our clients’ existing automation tech stack, and building solutions with a strong business case to drive impactful transformation.”

Modernizing Operations with AI

The award-winning solution centered on the implementation of a ServiceNow Virtual Agent to streamline internal service desk operations for a major utility provider serving millions of homes and businesses across the United States. Faced with long wait times and a high volume of repetitive service requests, the client sought a solution that would enhance productivity, reduce costs, and improve employee satisfaction.

Our experts delivered a two-phase strategy that began with deploying an out-of-the-box virtual agent capable of handling low-complexity, high-volume requests. We then customized the solution using ServiceNow’s Conversational Interfaces module, tailoring it to the organization’s unique needs through data-driven topic recommendations and user behavior analysis. The result was an intuitive, AI-powered experience that allowed employees and contractors to self-serve common IT requests, freeing up service desk agents to focus on more complex work and significantly improving operational efficiency.

Driving Adoption Through Strategic Change Management

Adoption is the key to unlocking the full value of any technology investment. That’s why our team partnered closely with the client’s corporate communications team to launch a robust change management program. We created a branded identity for the virtual agent, developed engaging training materials, and hosted town halls to build awareness and excitement across the organization. This holistic approach ensured high engagement and a smooth rollout, setting the foundation for long-term success.

Looking Ahead

The w3 Award is a reflection of our continued dedication to innovation, collaboration, and excellence. As we look to the future, we remain committed to helping enterprises across industries harness the full power of AI to transform their operations. Explore the full success story to learn more about how we’re powering productivity with AI, and visit the w3 Awards Winners Gallery to see our recognition among the best in digital innovation.

For more information on how Perficient can help your business with integrated AI services, contact us today.

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 2 https://blogs.perficient.com/2025/10/03/transform-your-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-2/ https://blogs.perficient.com/2025/10/03/transform-your-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-2/#comments Fri, 03 Oct 2025 07:25:24 +0000 https://blogs.perficient.com/?p=387517

Introduction:

Custom code in Talend offers a powerful way to make batch processing more efficient by allowing developers to implement specialized logic that is not available through Talend’s standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files to meet specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Talend Components:

Key components for batch processing are mentioned below:

  • tDBConnection: Establishes and manages a database connection within a job, allowing a single configured connection to be reused throughout the Talend job.
  • tFileInputDelimited: For reading data from flat files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Data transformation allows you to map input data with output data and enables you to perform data filtering, complex data manipulation, typecasting, and multiple input source joins.
  • tJavaRow: It can be used as an intermediate component, and we are able to access the input flow and transform the data using custom Java code.
  • tJava: It has no input or output data flow & can be used independently to Integrate custom Java code.
  • tPreJob, tPostJob: PreJob start the execution before the job & PostJob at the end of the job.
  • tDBOutput: Supports wide range of databases & used to write data to various databases.
  • tDBCommit: It commits the changes made to a connected database during a Talend job, ensuring that the data modifications are permanently recorded.
  • tDBClose: It explicitly closes a database connection that was opened by a tDBConnection component.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.
  • tDie: We can stop the job execution explicitly if it fails. In addition, we can create a customized warning message and exit code.

Workflow with example:

To process bulk data in Talend, we can implement batch processing to handle flat file data with minimal execution time. We could read the flat file data and insert it into a MySQL database table as the target without batch processing, but that data flow would take considerably longer to execute. Using batch processing with custom code, the entire source file is written to the MySQL database table in batches of records with minimal execution time.

Talend Job Design

Solution:

  • Establish the database connection at the start of the execution so that we can reuse.
  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and then divide the result by the batch size. Take the nearest whole number; this indicates the total number of batches (chunks).

    Calculate the batch size from the total row count

  • Now use tFileInputDelimited component to read the source file content. In the tMap component, utilize the sequence Talend function to generate row numbers for your data mapping and transformation tasks. Then, load all of the data into the tHashOutput component, which stores the data into a cache.
  • Iterate the loop based on the calculated whole number using tLoop
  • Retrieve all the data from tHashInput component.
  • Filter the dataset retrieved from tHashInput component based on the rowNo column in the schema using tFilterRow

Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range will be 1 to 100.
    If the third iteration is in progress and the batch size is 100, the range will be 201 to 300.
    In general, the range is [(iteration - 1) * batchSize + 1] to [iteration * batchSize]; for iteration 3 that gives (3 - 1) * 100 + 1 = 201 and 3 * 100 = 300, so the dataset range for the 3rd iteration is 201 to 300 (see the Java sketch after this list).
  • Finally, extract the rows whose rowNo falls within that range and write the batch data to the MySQL database table using tDBOutput.
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows complex data transformations, and offers unique match, first match, and all matches join options when looking up data.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
  • At the end of the job execution, we are committing the database modification & closing the connection to release the database resource.
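
The batch arithmetic above is straightforward to express as custom code. The following is a minimal plain-Java sketch of the kind of logic you might place in a tJava component; the variable names, the sample row count, and the use of Math.ceil to round up to a whole number of batches are assumptions for illustration, not the exact code of the job:

// Plain-Java illustration of the batch calculation described above.
// In the real job these values come from context variables and tFileRowCount.
public class BatchMathSketch {
    public static void main(String[] args) {
        int totalRowCount = 1001;   // rows reported by tFileRowCount (placeholder)
        int headerCount = 1;        // header rows to skip
        int batchSize = 100;        // e.g. context.batchSize (assumed name)

        // Number of batches = data rows / batch size, rounded up so the last partial batch is kept
        int dataRows = totalRowCount - headerCount;
        int numberOfBatches = (int) Math.ceil((double) dataRows / batchSize);

        // tLoop iterates from 1 to numberOfBatches; tFilterRow keeps rows where rowNo is in [from, to]
        for (int iteration = 1; iteration <= numberOfBatches; iteration++) {
            int from = (iteration - 1) * batchSize + 1;
            int to = iteration * batchSize;
            System.out.printf("Iteration %d -> rowNo between %d and %d%n", iteration, from, to);
        }
    }
}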

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With the batch processing, it can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 1

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 1 https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/ https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/#respond Fri, 03 Oct 2025 07:22:35 +0000 https://blogs.perficient.com/?p=387572

Introduction:

Custom code in Talend offers a powerful way to make batch processing more efficient by allowing developers to implement specialized logic that is not available through Talend’s standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files to meet specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Understand Batch Processing:

Batch processing is a method of handling high-volume, repetitive data within Talend jobs. The batch method allows users to process large amounts of data when computing resources are available, with little or no user interaction.

Through batch processing, users gather and retain data, subsequently processing it during a designated period referred to as a “batch window.” This method enhances efficiency by establishing processing priorities and executing data tasks in a timeframe that is optimal.

Here, the Talend job takes the total row count from the source file, loads the data from the flat file, processes it in batches (with input provided through context variables), and then writes the data into smaller flat files. This implementation makes it possible to process enormous amounts of data more quickly and reliably than a non-batched implementation.

Batch processing is a method of executing a series of jobs sequentially without user interaction, typically used for handling large volumes of data efficiently. Talend, a prominent and extensively employed ETL (Extract, Transform, Load) tool, utilizes batch processing to facilitate the integration, transformation, and loading of data into data warehouses and various other target systems.

Talend Components:

Key components for batch processing are mentioned below:

  • tFileInputDelimited, tFileOutputDelimited: For reading & writing data from/to files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Used for data transformation; it allows you to map input data to output data and to perform data filtering, complex data manipulation, typecasting, and joins across multiple input sources.
  • tJavaRow: It can be used as an intermediate component, and we are able to access the input flow and transform the data using custom Java code.
  • tJava: It has no input or output data flow & can be used independently to Integrate custom Java code.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.

Workflow with example:

To process bulk data in Talend, we can implement batch processing to handle flat file data with minimal execution time. We could read the flat file data and write it into chunks of another flat file as the target without batch processing, but that data flow would take considerably longer to execute. Using batch processing with custom code, the entire source file is written into chunks of files at the target location with minimal execution time.

Talend job design

Solution:

  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and then divide the result by the batch size. Take the nearest whole number; this indicates the total number of batches (chunks).

    Calculate the batch size from the total row count

  • Now use tFileInputDelimited component to read the source file content. In the tMap component, utilize the sequence Talend function to generate row numbers for your data mapping and transformation tasks. Then, load all of the data into the tHashOutput component, which stores the data into a cache.
  • Iterate the loop based on the calculated whole number using tLoop
  • Retrieve all the data from tHashInput component.
  • Filter the dataset retrieved from tHashInput component based on the rowNo column in the schema using tFilterRow

    Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range will be 1 to 100.
    If the third iteration is in progress and the batch size is 100, the range will be 201 to 300.
    In general, the range is [(iteration - 1) * batchSize + 1] to [iteration * batchSize]; for iteration 3 that gives (3 - 1) * 100 + 1 = 201 and 3 * 100 = 300, so the dataset range for the 3rd iteration is 201 to 300 (see the sketch after this list).
  • Finally, extract the rows whose rowNo falls within that range and write them into a chunk of the output target file using tFileOutputDelimited.
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows complex data transformations, and offers unique match, first match, and all matches join options when looking up data.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
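
To make the chunking logic concrete, here is a minimal plain-Java sketch of what the loop, filter, and file-output steps accomplish together. The file names, batch size, and header handling are assumptions for illustration; in the actual job, tLoop, tFilterRow, and tFileOutputDelimited do this declaratively.

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ChunkWriterSketch {
    public static void main(String[] args) throws Exception {
        // Read the source rows (placeholder file name) and skip the header row
        List<String> allLines = Files.readAllLines(Path.of("source.csv"));
        List<String> rows = allLines.subList(1, allLines.size());

        int batchSize = 100; // e.g. context.batchSize (assumed name)
        int numberOfBatches = (int) Math.ceil((double) rows.size() / batchSize);

        // Each iteration keeps the rows for one batch and writes them to their own chunk file,
        // mirroring what tLoop + tFilterRow + tFileOutputDelimited do in the job
        for (int i = 1; i <= numberOfBatches; i++) {
            int from = (i - 1) * batchSize;
            int to = Math.min(i * batchSize, rows.size());
            Path chunk = Path.of("target_chunk_" + i + ".csv");
            Files.write(chunk, rows.subList(from, to));
            System.out.println("Wrote " + (to - from) + " rows to " + chunk);
        }
    }
}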

 

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With the batch processing, it can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 2
