Integration + IT Modernization Articles / Blogs / Perficient
https://blogs.perficient.com/category/technical/integration-it-modernization/

Bruno: The Developer-Friendly Alternative to Postman
https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/ (published Fri, 02 Jan 2026)

If you’re knee-deep in building apps, you already know APIs are the backbone of everything. Testing them? That’s where the real magic happens. For years, we’ve relied on tools like Postman and Insomnia to send requests, debug issues, and keep things running smoothly. But lately, there’s a buzz about something new: Bruno. It’s popping up everywhere, and developers are starting to make the switch. Why? Let’s dive in.

What Exactly is Bruno?

Picture this: an open-source, high-performance API client that puts your privacy first. Bruno isn’t a bloated app that shoves your data into the cloud; it keeps everything right on your local machine. Your API collections, requests, all of it? Safe and sound where you control it, no cloud drama required.

Bruno is built for developers who want:

  • Simplicity without compromise
  • High performance without unnecessary extras
  • Complete freedom with open-source flexibility

It’s like the minimalist toolbox you’ve been waiting for.

Why is Bruno Suddenly Everywhere?

Bruno solves the pain points that frustrate us with other API tools:

  • Privacy First: No forced cloud uploads, your collections stay local. No hidden syncing; your data stays completely under your control.
  • Fast and Lightweight: Loads quickly and handles requests without lag. Perfect for quick tests on the go.
  • Open-Source Freedom: No fees, no lock-in. Collections are Git-friendly and saved as plain text for easy version control.
  • No Extra Bloat: Focused on what matters, API testing without unnecessary features.

Bottom line: Bruno fits the way we work today, collaboratively, securely, and efficiently. It’s not trying to do everything; it’s just good at API testing.

Key Features

Bruno keeps it real with features that matter. Here are the highlights:

  1. Totally Open-Source

  • No sneaky costs or paywalls.
  • Peek under the hood anytime—the code’s all there.
  • An active community of developers contributes on GitHub, making it better every day. Wanna join? Hit up their repo and contribute.
  2. Privacy from the Ground Up

  • Everything lives locally.
  • No accounts, no cloud pushes—your requests don’t leave your laptop.
  • Ideal if you’re handling sensitive APIs and don’t want Big Tool Company snooping.
  • Bonus: Those plain-text files integrate well with Git, so team handoffs are seamless.
  3. Light as a Feather, Fast as Lightning

  • Clean UI, no extra bells and whistles slowing you down.
  • Starts up quickly and zips through responses.
  • Great for solo endpoint tweaks or managing large workflows without your machine slowing.

Getting Bruno Up and Running

Installing Bruno is simple. It works on Windows, macOS, and Linux. Just choose your platform, and you’re good to go.

Quick Install Guide

Windows

  1. Head to Bruno’s GitHub Releases page.
  2. Grab the latest .exe file.
  3. Run it and follow the prompts.
  4. Boom—find it in your Start Menu.

macOS

  1. Download the .dmg from Releases.
  2. Drag it to Applications.
  3. Fire it up and get testing.

Linux

  1. Snag the .AppImage or .deb from Releases.
  2. For AppImage: chmod +x Bruno.AppImage then ./Bruno.AppImage.
  3. For .deb: sudo dpkg -i bruno.deb and sudo apt-get install -f.

GUI or CLI? Your Call

  • GUI: Feels like Postman but cleaner. Visual, easy-to-build requests on the fly.
  • CLI: For the terminal lovers. Automate tests, integrate with CI/CD, or run collections: bruno run collection.bru --env dev.

Build Your First Collection in Minutes

Bruno makes organizing APIs feel effortless. Here’s a no-sweat walkthrough.

Step 1: Fire It Up

Launch Bruno. You’ll see a simple welcome screen prompting you to create a new collection.

Step 2: New Collection Time

  1. Hit “New Collection.”
  2. Name it (say, “My API Playground”).
  3. Pick a folder—it’s all plain text, so Git loves it.

Step 3: Add a Request

  1. Inside the collection, click “New Request.”
  2. Pick your method (GET, POST, etc.).
  3. Enter the URL: https://jsonplaceholder.typicode.com/posts.

Step 4: Headers and Body Magic

  • Add the header: Content-Type: application/json.
  • For POSTs, add a body like:

{
  "title": "Bruno Blog",
  "body": "Testing Bruno API Client",
  "userId": 1
}

Step 5: Hit Send

Click it, and watch the response pop: status, timing, pretty JSON—all right there.

Step 6: Save and Sort

Save the request, create folders for environments or APIs, and use variables to switch setups.

Bruno vs. Postman: Head-to-Head

Postman’s the OG, but Bruno’s the scrappy challenger winning hearts. Let’s compare.

  1. Speed

  • Bruno: Lean and mean—quick loads, light on resources.
  • Postman: Packed with features, but it can feel sluggish on big projects. Edge: Bruno
  2. Privacy

  • Bruno: Local only, no cloud creep.
  • Postman: Syncs to their servers—handy for teams, sketchy for secrets. Edge: Bruno
  3. Price Tag

  • Bruno: Free forever, open-source vibes.
  • Postman: Free basics, but teams and extras? Pay up. Edge: Bruno

 

Feature      | Bruno            | Postman
Open Source  | ✅ Yes           | ❌ No
Cloud Sync   | ❌ No            | ✅ Yes
Performance  | ✅ Lightweight   | ❌ Heavy
Privacy      | ✅ Local Storage | ❌ Cloud-Based
Cost         | ✅ Free          | ❌ Paid Plans

Level up With Advanced Tricks

Environment Variables

Swap envs easy-peasy:

  • Make files for dev/staging/prod.
  • Use {{baseUrl}} in requests.
  • Example:
{
  "baseUrl": "https://api.dev.example.com",
  "token": "your-dev-token"
}

 

Scripting Smarts

Add pre/post scripts for:

  • Dynamic auth: request.headers["Authorization"] = "Bearer " + env.token;
  • Response checks or automations.

Community & Contribution

Bruno is community-driven: development happens in the open on GitHub, where you can report issues, request features, and contribute code.

Conclusion

Bruno isn’t just another API testing tool; it’s designed for developers who want simplicity and control. With local-first privacy, fast performance, open-source flexibility, and built-in Git support, Bruno delivers everything you need without unnecessary complexity.
If you’re tired of heavy, cloud-based clients, it’s time to switch. Download Bruno today from its GitHub Releases page and experience the difference.

 

How to Secure Applications During Modernization on AWS
https://blogs.perficient.com/2025/12/19/how-to-secure-applications-during-modernization-on-aws/ (published Fri, 19 Dec 2025)

Why Do We Need to Secure Our Applications?  

Cloud environments are very dynamic and interconnected. A single misconfiguration or exposed API key can lead to:  

  • Data breaches 
  • Compliance violations 
  • Costly downtime 

Attackers often target application-level weaknesses, not just infrastructure gaps. If an application handles sensitive data, financial transactions, or user credentials, security is critical. 

Common Mistakes Made When Building Applications

  • Hardcoding API keys and credentials 
  • Ignoring dependency vulnerabilities 
  • Skipping encryption/decryption for sensitive data 

Essential Security Best Practices

1. Identity and Access Management (IAM)

  • Create dedicated IAM roles for your Lambda functions, EC2 instances, or ECS tasks instead of hardcoding access keys in your application. 
  • Regularly review who has permissions using IAM Access Analyzer. 
  • Avoid using the root account for day-to-day or developer operations. 

Role Creation

 

Role Creation1

2. Don’t Store/Share Secrets in Your Code

Your appsettings.json is not the right place for secrets such as API keys or database passwords. 

  • We must use AWS Secrets Manager or Parameter Store to keep secrets safe. 
  • Fetch keys at runtime using the AWS SDK for .NET or the AWSSDK.Extensions.NETCore.Setup configuration provider. 

Secretmanager Creation2

Secretmanager Reading
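The screenshots above show the .NET flow. As a minimal, language-agnostic sketch of the same runtime-fetch pattern, the snippet below uses Python and boto3; the secret name and region are placeholder assumptions, not values from the post.

# Minimal sketch (Python/boto3) of fetching a secret at runtime instead of
# storing it in configuration files. Secret name and region are placeholders.
import json
import boto3

def get_database_credentials(secret_name: str = "prod/app/db-credentials",
                             region: str = "us-east-1") -> dict:
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    # Key/value secrets come back as a JSON string in SecretString
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_database_credentials()
    print("Fetched keys:", list(creds.keys()))  # never log the secret values themselves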

3. Always Encrypt Data 

Encryption is essential for protecting sensitive data, both in transit and at rest. 

  • Enable HTTPS by default for all your endpoints.  
  • Use AWS Certificate Manager (ACM) to issue and manage SSL/TLS certificates. 
  • In your application, make sure that all traffic is redirected to HTTPS by adding app.UseHttpsRedirection(); 
  • Use AWS KMS to encrypt your S3 buckets, RDS databases, and EBS volumes.
  • If you’re using SQL Server on RDS, enable Transparent Data Encryption (TDE). 

 Encrypt & Decrypt API Key with KMS 

Encryption Steps

Encryption Decrypt Code
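The screenshots above cover the .NET implementation. For illustration only, here is a minimal Python/boto3 sketch of encrypting and decrypting a small value (such as an API key) with KMS; the key alias is a hypothetical placeholder, and direct KMS encryption is limited to payloads of roughly 4 KB.

# Minimal sketch (Python/boto3): encrypt and decrypt a small value with AWS KMS.
import boto3

kms = boto3.client("kms", region_name="us-east-1")
KEY_ID = "alias/app-api-keys"  # hypothetical key alias

def encrypt_api_key(plaintext: str) -> bytes:
    result = kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext.encode("utf-8"))
    return result["CiphertextBlob"]  # store this, not the plaintext

def decrypt_api_key(ciphertext: bytes) -> str:
    # Symmetric KMS keys do not require the KeyId on decrypt
    result = kms.decrypt(CiphertextBlob=ciphertext)
    return result["Plaintext"].decode("utf-8")

if __name__ == "__main__":
    blob = encrypt_api_key("my-secret-api-key")
    assert decrypt_api_key(blob) == "my-secret-api-key"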

4. Build a Secure Network Foundation

  • Use VPCs with private subnets for backend services. 
  • Control the traffic with Security Groups and Network ACLs. 
  • Use VPC Endpoints to keep traffic within AWS’s private network  
  • Use AWS WAF to protect your APIs, and enable AWS Shield to guard against DDoS attacks. 

Security Group

Vpc Creation

5. Keep Your Code and Dependencies Clean

Even the best infrastructure can’t save a vulnerable codebase. 

  • Update your .NET SDK and NuGet packages regularly. 
  • Use Amazon Inspector for runtime and AWS environment security, and tools like Dependabot for Development-time dependency security to find vulnerabilities early. 
  • Add code review analysis tools (like SonarQube) in your CI/CD pipeline. 

AWS Inspector

6. Log Everything and Watch

  • Enable Amazon CloudWatch for central logging, and use AWS X-Ray to trace requests through the application. 
  • Turn on CloudTrail to track every API call across your account. 
  • Enable GuardDuty for continuous threat detection. 

 

Getting Started with Python for Automation
https://blogs.perficient.com/2025/12/09/getting-started-with-python-for-automation/ (published Tue, 09 Dec 2025)

Automation has become a core part of modern work, allowing teams to reduce repetitive tasks, save time, and improve accuracy. Whether it’s generating weekly reports, organizing files, processing large amounts of data, or interacting with web applications, automation helps individuals and companies operate more efficiently. Among all programming languages used for automation, Python is one of the most widely adopted because of its simplicity and flexibility. 

Why Python Is Perfect for Automation 

Python is known for having a clean and readable syntax, which makes it easy for beginners to start writing scripts without needing deep programming knowledge. The language is simple enough for non-developers, yet powerful enough for complex automation tasks. Another major advantage is the availability of thousands of libraries. These libraries allow Python to handle file operations, manage Excel sheets, interact with APIs, scrape websites, schedule tasks, and even control web browsers – all with minimal code. Because of this, Python becomes a single tool capable of automating almost any repetitive digital task. 

What You Can Automate with Python 

Python can automate everyday tasks that would otherwise require significant manual effort. Simple tasks like renaming multiple files, organizing folders, or converting file formats can be completed instantly using small scripts. It is also commonly used for automating Excel-based workflows, such as cleaning datasets, merging sheets, generating monthly summaries, or transforming data between formats. Python is equally powerful for web-related automation: collecting data from websites, making API calls, sending automated emails, downloading content, and filling out online forms. For more advanced uses, Python can also automate browser testing, server monitoring, and deployment processes. 
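As a small illustration of the file-organization tasks described above, the sketch below sorts the files in a folder into subfolders named after their extensions. The folder path is a placeholder; adapt it to your own environment.

# Sketch: organize files in a folder into subfolders by file extension.
from pathlib import Path
import shutil

def organize_by_extension(folder: str) -> None:
    base = Path(folder)
    for item in base.iterdir():
        if item.is_file():
            # e.g. report.xlsx -> <folder>/xlsx/report.xlsx
            ext = item.suffix.lstrip(".").lower() or "no_extension"
            target_dir = base / ext
            target_dir.mkdir(exist_ok=True)
            shutil.move(str(item), str(target_dir / item.name))

if __name__ == "__main__":
    organize_by_extension("C:/Users/me/Downloads")  # placeholder path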

Setting Up Your Python Automation Environment 

Getting started is straightforward. After installing Python, you can use an editor like VS Code or PyCharm to write your scripts. Libraries required for automation can be installed using a single command, making setup simple. Once you have your environment ready, writing your first script usually takes only a few minutes. For example, a short script can rename files in a folder, send an email, or run a function at a specific time of the day. Python’s structure is beginner-friendly, so even basic programming knowledge is enough to start automating everyday tasks. 
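As a quick sketch of "run a function at a specific time of the day," the example below assumes the third-party schedule package (installed with pip install schedule); the job itself is just a placeholder print statement.

# Sketch: run a function at a fixed time every day using the "schedule" package.
import time
import schedule

def send_daily_report() -> None:
    print("Generating and sending the daily report...")  # replace with real work

schedule.every().day.at("09:00").do(send_daily_report)

while True:
    schedule.run_pending()   # run any job whose scheduled time has arrived
    time.sleep(60)           # check once a minute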

Examples of Simple Automation 

A typical example is a script that automatically renames files. Instead of renaming hundreds of files one by one, Python can loop through the folder and rename them instantly. Another example is an automated email script that can send daily reminders or reports. Python can also schedule tasks so that your code runs every morning, every hour, or at any time you choose. These examples show how even small scripts can add real value to your workflow by reducing repetitive manual tasks. 
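To make the automated email example concrete, here is a minimal sketch using only the standard library; the SMTP host, credentials, and addresses are placeholders.

# Sketch: send an automated reminder email with Python's standard library.
import smtplib
from email.message import EmailMessage

def send_reminder(subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "reports@example.com"
    msg["To"] = "team@example.com"
    msg.set_content(body)

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()                                     # upgrade to an encrypted connection
        server.login("reports@example.com", "app-password")   # placeholder credentials
        server.send_message(msg)

if __name__ == "__main__":
    send_reminder("Daily reminder", "Please review today's report.")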

Best Practices When Building Automation 

As you begin writing automation scripts, it helps to keep the code organized and reliable. Using virtual environments ensures that your project libraries remain clean. Adding error-handling prevents scripts from stopping unexpectedly. Logging enables you to track what your script does and when it executes. Once your automation is ready, you can run it automatically using tools like Task Scheduler on Windows or cron on Linux, so the script works in the background without your involvement. 
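A minimal sketch of the logging and error-handling pattern described above: the script records what it did and keeps going when a single item fails. The log file name and items are placeholders.

# Sketch: basic logging plus per-item error handling for an unattended script.
import logging

logging.basicConfig(
    filename="automation.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def process_item(item: str) -> None:
    if not item:
        raise ValueError("empty item")
    logging.info("Processed %s", item)

for item in ["report-jan.csv", "", "report-feb.csv"]:
    try:
        process_item(item)
    except Exception:
        # Log the full traceback, then continue with the next item
        logging.exception("Failed to process %r; continuing", item)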

How Companies Use Python Automation 

Python automation is widely used across industries. IT teams rely on it to monitor servers, restart services, and handle deployment tasks. Business teams use it to generate reports, clean data, update dashboards, and manage document workflows. Marketing teams use automation for scraping competitor information, scheduling social media posts, or tracking engagement. For developers, Python helps with testing, error checking, and system integration via APIs. Across all these areas, automation improves efficiency and reduces human error. 

Conclusion 

Python is an excellent starting point for anyone who wants to begin automating daily tasks. Its simplicity, combined with its powerful ecosystem of libraries, makes it accessible to beginners and useful for professionals. Even basic automation scripts can save hours of work, and as you grow more comfortable, you can automate more complex processes involving data, web interactions, and system management. Learning Python for automation not only makes your work easier but also adds valuable skills for professional growth. 

 

Seamless Integration of DocuSign with Appian: A Step-by-Step Guide
https://blogs.perficient.com/2025/11/05/seamless-integration-of-docusign-with-appian-a-step-by-step-guide/ (published Wed, 05 Nov 2025)

Introduction

In today’s digital-first business landscape, streamlining document workflows is essential for operational efficiency and compliance. DocuSign, a global leader in electronic signatures, offers secure and legally binding digital signing capabilities. When integrated with Appian, a powerful low-code automation platform, organizations can automate approval processes, reduce manual effort, and enhance document governance.

This guide walks you through the process of integrating DocuSign as a Connected System within Appian, enabling seamless eSignature workflows across your enterprise applications.

 

Why DocuSign?

DocuSign empowers organizations to manage agreements digitally with features that ensure security, compliance, and scalability.

Key Capabilities:

  • Legally Binding eSignatures compliant with ESIGN Act (U.S.), eIDAS (EU), and ISO 27001.
  • Workflow Automation for multi-step approval processes.
  • Audit Trails for full visibility into document activity.
  • Reusable Templates for standardized agreements.
  • Enterprise-Grade Security with encryption and access controls.
  • Pre-built Integrations with platforms like CRM, ERP, and BPM—including Appian.

Integration Overview

Appian’s native support for DocuSign as a Connected System simplifies integration, allowing developers to:

  • Send documents for signature
  • Track document status
  • Retrieve signed documents
  • Manage signers and templates

Prerequisites

Before starting, ensure you have:

  1. Appian Environment with admin access
  2. DocuSign Developer or Production Account
  3. API Credentials: Integration Key, Client Secret, and RSA Key

Step-by-Step Integration

Step 1: Register Your App in DocuSign

  1. Log in to the DocuSign Developer Portal
  2. Navigate to Apps and Keys → Add App
  3. Generate:
    • Integration Key
    • Secret Key
    • RSA Key
  4. Add your Appian environment’s Redirect URI:

https://<your-appian-environment>/suite/rest/authentication/callback

  5. Enable GET and POST methods and save changes.

Step 2: Configure OAuth in Appian

  1. In Appian’s Admin Console, go to Authentication → Web API Authentication
  2. Add DocuSign credentials under Appian OAuth 2.0 Clients
  3. Ensure all integration details match those from DocuSign

Step 3: Create DocuSign Connected System

  1. Open Appian Designer → Connected Systems
  2. Create a new system:
    • Type: DocuSign
    • Authentication: Authorization Code Grant
    • Client ID: DocuSign Integration Key
    • Client Secret: DocuSign Secret Key
    • Base URL:
      • Development: https://account-d.docusign.com
      • Production: https://account.docusign.com
  3. Click Test Connection to validate setup

Docusign Blog 1

Docusign Blog 2

Docusign Blog 3

Step 4: Build Integration Logic

  1. Go to Integrations → New Integration
  2. Select the DocuSign Connected System
  3. Configure actions:
    • Send envelope
    • Check envelope status
    • Retrieve signed documents
  4. Save and test the integration

Docusign Blog 4

Step 5: Embed Integration in Your Appian Application

  1. Add integration logic to Appian interfaces and process models
  2. Use forms to trigger DocuSign actions
  3. Monitor API usage and logs for performance and troubleshooting

Integration Opportunities

🔹 Legal Document Processing

Automate the signing of SLAs, MOUs, and compliance forms using DocuSign within Appian workflows. Ensure secure access, maintain version control, and simplify recurring agreements with reusable templates.

🔹 Finance Approvals

Digitize approvals for budgets, expenses, and disclosures. Route documents to multiple signers with conditional logic and securely store signed records for audit readiness.

🔹 Healthcare Consent Forms

Send consent forms electronically before appointments. Automatically link signed forms to patient records while ensuring HIPAA-compliant data handling.

Conclusion

Integrating DocuSign with Appian enables organizations to digitize and automate document workflows with minimal development effort. This powerful combination enhances compliance, accelerates approvals, and improves user experience across business processes.

For further details, refer to the Appian Connected Systems documentation and the DocuSign developer documentation.

Spring Boot + OpenAI: A Developer’s Guide to Generative AI Integration
https://blogs.perficient.com/2025/10/27/spring-boot-openai-a-developers-guide-to-generative-ai-integration/ (published Mon, 27 Oct 2025)

Introduction

In this blog, we’ll explore how to connect OpenAI’s API with a Spring Boot application, step by step.

We’ll cover the setup process, walk through the implementation with a practical example.

By integrating OpenAI with Spring Boot, you can create solutions that are not only powerful but also scalable and reliable.

Prerequisites

  • Java 17+
  • Maven
  • Spring Boot (3.x recommended)
  • OpenAI API Key (get it from platform.openai.com)
  • Basic knowledge of REST APIs

OpenAI’s platform helps developers understand how to prompt models to generate meaningful text. It’s basically a cheat sheet for how to communicate with the AI so it gives you smart and useful answers to your prompts. 

Implementation in Spring Boot

To integrate OpenAI’s GPT-4o-mini model into a Spring Boot application, we analyzed the structure of a typical curl request and response provided by OpenAI.

API docs reference:

https://platform.openai.com/docs/overview

https://docs.spring.io/spring-boot/index.html

Curl Request

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "assistant", "content": "Hello"},
      {"role": "user", "content": "Hi"}
    ]
  }'

Note-

“role”: “user” – Represents the end-user interacting with the assistant

“role”: “assistant” – Represents the assistant’s response.

The response generated by the model looks like this:

{
  "id": "chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT",
  "object": "chat.completion",
  "created": 1741569952,
  "model": "gpt-4o-mini-2025-04-14",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 19,
    "completion_tokens": 10,
    "total_tokens": 29,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default"
}

 

Controller Class:

In the snippet below, we explore a simple Spring Boot controller that interacts with OpenAI’s API. When the end user sends a prompt to the URL (e.g. /bot/chat?prompt=what is spring boot), the controller reads the model name and API URL from the application.properties file. It then builds a request from the provided prompt and sends it to OpenAI using a REST call (RestTemplate). After verifying the request, OpenAI sends back a response.

@RestController
@RequestMapping("/bot")
public class GenAiController {

    @Value("${openai.model}")
    private String model;

    @Value(("${openai.api.url}"))
    private String apiURL;

    @Autowired
    private RestTemplate template;

    @GetMapping("/chat")
    public String chat(@RequestParam("prompt") String prompt) {
        GenAiRequest request = new GenAiRequest(model, prompt);
        System.out.println("Request: " + request );
        GenAIResponse genAIResponse = template.postForObject(apiURL, request, GenAIResponse.class);
        return genAIResponse.getChoices().get(0).getMessage().getContent();
    }
}

 

Configuration Class:

Annotated with @Configuration, this class defines beans and settings for the application context. It pulls the OpenAI API key from the properties file, and a customized RestTemplate is created and configured to include the Authorization Bearer <API_KEY> header in all requests. This setup ensures that every call to OpenAI’s API is authenticated without manually adding headers to each request.

@Configuration
public class OpenAIAPIConfiguration {

    @Value("${openai.api.key}")
     private String openaiApiKey;

    @Bean
    public RestTemplate template(){
        RestTemplate restTemplate=new RestTemplate();
        restTemplate.getInterceptors().add((request, body, execution) -> {
            request.getHeaders().add("Authorization", "Bearer " + openaiApiKey);
            return execution.execute(request, body);
        });
        return restTemplate;
    }
    
}

Required getters and setters for the request and response classes:

Based on the curl structure and response, we generated the corresponding request and response Java classes with appropriate getters and setters, keeping only the attributes needed to represent the request and response objects. These getter/setter classes help turn JSON data into objects we can use in code, and also turn our code’s data back into JSON when interacting with the OpenAI API. We implemented a bot using the gpt-4o-mini model, integrated it with a REST controller, and handled authentication via the API key.

//Request
@Data
public class GenAiRequest {

    private String model;
    private List<GenAIMessage> messages;

    public List<GenAIMessage> getMessages() {
        return messages;
    }

    public GenAiRequest(String model, String prompt) {
        this.model = model;
        this.messages = new ArrayList<>();
        this.messages.add(new GenAIMessage("user",prompt));
    }
}

@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIMessage {

    private String role;
    private String content;   
    
    public String getContent() {
        return content;
    }
    public void setContent(String content) {
        this.content = content;
    }
}

//Response
@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIResponse {

    private List<Choice> choices;

    public List<Choice> getChoices() {
        return choices;
    }

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public static class Choice {

        private int index;
        private GenAIMessage message;
        public GenAIMessage getMessage() {
            return message;
        }
        public void setMessage(GenAIMessage message) {
            this.message = message;
        }

    }

}

 

Essential Configuration for OpenAI Integration in Spring Boot

To connect your Spring Boot application with OpenAI’s API, you need to define a few key properties in your application.properties or application.yml file:

  • server.port: Specifies the port on which your Spring Boot application will run. You can set it to any available port like 8080, 9090, etc. (The default port for a Spring Boot application is 8080)
  • openai.model: Defines the OpenAI model to be used. In this case, gpt-4o-mini is selected for lightweight and efficient responses.
  • openai.api.key: Your secret API key from OpenAI. This is used to authenticate requests. Make sure to keep it secure and never expose it publicly.
  • openai.api.url: The endpoint URL for OpenAI’s chat completion API. (This is where your application sends prompts and receives responses)
server.port=<add server port>
openai.model=gpt-4o-mini
openai.api.key=	XXXXXXXXXXXXXXXXXXXXXXXXXXXX
openai.api.url=https://api.openai.com/v1/chat/completions

 

Postman Collection:

GET API: http://localhost:<port>/bot/chat?prompt=What is spring boot used for ?

Content-Type: application/json

Prompt

Usage of Spring Boot + OpenAI Integration

  • AI-Powered Chatbots: Build intelligent assistants for customer support, internal helpdesks, or onboarding systems.
  • Content Generation Tools: Automate blog writing, email drafting, product descriptions, or documentation, generate personalized content based on user input.
  • Code Assistance & Review: Create tools that help developers write, refactor, or review code using AI, Integrate with IDEs or CI/CD pipelines for smart suggestions.
  • Data Analysis & Insights: Use AI to interpret data, generate summaries, answer questions about datasets combine with Spring Boot APIs to serve insights to dashboards or reports.
  • Search Enhancement: Implement semantic search or question-answering systems over documents or databases, use embeddings and GPT to improve relevance and accuracy.
  • Learning & Training Platforms: Provide personalized tutoring, quizzes, and explanations using AI & adapt content based on user performance and feedback.
  • Email & Communication Automation: Draft, summarize, or translate emails and messages, integrate with enterprise communication tools.
  • Custom usages: In a business-to-business context, usage can be customized according to specific client requirements.
Perficient Wins Silver w3 Award for AI Utility Integration
https://blogs.perficient.com/2025/10/24/perficient-awarded-w3-award-for-ai-integration/ (published Fri, 24 Oct 2025)

We’re proud to announce that we’ve been honored with a Silver w3 Award in the Emerging Tech Features – AI Utility Integration category for our work with a top 20 U.S. utility provider. This recognition from the Academy of Interactive and Visual Arts (AIVA) celebrates our commitment to delivering cutting-edge, AI-powered solutions that drive real-world impact in the energy and utilities sector.

“Winning this w3 Award speaks to our pragmatism–striking the right balance between automation capabilities and delivering true business outcomes through purposeful AI adoption,” said Mwandama Mutanuka, Managing Director of Perficient’s Intelligent Automation practice. “Our approach focuses on understanding the true cost of ownership, evaluating our clients’ existing automation tech stack, and building solutions with a strong business case to drive impactful transformation.”

Modernizing Operations with AI

The award-winning solution centered on the implementation of a ServiceNow Virtual Agent to streamline internal service desk operations for a major utility provider serving millions of homes and businesses across the United States. Faced with long wait times and a high volume of repetitive service requests, the client sought a solution that would enhance productivity, reduce costs, and improve employee satisfaction.

Our experts delivered a two-phase strategy that began with deploying an out-of-the-box virtual agent capable of handling low-complexity, high-volume requests. We then customized the solution using ServiceNow’s Conversational Interfaces module, tailoring it to the organization’s unique needs through data-driven topic recommendations and user behavior analysis. The result was an intuitive, AI-powered experience that allowed employees and contractors to self-serve common IT requests, freeing up service desk agents to focus on more complex work and significantly improving operational efficiency.

Driving Adoption Through Strategic Change Management

Adoption is the key to unlocking the full value of any technology investment. That’s why our team partnered closely with the client’s corporate communications team to launch a robust change management program. We created a branded identity for the virtual agent, developed engaging training materials, and hosted town halls to build awareness and excitement across the organization. This holistic approach ensured high engagement and a smooth rollout, setting the foundation for long-term success.

Looking Ahead

The w3 Award is a reflection of our continued dedication to innovation, collaboration, and excellence. As we look to the future, we remain committed to helping enterprises across industries harness the full power of AI to transform their operations. Explore the full success story to learn more about how we’re powering productivity with AI, and visit the w3 Awards Winners Gallery to see our recognition among the best in digital innovation.

For more information on how Perficient can help your business with integrated AI services, contact us today.

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 2
https://blogs.perficient.com/2025/10/03/transform-your-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-2/ (published Fri, 03 Oct 2025)

Introduction:

Custom code in Talend offers a powerful way to enhance batch processing by allowing developers to implement specialized logic that is not available through Talend’s standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files tailored to specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Talend Components:

Key components for batch processing are listed below:

  • tDBConnection: Establish and manage database connections within a job & allow configuration with single connection to reuse within Talend job.
  • tFileInputDelimited: For reading data from flat files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified.
  • tMap: Data transformation allows you to map input data with output data and enables you to perform data filtering, complex data manipulation, typecasting, and multiple input source joins.
  • tJavaRow: It can be used as an intermediate component, and we are able to access the input flow and transform the data using custom Java code.
  • tJava: It has no input or output data flow & can be used independently to Integrate custom Java code.
  • tPreJob, tPostJob: PreJob start the execution before the job & PostJob at the end of the job.
  • tDBOutput: Supports wide range of databases & used to write data to various databases.
  • tDBCommit: Commits the changes made to a connected database during a Talend job, making the data modifications permanent.
  • tDBClose: Explicitly closes a database connection that was opened by a tDBConnection component.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.
  • tDie: We can stop the job execution explicitly if it fails. In addition, we can create a customized warning message and exit code.

Workflow with example:

To process bulk data in Talend, we can implement batch processing to handle flat file data within minimal execution time. We could read the flat file data and insert it into a MySQL database table as the target without batch processing, but that data flow would take considerably longer to execute. If we use batch processing with custom code, it takes minimal execution time to write the entire source file into the MySQL database table in batches of records.

Talend Job Design

Solution:

  • Establish the database connection at the start of the execution so that we can reuse.
  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header row from the total row count and divide the result by the batch size, rounding up to the nearest whole number; that whole number is the total number of batches (chunks). A short arithmetic sketch of this calculation appears after this list.

    Calculate the batch size from total row count

  • Now use tFileInputDelimited component to read the source file content. In the tMap component, utilize the sequence Talend function to generate row numbers for your data mapping and transformation tasks. Then, load all of the data into the tHashOutput component, which stores the data into a cache.
  • Iterate the loop based on the calculated whole number using tLoop
  • Retrieve all the data from tHashInput component.
  • Filter the dataset retrieved from tHashInput component based on the rowNo column in the schema using tFilterRow

Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range is 1 to 100; for the third iteration it is 201 to 300.
    For example, when the current iteration is 3, the lower bound is (3 - 1) * 100 + 1 = 201 and the upper bound is 3 * 100 = 300, so the dataset range for the 3rd iteration is 201 to 300.
  • Finally extract the dataset range between the rowNo column & write the batch data MySQL database table using tDBOutput
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows complex data transformations, and offers unique join, first join, and all-matches join options for looking up data.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
  • At the end of the job execution, we commit the database modifications and close the connection to release database resources.
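The batch-count and per-iteration range arithmetic referenced in the list above is language-agnostic. The short Python sketch below reproduces it for an assumed file of 1,001 rows (1 header + 1,000 data rows) and a batch size of 100; it is an illustration of the calculation only, not Talend code.

# Illustration of the batch arithmetic used in the job (not Talend code).
total_rows = 1001          # rows in the source file, including the header
header_rows = 1
batch_size = 100

data_rows = total_rows - header_rows
num_batches = -(-data_rows // batch_size)   # ceiling division -> 10 batches here

for iteration in range(1, num_batches + 1):
    first_row = (iteration - 1) * batch_size + 1
    last_row = min(iteration * batch_size, data_rows)
    # e.g. iteration 3 -> rows 201..300, matching the tFilterRow condition
    print(f"Iteration {iteration}: rowNo {first_row} to {last_row}")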

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With the batch processing, it can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 1

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 1
https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/ (published Fri, 03 Oct 2025)

Introduction:

Custom code in Talend offers a powerful way to enhance batch processing by allowing developers to implement specialized logic that is not available through Talend’s standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files tailored to specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Understand Batch Processing:

Batch processing is a method of running high-volume, repetitive data jobs within Talend. The batch method allows users to process a bunch of data when computing resources are available, and with little or no user interaction.

Through batch processing, users gather and retain data, subsequently processing it during a designated period referred to as a “batch window.” This method enhances efficiency by establishing processing priorities and executing data tasks in a timeframe that is optimal.

Here, the Talend job takes the total row count from the source file, loads the data from the flat file, processes it in batches based on input provided through a context variable, and then writes the data into smaller flat files. This implementation made it possible to process enormous amounts of data more precisely and quickly than other implementations.

Batch processing is a method of executing a series of jobs sequentially without user interaction, typically used for handling large volumes of data efficiently. Talend, a prominent and extensively employed ETL (Extract, Transform, Load) tool, utilizes batch processing to facilitate the integration, transformation, and loading of data into data warehouse and various other target systems.

Talend Components:

Key components for batch processing are listed below:

  • tFileInputDelimited, tFileOutputDelimited: For reading & writing data from/to files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified.
  • tMap: Used for data transformation; it maps input data to output data and enables data filtering, complex data manipulation, typecasting, and joins across multiple input sources.
  • tJavaRow: It can be used as an intermediate component, and we are able to access the input flow and transform the data using custom Java code.
  • tJava: It has no input or output data flow & can be used independently to Integrate custom Java code.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.

Workflow with example:

To process bulk data in Talend, we can implement batch processing to handle flat file data within minimal execution time. We could read the flat file data and write it into chunks of smaller flat files as the target without batch processing, but that data flow would take considerably longer to execute. If we use batch processing with custom code, it takes minimal execution time to write the entire source file into chunks of files at the target location.

Talend job design

Solution:

  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header row from the total row count and divide the result by the batch size, rounding up to the nearest whole number; that whole number is the total number of batches (chunks).

    Calculate the batch size from total row count

  • Now use tFileInputDelimited component to read the source file content. In the tMap component, utilize the sequence Talend function to generate row numbers for your data mapping and transformation tasks. Then, load all of the data into the tHashOutput component, which stores the data into a cache.
  • Iterate the loop based on the calculated whole number using tLoop
  • Retrieve all the data from tHashInput component.
  • Filter the dataset retrieved from tHashInput component based on the rowNo column in the schema using tFilterRow

    Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range is 1 to 100; for the third iteration it is 201 to 300.
    For example, when the current iteration is 3, the lower bound is (3 - 1) * 100 + 1 = 201 and the upper bound is 3 * 100 = 300, so the dataset range for the 3rd iteration is 201 to 300.
  • Finally extract the dataset range between the rowNo column & write it into chunk of output target file using tFileOutputDelimited
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows complex data transformations, and offers unique join, first join, and all-matches join options for looking up data.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.

 

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With the batch processing, it can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 2

Why It’s Time to Move from SharePoint On-Premises to SharePoint Online
https://blogs.perficient.com/2025/09/09/why-its-time-to-move-from-sharepoint-on-premises-to-sharepoint-online/ (published Tue, 09 Sep 2025)

In today’s fast-paced digital workplace, agility, scalability, and collaboration aren’t just nice to have—they’re business-critical. If your organization is still on Microsoft SharePoint On-Premises, now is the time to make the move to SharePoint Online. Here’s why this isn’t just a technology upgrade—it’s a strategic leap forward.

1. Work Anywhere, Without Barriers

SharePoint Online empowers your workforce with secure access to content from virtually anywhere. Whether your team is remote, hybrid, or on the go, they can collaborate in real time without being tethered to a corporate network or VPN.

2. Always Up to Date

Forget about manual patching and version upgrades. SharePoint Online is part of Microsoft 365, which means you automatically receive the latest features, security updates, and performance improvements—without the overhead of managing infrastructure.

3. Reduce Costs and Complexity

Maintaining on-premises servers is expensive and resource-intensive. By moving to SharePoint Online, you eliminate hardware costs, reduce IT overhead, and streamline operations. Plus, Microsoft handles the backend, so your team can focus on innovation instead of maintenance.

4. Enterprise-Grade Security and Compliance

Microsoft invests heavily in security, offering built-in compliance tools, data loss prevention, and advanced threat protection. SharePoint Online is designed to meet global standards and industry regulations, giving you peace of mind that your data is safe.

5. Seamless Integration with Microsoft 365

SharePoint Online integrates effortlessly with Microsoft Teams, OneDrive, Power Automate, and Power BI—enabling smarter workflows, better insights, and more connected experiences across your organization.

6. Scalability for the Future

Whether you’re a small business or a global enterprise, SharePoint Online scales with your needs. You can easily add users, expand storage, and adapt to changing business demands without worrying about infrastructure limitations.

Why Perficient for Your SharePoint Online Migration 

Migrating to SharePoint Online is more than a move to the cloud—it’s a chance to transform how your business works. At Perficient, we help you turn common migration challenges into measurable wins:
  • 35% boost in collaboration efficiency
  • Up to 60% cost savings per user
  • 73% reduction in data breach risk
  • 100+ IT hours saved each month
Our Microsoft 365 Modernization solutions don’t just migrate content—they build a secure, AI-ready foundation. From app modernization and AI-powered search to Microsoft Copilot integration, Perficient positions your organization for the future.
Part 2: Implementing Azure Virtual WAN – A Practical Walkthrough
https://blogs.perficient.com/2025/08/21/part-2-implementing-azure-virtual-wan-a-practical-walkthrough/ (published Thu, 21 Aug 2025)

In Part 1 (Harnessing the Power of AWS Bedrock through CloudFormation / Blogs / Perficient), we discussed what Azure Virtual WAN is and why it’s a powerful solution for global networking. Now, let’s get hands-on and walk through the actual implementation—step by step, in a simple, conversational way.

Architecture diagram

1.     Creating the Virtual WAN – The Network’s Control Plane

Virtual WAN is the heart of a global network, not just another resource. It replaces isolated VPN gateways per region, manual ExpressRoute configurations, and complex peering relationships.

Setting it up is easy:

  • Navigate to Azure Portal → Search “Virtual WAN”
  • Click Create and configure.
  • Name: Naming matters for enterprise environments
  • Resource Group: Create new rg-network-global (best practice for lifecycle management)
  • Type: Standard (Basic lacks critical features like ExpressRoute support)

Azure will set up the Virtual WAN in a few seconds. Now, the real fun begins.

2. Setting Up the Virtual WAN Hub – The Heart of The Network

The hub is where all connections converge. It’s like a major airport hub where traffic from different locations meets and gets efficiently routed. Without a hub, you’d need to configure individual gateways for every VPN and ExpressRoute connection, leading to higher costs and management overhead.

  • Navigate to the Virtual WAN resource → Click Hubs → New Hub.
  • Configure the Hub.
  • Region: Choose based on: Primary user locations & Azure service availability (some regions lack certain services)
  • Address Space: Assign a private IP range (e.g., 10.100.0.0/24).

Wait for Deployment, this takes about 30 minutes (Azure is building VPN gateways, ExpressRoute gateways, and more behind the scenes).

Once done, the hub is ready to connect everything: offices, cloud resources, and remote users.

3. Connecting Offices via Site-to-Site VPN – Building Secure Tunnels

Branches and data centres need a reliable, encrypted connection to Azure. Site-to-Site VPN provides this over the public internet while keeping data secure. Without VPN tunnels, branch offices would rely on slower, less secure internet connections to access cloud resources, increasing latency and security risks.

  • In the Virtual WAN Hub, go to VPN (Site-to-Site) → Create VPN Site.
  • Name: branch-nyc-01
  • Private Address Space: e.g., 192.168.100.0/24 (must match on-premises network)
  • Link Speed: Set accurately for Azure’s QoS calculations
  • Download VPN Configuration: Azure provides a config file—apply it to the office’s VPN device (like a Cisco or Fortinet firewall).
  • Lastly, connect the VPN Site to the Hub.
  • Navigate to VPN connections → Create connection → Link the office to the hub.

Now, the office and Azure are securely connected.

4. Adding ExpressRoute – The Private Superhighway

For critical applications (like databases or ERP systems), VPNs might not provide enough bandwidth or stability. ExpressRoute gives us a dedicated, high-speed connection that bypasses the public internet. Without ExpressRoute, latency-sensitive applications (like VoIP or real-time analytics) could suffer from internet congestion or unpredictable performance.

  • Order an ExpressRoute Circuit: We can do this via the Azure Portal or through an ISP (like AT&T or Verizon).
  • Authorize the Circuit in Azure
  • Navigate to the Virtual WAN Hub → ExpressRoute → Authorize.
  • Linking it to Hub: Once it is authorized, connect the ExpressRoute circuit to the hub.

Now, the on-premises network has a dedicated, high-speed connection to Azure—no internet required.

5. Enabling Point-to-Site VPN for Remote Workers – The Digital Commute

Employees working from home need secure access to internal apps without exposing them to the public internet. P2S VPN lets them “dial in” securely from anywhere. Without P2S VPN, remote workers might resort to risky workarounds like exposing RDP or databases to the internet.

  • Configure P2S in The Hub
  • Navigate to VPN (Point-to-Site) → Configure.
  • Set Up Authentication: Choose certificate-based auth (secure and easy to manage) and upload the root/issuer certificates.
  • Assign an IP Pool. e.g., 192.168.100.0/24 (this is where remote users will get their IPs).
  • Download & Distribute the VPN Client

Employees install this on their laptops to connect securely. Now, the team can access Azure resources from anywhere just like they’re in the office.

6. Linking Azure Virtual Networks (VNets) – The Cloud’s Backbone

Applications in one VNet (e.g., frontend servers) often need to talk to another (e.g., databases). Rather than complex peering, the Virtual WAN handles routing automatically. Without VNet integration, you would need manual peering and route tables for every connection, creating a management nightmare at scale.

  • VNets need to be attached.
  • Navigate to The Hub → Virtual Network Connections → Add Connection.
  • Select the VNets. e.g., Connect vnet-app (for applications) and vnet-db (for databases).
  • Azure handles the Routing: Traffic flows automatically through the hub; no manual route tables needed.

Now, the cloud resources communicate seamlessly.

Monitoring & Troubleshooting

Networks aren’t “set and forget.” We need visibility to prevent outages and quickly fix issues. We can use tools like Azure Monitor, which tracks VPN/ExpressRoute health—like a dashboard showing all trains (data packets) moving smoothly. Again, Network Watcher can help to diagnose why a branch can’t connect.

Common Problems & Fixes

  • When VPN connections fail, the problem is often a mismatched shared key—simply re-enter it on both ends.
  • If ExpressRoute goes down, check with your ISP—circuit issues usually require provider intervention.
  • When VNet traffic gets blocked, verify route tables in the hub—missing routes are a common culprit.
Invoke the Mapbox Geocoding API to Populate the Location Autocomplete Functionality
https://blogs.perficient.com/2025/08/21/invoke-the-mapbox-geocoding-api-to-populate-the-location-autocomplete-functionality/ (published Thu, 21 Aug 2025)

While working on one of my projects, I needed to implement an autocomplete box using Mapbox Geocoding APIs in a React/Next.js application. The goal was to filter a list of hospitals based on the selected location. The location results from the API include coordinates, which I compared with the coordinates of the hospitals in my list.

The API returns various properties, including coordinates, under the properties section (as shown in the image below). These coordinates (latitude and longitude) can be used to filter the hospital list by matching them with the selected location.

Image: Mapbox geocoding result properties, with coordinates under the properties object.
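
As a rough illustration of the coordinate comparison mentioned above (not part of the original component), the hospital list can be filtered by distance from the selected location. The Hospital type, the haversine helper, and the 25 km radius below are assumptions made for the example.

// Sketch: filter hospitals by distance from the selected location's coordinates.
// The Hospital type and the default 25 km radius are illustrative assumptions.
type Hospital = { name: string; latitude: number; longitude: number };

// Haversine distance in kilometers between two coordinate pairs.
const distanceKm = (lat1: number, lon1: number, lat2: number, lon2: number): number => {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
};

// Keep only hospitals within the given radius of the selected location.
export const filterHospitals = (
  hospitals: Hospital[],
  selected: { latitude: number; longitude: number },
  radiusKm = 25
): Hospital[] =>
  hospitals.filter(
    (h) => distanceKm(selected.latitude, selected.longitude, h.latitude, h.longitude) <= radiusKm
  );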

The API requires an access token, which can be obtained by signing up on the Mapbox platform. You can refer to the Geocoding API documentation for more details. The documentation provides a variety of APIs that can be used depending on your specific requirements.

Below are some example requests taken from the same documentation.

# A basic forward geocoding request
# Find Los Angeles

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Los%20Angeles&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Find a town called 'Chester' in a specific region
# Add the proximity parameter with local coordinates
# This ensures the town of Chester, New Jersey is in the results

curl "https://api.mapbox.com/search/geocode/v6/forward?q=chester&proximity=-74.70850,40.78375&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Specify types=country to search only for countries named Georgia
# Results will exclude the American state of Georgia

curl "https://api.mapbox.com/search/geocode/v6/forward?q=georgia&types=country&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Limit the results to two results using the limit option
# Even though there are many possible matches
# for "Washington", this query will only return two results.

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Washington&limit=2&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

# Search for the place feature "Kaaleng" in the Ilemi Triangle.
# Specifying the cn worldview returns the country value South Sudan.
# Leaving out the worldview parameter defaults to the us worldview
# and returns the country value Kenya.

curl "https://api.mapbox.com/search/geocode/v6/forward?q=Kaaleng&worldview=cn&access_token=YOUR_MAPBOX_ACCESS_TOKEN"

The implementation uses React hooks (useState and useEffect) and component state to track the input value, fetch matching locations as the user types, and render the suggestions.

How to Create an Autocomplete Component in React

  1. Create a React component.
  2. Sign up for Mapbox and add the access token and API URL to a constants file.
  3. Create a type that matches the structure of the API response.
  4. Use the useEffect hook to invoke the API whenever the input value changes.
  5. Map the fetched results to the defined type.
  6. Apply CSS to style the component and make the autocomplete feature visually appealing.

#constants.ts

export const APIConstants = {
  accessToken: 'YOUR_MAPBOX_ACCESS_TOKEN',
  geoCodeSearchForwardApiUrl: 'https://api.mapbox.com/search/geocode/v6/forward',
  searchWordCount: 3,
};
#LocationResultProps.ts

type Suggestions = {
  properties: {
    feature_type: string;
    full_address: string;
    name: string;
    name_preferred: string;
    coordinates: {
      longitude: number;
      latitude: number;
    };
  };
};
export type LocationResults = {
  features: Array<Suggestions>;
};
#Styles.ts

export const autoComplete = {
  container: {
    width: '250px',
    margin: '20px auto',
    position: 'relative' as const, // anchors the absolutely positioned dropdown below
  },
  input: {
    width: '100%',
    padding: '10px',
    fontSize: '16px',
    border: '1px solid #ccc',
    borderRadius: '4px',
  },
  dropdown: {
    position: 'absolute' as const, // overlay the results instead of pushing content down
    top: '42px',
    left: '0',
    right: '0',
    backgroundColor: '#fff',
    border: '1px solid #ccc',
    borderTop: 'none',
    maxHeight: '150px',
    overflowY: 'auto' as const, // scroll when there are many results
    listStyleType: 'none',
    padding: '0',
    margin: '0',
    zIndex: 1000,
  },
  item: {
    padding: '5px',
    cursor: 'pointer',
    borderBottom: '1px solid #eee',
  },
};

#LocationSearchInput.tsx

import React, { useEffect, useState } from 'react';
import { APIConstants } from 'lib/constants';
import { autoComplete } from '../Styles';
import { LocationResults } from 'lib/LocationResultProps';

export const Default = (): JSX.Element => {
  const apiUrlParam: string[][] = [
    //['country', 'us%2Cpr'],
    ['types', 'region%2Cpostcode%2Clocality%2Cplace%2Cdistrict%2Ccountry'],
    ['language', 'en'],
    //['worldview', 'us'],
  ];

  const [inputValue, setInputValue] = useState<string>('');
  const [results, setresults] = useState<LocationResults>();
  const [submitted, setSubmitted] = useState<boolean>(false);

  // When the input changes, reset the "submitted" flag.
  const handleChange = (value: string) => {
    setSubmitted(false);
    setInputValue(value);
  };
  const handleSubmit = (value: string) => {
    setSubmitted(true);
    setInputValue(value);
  };

  // Fetch results when the input value changes
  useEffect(() => {
    if (inputValue.length < APIConstants?.searchWordCount) {
      setresults(undefined);
      return;
    }
    if (submitted) {
      return;
    }
    const queryInputParam = [
      ['q', inputValue],
      ['access_token', APIConstants?.accessToken ?? ''],
    ];

    const fetchData = async () => {
      const queryString = apiUrlParam
        .concat(queryInputParam)
        .map((inner) => inner.join('='))
        .join('&');
      const url = APIConstants?.geoCodeSearchForwardApiUrl + '?' + queryString;

      try {
        const response: LocationResults = await (await fetch(url)).json();
        setresults(response);
        console.log(response);
      } catch (err: unknown) {
        console.error('Error obtaining location results for autocomplete', err);
      }
    };

    fetchData();
  }, [inputValue]);

  return (
    <div>
      <div style={autoComplete.container}>
        <input
          style={autoComplete.input}
          onChange={(e) => handleChange(e.target?.value)}
          value={inputValue}
          placeholder="Find Location"
        />

        {/* One dropdown list, one item per result; the key keeps React's reconciliation stable. */}
        {inputValue && !submitted && results?.features && (
          <ul style={autoComplete.dropdown}>
            {results.features.map((x) => (
              <li key={x?.properties?.full_address} style={autoComplete.item}>
                <span onClick={() => handleSubmit(x?.properties?.full_address)}>
                  {x?.properties?.full_address}
                </span>
              </li>
            ))}
          </ul>
        )}
      </div>
    </div>
  );
};
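
One optional refinement, sketched below, is to debounce the request so the API is only called once the user pauses typing. This is a drop-in replacement for the useEffect inside the component above and assumes the same state variables and constants; the 300 ms delay is an arbitrary choice.

// Drop-in replacement for the useEffect in the Default component above (sketch).
// Waits 300 ms after the last keystroke and cancels the timer if typing resumes.
useEffect(() => {
  if (inputValue.length < APIConstants?.searchWordCount) {
    setresults(undefined);
    return;
  }
  if (submitted) {
    return;
  }

  const timer = setTimeout(async () => {
    const queryString = apiUrlParam
      .concat([
        ['q', inputValue],
        ['access_token', APIConstants?.accessToken ?? ''],
      ])
      .map((inner) => inner.join('='))
      .join('&');

    try {
      const response: LocationResults = await (
        await fetch(APIConstants?.geoCodeSearchForwardApiUrl + '?' + queryString)
      ).json();
      setresults(response);
    } catch (err: unknown) {
      console.error('Error obtaining location results for autocomplete', err);
    }
  }, 300);

  return () => clearTimeout(timer); // discard the pending call if the input changes again
}, [inputValue, submitted]);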

Finally, we can search for a location using a zip code, state, or country.

Recording: a short demo of the location autocomplete in action.

Additionally, the reverse geocoding API works in much the same way, requiring only minor adjustments to the parameters and the API URL. The location autocomplete box has a wide range of use cases: it can be integrated into forms such as registration or contact forms, where exact coordinates or a full address need to be captured on selection. Because each result includes these properties, the autocomplete can display matching results whether the user types a city, ZIP code, or state.
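
For completeness, here is a minimal sketch of a reverse geocoding call with fetch. It assumes the same v6 endpoint family and the access token constant from constants.ts, and simply maps each returned feature to its full address.

// Sketch: reverse geocoding with the v6 endpoint, coordinates in, addresses out.
import { APIConstants } from 'lib/constants';

export const reverseGeocode = async (longitude: number, latitude: number): Promise<string[]> => {
  const url =
    'https://api.mapbox.com/search/geocode/v6/reverse' +
    `?longitude=${longitude}&latitude=${latitude}` +
    `&access_token=${APIConstants.accessToken}`;

  const response = await fetch(url);
  const data = await response.json();

  // Each feature carries the same properties shape used by the autocomplete above.
  return (data?.features ?? []).map(
    (f: { properties?: { full_address?: string } }) => f?.properties?.full_address ?? ''
  );
};

// Example: look up the address for a known coordinate pair.
reverseGeocode(-73.989, 40.733).then((addresses) => console.log(addresses));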

 

Why Value-Based Care Needs Digital Transformation to Succeed https://blogs.perficient.com/2025/08/12/why-value-based-care-needs-digital-transformation-to-succeed/ https://blogs.perficient.com/2025/08/12/why-value-based-care-needs-digital-transformation-to-succeed/#comments Tue, 12 Aug 2025 19:18:46 +0000 https://blogs.perficient.com/?p=385579

The pressure is on for healthcare organizations to deliver more—more value, more equity, more impact. That’s where a well-known approach is stepping back into the spotlight.

If you’ve been around healthcare conversations lately, you’ve probably heard the resurgence of the term value-based care. And there’s a good reason for that. It’s not just a buzzword—it’s reshaping how we think about health, wellness, and the entire care experience.

What Is Value-Based Care, Really?

At its core, value-based care is a shift away from the old-school fee-for-service model, where providers got paid for every test, procedure, or visit, regardless of whether it actually helped the patient. Instead, value-based care rewards providers for delivering high-quality, efficient care that leads to better health outcomes.

It’s not about how much care is delivered; it’s about how effective that care is.

This shift matters because it places patients at the center of everything. It’s about making sure people get the right care, at the right time, in the right setting. That means fewer unnecessary tests, fewer duplicate procedures, and less of the fragmentation that’s plagued the system for decades.

The results? Better experiences for patients. Lower costs. Healthier communities.

Explore More: Access to Care Is Evolving: What Consumer Insights and Behavior Models Reveal

Benefits and Barriers of Value-Based Care in Healthcare Transformation

There’s a lot to be excited about, and for good reason! When we focus on prevention, chronic disease management, and whole-person wellness, we can avoid costly hospital stays and emergency room visits. That’s not just good for the healthcare system; it’s good for people, families, and communities. It moves us closer to the holy grail in healthcare: the quintuple aim. Achieving it means delivering better outcomes, elevating experiences for both patients and clinicians, reducing costs, and advancing health equity.

The challenge? Turning value-based care into a scalable, sustainable reality isn’t easy.

Despite more than a decade of pilots, programs, and well-intentioned reforms, only a small number of healthcare organizations have been able to scale their value-based care models effectively. Why? Because many still struggle with some pretty big roadblocks—like outdated technology, disconnected systems, siloed data, and limited ability to manage risk or coordinate care.

That’s where digital transformation comes in.

To make value-based care real and sustainable, healthcare organizations are rethinking their infrastructure from the ground up. They’re adopting cloud-based platforms and interoperable IT systems that allow for seamless data exchange across providers, payers, and patients. They’re tapping into advanced analytics, intelligent automation, and AI to identify at-risk patients, personalize care, and make smarter decisions faster.

As organizations work to enable VBC through digital transformation, it’s critical to really understand what the current research says. Our recent study, Access to Care: The Digital Imperative for Healthcare Leaders, backs up these trends, showing that digital convenience is no longer a differentiator—it’s a baseline expectation.

Findings show that nearly half of consumers have opted for digital-first care instead of visiting their regular physician or provider.

This shift highlights how important it is to offer simple and intuitive self-service digital tools that help people get what they need—fast. When it’s easy to find and access care, people are more likely to trust you, stick with you, and come back when they need you again.

You May Also Enjoy: How Innovative Healthcare Organizations Integrate Clinical Intelligence

Redesigning Care Models for a Consumer-Centric, Digitally Enabled Future

Care models are also evolving. Instead of reacting to illness, we’re seeing a stronger focus on prevention, early intervention, and proactive outreach. Consumer-centric tools like mobile apps, patient portals, and personalized health reminders are becoming the norm, not the exception. It’s all part of a broader movement to meet people where they are and give them more control over their health journey.

But here’s an important reminder: none of these efforts work in a vacuum.

Value-based care isn’t just a technology upgrade or a process tweak. It’s a cultural shift.

Success requires aligning people, processes, data, and technology in a way that’s intentional and strategic. It’s about creating an integrated system that’s designed to improve outcomes and then making those improvements stick.

So, while the road to value-based care may be long and winding, the destination is worth it. It’s not just a different way of delivering care—it’s a smarter, more sustainable one.

Success In Action: Empowering Healthcare Consumers and Their Care Ecosystems With Interoperable Data

Reimagine Healthcare Transformation With Confidence

If you’re exploring how to modernize your digital front door, consider starting with a strategic assessment. Align your goals, audit your content, and evaluate your tech stack. The path to better outcomes starts with a smarter, simpler way to help patients find care.

We combine strategy, industry best practices, and technology expertise to deliver award-winning results for leading healthcare organizations.

  • Business Transformation: Activate strategy for transformative outcomes and health experiences.
  • Modernization: Maximize technology to drive health innovation, efficiency, and interoperability.
  • Data + Analytics: Power enterprise agility and accelerate healthcare insights.
  • Consumer Experience: Connect, ease, and elevate impactful health journeys.

Our approach to designing and implementing AI and machine learning (ML) solutions promotes secure and responsible adoption and ensures demonstrated and sustainable business value.

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to learn more.
