A Tool For CDOs to Keep Their Cloud Secure: AWS GuardDuty Is the Saw and Perficient Is the Craftsman

In the rapidly expanding realm of cloud computing, Amazon Web Services (AWS) provides the infrastructure for countless businesses to operate and innovate. As a firm’s ever-increasing data, applications, and workloads migrate to the cloud, protecting them from both sophisticated threats and brute-force digital attacks is of paramount importance. This is where Amazon GuardDuty enters as a powerful, vigilant sentinel.

What is Amazon GuardDuty?

At its core, Amazon GuardDuty is a continuous security monitoring service designed to protect your AWS accounts and workloads. The software serves as a 24/7 security guard for your entire AWS environment, not just individual applications, and is constantly scanning for malicious activity and unauthorized behavior.

The software works by analyzing a wide variety of data sources within your firm’s AWS account—including AWS CloudTrail event logs, VPC flow logs, and DNS query logs—using machine learning, threat intelligence feeds, and anomaly detection techniques.

If an external party attempts a brute-force login, a compromised instance communicates with a known malicious IP address, or an unusual API call is made, GuardDuty is there to spot it. It can be configured to trigger automated actions through services like Amazon EventBridge (formerly CloudWatch Events) and AWS Lambda when a threat is found, as well as alert human administrators to take action.

When a threat is detected, GuardDuty generates a finding with a severity level (high, medium, or low) and a score. Together, the severity and score help your data security team minimize time spent on routine exceptions while highlighting significant events.
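For teams that want to pull these findings programmatically, a minimal boto3 sketch is shown below. It assumes GuardDuty is already enabled in the region and AWS credentials are configured; note that in the GuardDuty API, severity is numeric, with values of roughly 7.0 and above corresponding to high.

# Minimal sketch: list high-severity GuardDuty findings in the current
# account and region (assumes a detector is already enabled).
import boto3

guardduty = boto3.client("guardduty")

detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Severity is numeric in the API; 7 and above maps to "high".
finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]
    for finding in findings:
        print(finding["Severity"], finding["Type"], finding["Title"])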

Why is GuardDuty So Important?

In today’s digital landscape, relying solely on traditional, static security measures is not sufficient. Cybercriminals are constantly evolving their tactics, which is why GuardDuty is an essential component of your AWS security strategy:

  1. Proactive, Intelligent Threat Detection

GuardDuty moves beyond simple rule-based systems. Its use of machine learning allows it to detect anomalies that human security administrators might miss, identifying zero-day threats and subtle changes in behavior that indicate a compromise. It continuously learns and adapts to new threats without requiring manual updates.

  2. Near Real-Time Monitoring and Alerting

Speed is critical in incident response. GuardDuty provides findings in near real-time, delivering detailed security alerts directly to the AWS Management Console, Amazon EventBridge, and AWS Security Hub. This immediate notification allows your firm’s security teams to investigate and remediate potential issues quickly, minimizing potential damage and keeping your firm’s management informed.

  3. Broad Protection Across AWS Services

GuardDuty doesn’t just watch over your firm’s Elastic Compute Cloud (“EC2”) instances; it also protects a wide array of AWS services, including:

  • Simple Storage Service (“S3”) Buckets: Detecting potential data exfiltration or policy changes that expose sensitive data.
  • EKS/Kubernetes: Monitoring for threats to your container workloads, so no one runs malware or mines bitcoin in your firm’s containers.
  • Databases (Aurora; RDS – MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server; and Redshift): Identifying potential compromise or unauthorized access to data.

Conclusion

In the cloud, security is a shared responsibility. While AWS manages the security of the cloud infrastructure itself, you are responsible for security in the cloud—protecting your data, accounts, and workloads. Amazon GuardDuty is an indispensable tool in fulfilling that responsibility. It provides an automated, intelligent, and scalable layer of defense that empowers you to stay ahead of malicious actors.

To get started with Amazon GuardDuty, consider contacting Perficient to help enable and configure the service and train your staff. Perficient is an AWS partner and has achieved Premier Tier Services Partner status, the highest tier in the Amazon Web Services (AWS) Partner Network. This elevated status reflects Perficient’s expertise, long-term investment, and commitment to delivering customer solutions on AWS.

Besides the firm’s Partner Status, Perficient has demonstrated significant expertise in areas like cloud migration, modernization, and AI-driven solutions, with a large team of AWS-certified professionals.

In addition to these competencies, Perficient has earned specific service delivery designations, such as AWS Glue Service Delivery, and offers Amazon-approved software in the AWS Marketplace.

Our financial services experts continuously monitor the financial services landscape and deliver pragmatic, scalable solutions that meet the required mandate and more. Reach out to Perficient’s Director and Head of Payments Practice Amanda Estiverne-Colas to discover why Perficient has been trusted by 18 of the top 20 banks, 16 of the 20 largest wealth and asset management firms, and more than 25 leading payment and card processing companies.

 

Terraform Code Generator Using Ollama and CodeGemma

In modern cloud infrastructure development, writing Terraform code manually can be time-consuming and error-prone—especially for teams that frequently deploy modular and scalable environments. There’s a growing need for tools that:

  • Allow natural language input to describe infrastructure requirements.
  • Automatically generate clean, modular Terraform code.
  • Integrate with cloud authentication mechanisms.
  • Save and organize code into execution-ready files.

This model bridges the gap between human-readable infrastructure descriptions and machine-executable Terraform scripts, making infrastructure-as-code more accessible and efficient. To build it, we use CodeGemma, a lightweight AI model optimized for coding tasks, which runs locally via Ollama.


In this blog, we explore how to build a Terraform code generator web app using:

  • Flask for the web interface
  • Ollama’s CodeGemma model for AI-powered code generation
  • Azure CLI authentication using service principal credentials
  • Modular Terraform file creation based on user queries

This tool empowers developers to describe infrastructure needs in natural language and receive clean, modular Terraform code ready for deployment.

Technologies Used

CodeGemma

CodeGemma is a family of lightweight, open-source models optimized for coding tasks. It supports code generation from natural language.

Running CodeGemma locally via Ollama means:

  • No cloud dependency: You don’t need to send data to external APIs.
  • Faster response times: Ideal for iterative development.
  • Privacy and control: Your infrastructure queries and generated code stay on your machine.
  • Offline capability: Ideal for use in restricted or secure environments.
  • Zero cost: Since the model runs locally, there’s no usage fee or subscription required—unlike cloud-based AI services.

Flask

We chose Flask as the web framework for this project because of its:

  • Simplicity and flexibility: Flask is a lightweight and easy-to-set-up framework, making it ideal for quick prototyping.

Initial Setup

  • Install Python.
winget install Python.Python.3
  • Install Ollama, then pull and run the CodeGemma model.
ollama pull codegemma:7b
ollama run codegemma:7b
  • Install the Ollama Python library to use CodeGemma in your Python projects.
pip install ollama
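Before wiring up Flask, you can confirm the model responds from Python with a quick smoke test. This is a minimal sketch, assuming the pull above has completed and the Ollama service is running:

# Quick smoke test for the locally pulled CodeGemma model.
from ollama import generate

response = generate(
    model="codegemma:7b",
    prompt="Write a Terraform block that creates an Azure resource group.",
)
print(response["response"])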

Folder Structure


 

Code

from flask import Flask, jsonify, request, render_template_string
from ollama import generate
import subprocess
import re
import os

app = Flask(__name__)
# Azure credentials
CLIENT_ID = "Enter your credentials here."
CLIENT_SECRET = "Enter your credentials here."
TENANT_ID = "Enter your credentials here."

auth_status = {"status": "not_authenticated", "details": ""}
input_fields_html = ""
def authenticate_with_azure():
    try:
        result = subprocess.run(
            ["cmd.exe", "/c", "C:\\Program Files\\Microsoft SDKs\\Azure\\CLI2\\wbin\\az.cmd",
             "login", "--service-principal", "-u", CLIENT_ID, "-p", CLIENT_SECRET, "--tenant", TENANT_ID],
            capture_output=True, text=True, check=True
        )
        auth_status["status"] = "success"
        auth_status["details"] = result.stdout
    except subprocess.CalledProcessError as e:
        auth_status["status"] = "failed"
        auth_status["details"] = e.stderr
    except Exception as ex:
        auth_status["status"] = "terminated"
        auth_status["details"] = str(ex)

@app.route('/', methods=['GET', 'POST'])
def home():
    terraform_code = ""
    user_query = ""
    input_fields_html = ""

    if request.method == 'POST':
        user_query = request.form.get('query', '')

        base_prompt = (
            "Generate modular Terraform code using best practices. "
            "Create separate files for main.tf, vm.tf, vars.tf, terraform.tfvars, subnet.tf, kubernetes_cluster etc. "
            "Ensure the code is clean and execution-ready. "
            "Use markdown headers like ## Main.tf: followed by code blocks."
        )

        full_prompt = base_prompt + "\n" + user_query
        try:
            response_cleaned = generate(model='codegemma:7b', prompt=full_prompt)
            terraform_code = response_cleaned.get('response', '').strip()
        except Exception as e:
            terraform_code = f"# Error generating code: {str(e)}"

        # Prepend an azurerm provider block so the generated code is
        # deployable with the authenticated service principal.
        provider_block = f"""
provider "azurerm" {{
  features {{}}
  subscription_id = "Enter your credentials here."
  client_id       = "{CLIENT_ID}"
  client_secret   = "{CLIENT_SECRET}"
  tenant_id       = "{TENANT_ID}"
}}"""
        terraform_code = provider_block + "\n\n" + terraform_code

        with open('main.tf', 'w', encoding='utf-8') as f:
            f.write(terraform_code)


        # Create output directory
        output_dir = r"C:\Users\riya.achkarpohre\Desktop\AI\test7\terraform_output"
        os.makedirs(output_dir, exist_ok=True)

        # Define output paths
        paths = {
            "main.tf": os.path.join(output_dir, "Main.tf"),
            "vm.tf": os.path.join(output_dir, "VM.tf"),
            "subnet.tf": os.path.join(output_dir, "Subnet.tf"),
            "vpc.tf": os.path.join(output_dir, "VPC.tf"),
            "vars.tf": os.path.join(output_dir, "Vars.tf"),
            "terraform.tfvars": os.path.join(output_dir, "Terraform.tfvars"),
            "kubernetes_cluster.tf": os.path.join(output_dir, "kubernetes_cluster.tf")
        }

        # Split response using markdown headers
        sections = re.split(r'##\s*(.*?)\.tf:\s*\n+```(?:terraform)?\n', terraform_code)

        # sections = ['', 'Main', '<code>', 'VM', '<code>', ...]
        for i in range(1, len(sections), 2):
            filename = sections[i].strip().lower() + '.tf'
            code_block = sections[i + 1].strip()

            # Remove closing backticks if present
            code_block = re.sub(r'```$', '', code_block)

            # Save to file if path is defined
            if filename in paths:
                with open(paths[filename], 'w', encoding='utf-8') as f:
                    f.write(code_block)
                    print(f"\n--- Written: {filename} ---")
                    print(code_block)
            else:
                print(f"\n--- Skipped unknown file: {filename} ---")

        return render_template_string(f"""
        <html>
        <head><title>Terraform Generator</title></head>
        <body>
            <form method="post">
                <center>
                    <label>Enter your query:</label><br>
                    <textarea name="query" rows="6" cols="80" placeholder="Describe your infrastructure requirement here..."></textarea><br><br>
                    <input type="submit" value="Generate Terraform">
                </center>
            </form>
            <hr>
            <h2>Generated Terraform Code:</h2>
            <pre>{terraform_code}</pre>
            <h2>Enter values for the required variables:</h2>
            <h2>Authentication Status:</h2>
            <pre>Status: {auth_status['status']}\n{auth_status['details']}</pre>
        </body>
        </html>
        """)

    # Initial GET request
    return render_template_string('''
    <html>
    <head><title>Terraform Generator</title></head>
    <body>
        <form method="post">
            <center>
                <label>Enter your query:</label><br>
                <textarea name="query" rows="6" cols="80" placeholder="Describe your infrastructure requirement here..."></textarea><br><br>
                <input type="submit" value="Generate Terraform">
            </center>
        </form>
    </body>
    </html>
    ''')

authenticate_with_azure()
@app.route('/authenticate', methods=['POST'])
def authenticate():
    authenticate_with_azure()
    return jsonify(auth_status)

if __name__ == '__main__':
    app.run(debug=True)

Open Visual Studio Code, create a new file named file.py, and paste the code into it. Then open the integrated terminal and run the script by typing:

python file.py

Flask Development Server


Code Structure Explanation

  • Azure Authentication
    • The app uses the Azure CLI (az.cmd) via Python’s subprocess.run() to authenticate with Azure using a service principal. This ensures secure access to Azure resources before generating Terraform code.
  • User Query Handling
    • When a user submits a query through the web form, it is captured using:
user_query = request.form.get('query', '')
  • Prompt Construction
    • The query is appended to a base prompt that instructs CodeGemma to generate modular Terraform code using best practices. This prompt includes instructions to split the code into files, such as main.tf, vm.tf, subnet.tf, etc.
  • Code Generation via CodeGemma
    • The prompt is sent to the CodeGemma:7b model using:
response_cleaned = generate(model='codegemma:7b', prompt=full_prompt)
  • Saving the Full Response
    • The entire generated Terraform code is first saved to a main.tf file as a backup.
  • Output Directory Setup
    • A specific output directory is created using os.makedirs() to store the split .tf files:
output_dir = r"C:\Users\riya.achkarpohre\Desktop\AI\test7\terraform_output"
  • File Path Mapping
    • A dictionary maps expected filenames (such as main.tf and vm.tf) to their respective output paths. This ensures each section of the generated code is saved correctly.
  • Code Splitting Logic
    • The response is split using a regex-based approach, based on markdown headers like ## main.tf: followed by Terraform code blocks. This helps isolate each module.
  • Conditional File Writing
    • For each split section, the code checks if the filename exists in the predefined path dictionary:
      • If defined, the code block is written to the corresponding file.
      • If not defined, the section is skipped and logged as an “unknown file”.
  • Web Output Rendering
    • The generated code and authentication status are displayed on the webpage using render_template_string().

Terminal


The Power of AI in Infrastructure Automation

This project demonstrates how combining AI models, such as CodeGemma, with simple tools like Flask and Terraform can revolutionize the way we approach cloud infrastructure provisioning. By allowing developers to describe their infrastructure in natural language and instantly receive clean, modular Terraform code, we eliminate the need for repetitive manual scripting and reduce the chances of human error.

Running CodeGemma locally via Ollama ensures:

  • Full control over data
  • Zero cost for code generation
  • Fast and private execution
  • Seamless integration with existing workflows

The use of Azure CLI authentication adds a layer of real-world applicability, making the generated code deployable in enterprise environments.

Whether you’re a cloud engineer, DevOps practitioner, or technical consultant, this tool empowers you to move faster, prototype smarter, and deploy infrastructure with confidence.

As AI continues to evolve, tools like this will become essential in bridging the gap between human intent and machine execution, making infrastructure-as-code not only powerful but also intuitive.

Perficient Quoted in Forrester Report on Intelligent Healthcare Organizations

Empathy, Resilience, Innovation, and Speed: The Blueprint for Intelligent Healthcare Transformation

Forrester’s recent report, Becoming An Intelligent Healthcare Organization Is An Attainable Goal, Not A Lost Cause, confirms what healthcare executives already know: transformation is no longer optional.

Perficient is proud to be quoted in this research, which outlines a pragmatic framework for becoming an intelligent healthcare organization (IHO)—one that scales innovation, strengthens clinical and operational performance, and delivers measurable impact across the enterprise and the populations it serves.

Why Intelligent Healthcare Is No Longer Optional

Healthcare leaders are under pressure to deliver better outcomes, reduce costs, and modernize operations, all while navigating fragmented systems and siloed departments. The journey to transformation requires more than technology; it demands strategic clarity, operational alignment, and a commitment to continuous improvement.

Forrester reports, “Among business and technology professionals at large US healthcare firms, only 63% agree that their IT organization can readily reallocate people and technologies to serve the newest business priority; 65% say they have enterprise architecture that can quickly and efficiently support major changes in business strategy and execution.”

Despite widespread investment in digital tools, many healthcare organizations struggle to translate those investments into enterprise-wide impact. Misaligned priorities, inconsistent progress across departments, and legacy systems often create bottlenecks that stall innovation and dilute momentum.

Breaking Through Transformation Barriers

These challenges aren’t just technical or organizational. They’re strategic. Enterprise leaders can no longer sit on the sidelines and play the “wait and see” game. They must shift from reactive IT management to proactive digital orchestration, where technology, talent, and transformation are aligned to business outcomes.

Business transformation is not a fleeting trend. It’s an essential strategy for healthcare organizations that want to remain competitive as the marketplace evolves.

Forrester’s report identifies four hallmarks of intelligent healthcare organizations, emphasizing that transformation is not a destination but a continuous practice.

Four Hallmarks of An Intelligent Healthcare Organization (IHO)

To overcome transformation barriers, healthcare organizations must align consumer expectations, digital infrastructure, clinical workflows, and data governance with strategic business goals.

1. Empathy At Scale: Human-Centered, Trust-Enhancing Experiences

A defining trait of intelligent healthcare organizations is a commitment to human-centered experiences.

  • Driven By: Continuous understanding of consumer needs
  • Supported By: Strategic technology investments that enable timely, personalized interventions and touchpoints

As Forrester notes, “The most intelligent organizations excel at empathetic, swift, and resilient innovation to continuously deliver new value for customers and stay ahead of the competition.”

Empathy is a performance driver. Organizations that prioritize human-centered care see higher engagement, better adherence, and stronger loyalty.

Our experts help clients reimagine care journeys using journey sciences, predictive analytics, integrated CRM and CDP platforms, and cloud-native architectures that support scalable personalization. But personalization without protection is a risk. That’s why empathy must extend beyond experience design to include ethical, secure, and responsible AI adoption.

Healthcare organizations face unique constraints, including HIPAA, PHI, and PII regulations that limit the utility of plug-and-play AI solutions. To meet these challenges, we apply our PACE framework—Policies, Advocacy, Controls, and Enablement—to ensure AI is not only innovative but also rooted in trust.

  • Policies establish clear boundaries for acceptable AI usage, tailored to healthcare’s regulatory landscape.
  • Advocacy builds cross-functional understanding and adoption through education and collaboration.
  • Controls implement oversight, auditing, and risk mitigation to protect patient data and ensure model integrity.
  • Enablement equips teams with the tools and environments needed to innovate confidently and securely.

This approach ensures AI is deployed with purpose, aligned to business goals, and embedded with safeguards that protect consumers and care teams alike. It also supports the creation of reusable architectures that blend scalable services with real-time monitoring, which is critical for delivering fast, reliable, and compliant AI applications.

Responsible AI isn’t a checkbox. It’s a continuous practice. And in healthcare, it’s the difference between innovation that inspires trust and innovation that invites scrutiny.

2. Designing for Disruption: Resilience as a Competitive Advantage

Patient-led experiences must be grounded in a clear-eyed understanding that market disruption isn’t simply looming. It’s already here. To thrive, healthcare leaders must architect systems that flex under pressure and evolve with purpose. Resilience is more than operational; it’s also behavioral, cultural, and strategic.

Perficient’s Access to Care research reveals that friction in the care journey directly impacts health outcomes, loyalty, and revenue:

  • More than 50% of consumers who experienced scheduling friction took their care elsewhere, resulting in lost revenue, trust, and care continuity
  • 33% of respondents acted as caregivers, yet this persona is often overlooked in digital strategies
  • Nearly 1 in 4 respondents who experienced difficulty scheduling an appointment stated that the friction led to delayed care, and they believed their health declined as a result
  • More than 45% of consumers aged 18–64 have used digital-first care instead of their regular provider, and 92% of them believe the quality is equal or better

This sentiment should be a wakeup call for leaders. It clearly signals that consumers expect healthcare to meet both foundational needs (cost, access) and lifestyle standards (convenience, personalization, digital ease). When systems fail to deliver, patients disengage. And when caregivers—who often manage care for entire households—encounter barriers, the ripple effect is exponential.

To build resilience that drives retention and revenue, leaders must design systems that anticipate needs and remove barriers before they impact care. Resilient operations must therefore be designed to:

  • Reduce friction across the care journey, especially in scheduling and follow-up
  • Support caregivers with multi-profile tools, shared access, and streamlined coordination
  • Enable digital-first engagement that mirrors the ease of consumer platforms like Amazon and Uber

Consumers are blending survival needs with lifestyle demands. Intelligent healthcare organizations address both simultaneously.

Resilience also means preparing for the unexpected. Whether it’s regulatory shifts, staffing shortages, or competitive disruption, IHOs must be able to pivot quickly. That requires leaders to reimagine patient (and member) access as a strategic lever and prioritize digital transformation that eases the path to care.

3. Unified Innovation: Aligning Strategy, Tech, and Teams

Innovation without enterprise alignment is just noise—activity without impact. When digital initiatives are disconnected from business strategy, consumer needs, or operational realities, they create confusion, dilute resources, and fail to deliver meaningful outcomes. Fragmented innovation may look impressive in isolation, but without coordination, it lacks the momentum to drive true transformation.

To deliver real results, healthcare leaders must connect strategy, execution, and change readiness. In Forrester’s report, a quote from an interview with Priyal Patel emphasizes the importance of a shared strategic vision:

“Today’s decisions should be guided by long-term thinking, envisioning your organization’s business needs five to 10 years into the future.” — Priyal Patel, Director, Perficient


Our approach begins with strategic clarity. Using our Envision Framework, we help healthcare organizations rapidly identify opportunities, define a consumer-centric vision, and develop a prioritized roadmap that aligns with business goals and stakeholder expectations. This framework blends real-world insights with pragmatic planning, ensuring that innovation is both visionary and executable.

We also recognize that transformation is not just technical—it’s human. Organizational change management (OCM) ensures that teams are ready, willing, and able to adopt new ways of working. Through structured engagement, training, and sustainment, we help clients navigate the behavioral shifts required to scale innovation across departments and disciplines.

This strategic rigor is especially critical in healthcare, where innovation must be resilient, compliant, and deeply empathetic. As highlighted in our 2025 Digital Healthcare Trends report, successful organizations are those that align innovation with measurable business outcomes, ethical AI adoption, and consumer trust.

Perficient’s strategy and transformation services connect vision to execution, ensuring that innovation is sustainable. We partner with healthcare leaders to identify friction points and quick wins, build a culture of continuous improvement, and empower change agents across the enterprise.

You May Enjoy: Driving Company Growth With a Product-Driven Mindset

4. Speed With Purpose and Strategic Precision

The ability to pivot, scale, and deliver quickly is becoming a defining trait of tomorrow’s healthcare leaders. The way forward requires a comprehensive digital strategy that builds the capabilities, agility, and alignment to stay ahead of evolving demands and deliver meaningful impact.

IHOs act quickly without sacrificing quality. But speed alone isn’t enough. Perficient’s strategic position emphasizes speed with purpose—where every acceleration is grounded in business value, ethical AI adoption, and measurable health outcomes.

Our experts help healthcare organizations move fast while supporting the Quintuple Aim: better outcomes, lower costs, improved experiences, clinician well-being, and health equity. This approach ensures that innovation is not just fast. It’s focused, ethical, and sustainable.

Speed with purpose means:

  • Rapid prototyping that validates ideas before scaling
  • Real-time data visibility to inform decisions and interventions
  • Cross-functional collaboration that breaks down silos and accelerates execution
  • Outcome-driven KPIs that measure impact, not just activity

Healthcare leaders don’t need more tools. They need a strategy that connects business imperatives, consumer demands, and an empowered workforce to drive transformation forward. Perficient equips organizations to move with confidence, clarity, and control.

Collaborating to Build Intelligent Healthcare Organizations

We believe our inclusion in Forrester’s report underscores our role as a trusted advisor in intelligent healthcare transformation. From insight to impact, our healthcare expertise equips leaders to modernize, personalize, and scale care. We drive resilient, AI-powered transformation to shape the experiences and engagement of healthcare consumers, streamline operations, and improve the cost, quality, and equity of care.

We have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S., and Modern Healthcare consistently ranks us as one of the largest healthcare consulting firms.

Our strategic partnerships with industry-leading technology innovators—including AWS, Microsoft, Salesforce, Adobe, and more—accelerate healthcare organizations’ ability to modernize infrastructure, integrate data, and deliver intelligent experiences. Together, we shatter boundaries so you have the AI-native solutions you need to boldly advance business.

Ready to advance your journey as an intelligent healthcare organization?

We’re here to help you move beyond disconnected systems and toward a unified, data-driven future—one that delivers better experiences for patients, caregivers, and communities. Let’s connect and explore how you can lead with empathy, intelligence, and impact.

2025 Modern Healthcare Survey Ranks Perficient Among the 10 Largest Management Consulting Firms

Modern Healthcare has once again recognized Perficient among the largest healthcare management consulting firms in the U.S., ranking us ninth in its 2025 survey. This honor reflects not only our growth but also our commitment to helping healthcare leaders navigate complexity with clarity, precision, and purpose.

What’s Driving Demand: Innovation with Intent

As provider, payer, and MedTech organizations face mounting pressure to modernize, our work is increasingly focused on connecting digital investments to measurable business and health outcomes. The challenges are real—and so are the opportunities.

Healthcare leaders are engaging our experts to tackle shifts from digital experimentation to enterprise alignment in business-critical areas, including:

  • Digital health transformation that eases access to care.
  • AI and data analytics that accelerate insight, guide clinical decisions, and personalize consumer experiences.
  • Workforce optimization that supports clinicians, streamlines operations, and restores time to focus on patients, members, brokers, and care teams.

These investments represent strategic maturity that reshapes how care is delivered, experienced, and sustained.

Operational Challenges: Strategy Meets Reality

Serving healthcare clients means working inside a system that resists simplicity. Our industry, technical, and change management experts help leaders address three persistent tensions:

  1. Aligning digital strategy with enterprise goals. Innovation often lacks a shared compass. We translate divergent priorities—clinical, operational, financial—into unified programs that drive outcomes.
  2. Controlling costs while preserving agility. Budgets are tight, but the need for speed and competitive relevance remains. Our approach favors scalable roadmaps and solutions that deliver early wins and can flex as the healthcare marketplace and consumer expectations evolve.
  3. Preparing the enterprise for AI. Many of our clients have discovered that their AI readiness lags behind ambition. We help build the data foundations, governance frameworks, and workforce capabilities needed to operationalize intelligent systems.

Related Insights: Explore the Digital Trends in Healthcare

Consumer Expectations: Access Is the New Loyalty

Our Access to Care research, based on insights from more than 1,000 U.S. healthcare consumers, reveals a fundamental shift: if your healthcare organization isn’t delivering a seamless, personalized, and convenient experience, consumers will go elsewhere. And they won’t always come back.

Many healthcare leaders still view competition as other hospitals or clinics in their region. But today’s consumer has more options—and they’re exercising them. From digital-first health experiences to hyper-local disruptors and retail-style health providers focused on accessibility and immediacy, the competitive field is rapidly expanding.

  • Digital convenience is now a baseline. More than half of consumers who encountered friction while scheduling care went elsewhere.
  • Caregivers are underserved. One in three respondents manage care for a loved one, yet most digital strategies treat the patient as a single user.
  • Digital-first care is mainstream. 45% of respondents aged 18–64 have already used direct-to-consumer digital care, and 92% of those adopters believe the quality is equal to or better than the care offered by their regular healthcare system.

These behaviors demand a rethinking of access, engagement, and loyalty. We help clients build experiences that are intuitive, inclusive, and aligned with how people actually live and seek care.

Looking Ahead: Complexity Accelerates

With intensified focus on modernization, data strategy, and responsible AI, healthcare leaders are asking harder questions. We’re helping them find and activate answers that deliver value now and build resilience for what’s next.

Our technology partnerships with Adobe, AWS, Microsoft, Salesforce, and other platform leaders allow us to move quickly, integrate deeply, and co-innovate with confidence. We bring cross-industry expertise from financial services, retail, and manufacturing—sectors where personalization and operational excellence are already table stakes. That perspective helps healthcare clients leapfrog legacy thinking and adopt proven strategies. And our fluency in HIPAA, HITRUST, and healthcare data governance ensures that our digital solutions are compliant, resilient, and future-ready.

Optimized, Agile Strategy and Outcomes for Health Insurers, Providers, and MedTech

Discover why we’ve been trusted by the 10 largest U.S. health systems, 10 largest U.S. health insurers, and 14 of the 20 largest medical device firms. We are recognized in analyst reports and regularly awarded for our excellence in solution innovation, industry expertise, and being a great place to work.

Contact us to explore how we can help you forge a resilient, impactful future that delivers better experiences for patients, caregivers, and communities.

Perficient Earns AWS Premier Tier Services Partner Status and Elevates AI Innovation in the Cloud

At Perficient, we don’t just embrace innovation, we engineer it. That’s why we’re proud to share that we’ve achieved Amazon Web Services (AWS) Premier Tier Services Partner status, a milestone that solidifies our position as a leader in delivering transformative AI-first solutions.

This top-tier AWS designation reflects the depth of our technical expertise, the success of our client outcomes, and our commitment to helping enterprises modernize and thrive in a digital world. But what sets us apart isn’t just cloud proficiency; it’s how we blend AI into every layer of digital transformation.

“We’re thrilled to join an elite group of technology innovators holding the AWS Premier Tier Services Partner status. This achievement is a testament to our strategic commitment to AWS, our partner-to-partner model, and the transformative outcomes we deliver for our clients,” said Santhosh Nair, senior vice president, Perficient. “Together with AWS, we’re building and deploying AI-first solutions at scale with speed and precision. From real-time analytics to AI-first product development, our approach empowers enterprises to innovate faster, personalize customer experiences, and unlock new business value.”

Combining the Power of AWS and AI

Whether it’s through intelligent automation, predictive analytics, or generative AI, we help organizations infuse intelligence across their operations using AWS’s scalable infrastructure. Our solutions are built to adapt, evolve, and deliver measurable outcomes from streamlining clinical workflows in healthcare to enhancing customer experiences in financial services.

As an AWS Premier Tier Services Partner, we now gain even more direct access to AWS tools, early service previews, and strategic collaboration opportunities, allowing us to deliver smarter, faster, and more impactful AI-first solutions for our clients.

Unlocking What’s Next

Our talented cloud and AI teams continue to push boundaries, helping clients harness the full potential of cloud and data while solving their toughest challenges with precision and innovation.

Ready to explore what AI and cloud transformation could look like for your business? Let’s talk.

Automated Code Review with AWS Bedrock and Lambda

In today’s fast-moving world of software development, keeping our code clean, secure, and efficient is more important than ever. While manual code reviews are great for catching issues, they take a lot of time, and even then, some problems can slip through.

This blog shows how to build a lightweight, automated code review system using AWS Bedrock and AWS Lambda. With AI-powered analysis, it checks our code for bugs, security flaws, performance issues, and style tips—without needing heavy infrastructure. It’s fast, innovative, and cost-effective.

Why We Use Automated Code Review

Our automated code review system solves these problems by providing instant, AI-powered feedback. It quickly analyzes code for bugs, security flaws, performance issues, and style improvements. Built on a serverless, pay-per-use model, it’s cost-effective and requires no infrastructure management. The AI ensures consistent quality across all reviews and is available 24/7. Whether you’re reviewing a single function or an entire file, the system scales effortlessly and integrates smoothly into our existing development workflow.

Prerequisites

  • AWS Services: API Gateway, Lambda, Bedrock
  • Development: Python 3.9+, code editor (e.g., VS Code), curl/Postman
  • Knowledge: Basics of AWS, Python, REST APIs, and JSON

 

Architecture Diagram


 

How to Implement an Automated Code Review System with AWS Bedrock and AWS Lambda

Step 1: Lambda Function Implementation

To get started, first create an IAM role for the Lambda function with the correct permissions, mainly access to AWS Bedrock. Then, set up a Lambda function using Python 3.9 or above. We will create it from scratch in the AWS Console, where we will write the logic to handle incoming code, prepare it for analysis, and connect to the AI model via Bedrock.

Refer to the sample code.
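The complete handler lives in the linked sample code. As a rough reference, a minimal sketch could look like the following; the model ID and prompt wording are assumptions to adjust for whichever model your account has enabled:

import json

import boto3

bedrock = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    # API Gateway proxy integration passes the raw request in event["body"].
    body = json.loads(event.get("body") or "{}")
    code = body.get("code_snippet", "")

    prompt = (
        "Review the following code for bugs, security flaws, performance "
        "issues, and style problems. Return concise, actionable feedback.\n\n"
        + code
    )

    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumption: adjust to a model enabled in your account
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [
                {"role": "user", "content": [{"type": "text", "text": prompt}]}
            ],
        }),
    )
    review = json.loads(response["body"].read())["content"][0]["text"]

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"review": review}),
    }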


Step 2: API Gateway Configuration

Next, set up a REST API in AWS API Gateway. Create a /review resource and add a POST method to handle incoming code submissions. Link this method to the Lambda function using proxy integration, so the whole request is passed through. Finally, deploy the API to a production stage to make it live and ready for use.


Step 3: Test the API

To test the setup and see how Amazon Bedrock responds to different types of code, you can run the following examples using curl or Postman.

Example 1: Basic Function Test

This sends a simple addition function to check if the system responds correctly.

curl -X POST \
https://your-api-id.execute-api.region.amazonaws.com/prod/review \
-H "Content-Type: application/json" \
-d '{"code_snippet": "def add(a, b):\n    return a + b"}'

Example 2: Bug Detection Test

This tests how the system handles a division by zero error.

curl -X POST \
https://your-api-id.execute-api.region.amazonaws.com/prod/review \
-H "Content-Type: application/json" \
-d '{"code_snippet": "def divide(a, b):\n    return a / b\n\nresult = divide(10, 0)"}'

Example 3: Security Vulnerability Test

These checks for SQL injection risks in a query-building function.

curl -X POST \
https://your-api-id.execute-api.region.amazonaws.com/prod/review \
-H "Content-Type: application/json" \
-d '{"code_snippet": "def get_user(user_id):\n    query = \"SELECT * FROM users WHERE id = \" + user_id\n    return execute_query(query)"}'

Make sure to replace your-api-id and region with your actual API Gateway details. You will get output like that shown in the screenshots below.


The AI review for the code appears in the Body section of the response.


Seamless Integration with GitHub, VS Code, and Web Interface

The code review system can easily be integrated into our development workflow. You can connect it with GitHub to trigger automated reviews on pull requests, use it within VS Code through extensions or REST API calls for instant feedback while coding, and even build a simple HTML interface to paste and test code snippets directly in the browser. This makes it accessible and useful across different stages of development.

Below is a representation of the HTML integration.


Results and Impact

The AI-powered code review system effectively identifies a wide range of issues, including runtime errors like division by zero, security vulnerabilities such as SQL injection, performance inefficiencies, and code style problems. It also promotes best practices like proper documentation and error handling. When integrated into development workflows, teams have seen up to a 50% reduction in manual review time, earlier bug detection, consistent code quality across developers, and valuable learning support for junior team members.

Conclusion

We’ve successfully built a production-ready, automated code review system that’s both efficient and scalable. Using advanced AI models through AWS Bedrock, the system delivers deep code analysis covering bugs, security risks, performance issues, and style improvements. Thanks to AWS’s serverless architecture, it remains cost-effective and easy to maintain. Its REST API design allows smooth integration with existing tools and workflows, while the use of managed services ensures scalability and reliability without infrastructure headaches.

Creating Data Lakehouse using Amazon S3 and Athena

As organizations accumulate massive amounts of structured and unstructured data, the need for flexible, scalable, and cost-effective data architectures becomes more important than ever. Growing complexity, the demand for real-time insights, and the need for seamless integration across platforms all push in the same direction. This is where the Data Lakehouse, which combines the best of data lakes and data warehouses, comes into play. In this blog post, we’ll walk through how to build a serverless, pay-per-query Data Lakehouse using Amazon S3 and Amazon Athena.

What Is a Data Lakehouse?

A Data Lakehouse is a modern architecture that blends the flexibility and scalability of data lakes with the structured querying capabilities and performance of data warehouses.

  • Data Lakes (e.g., Amazon S3) allow storing raw, unstructured, semi-structured, or structured data at scale.
  • Data Warehouses (e.g., Redshift, Snowflake) offer fast SQL-based analytics but can be expensive and rigid.

A lakehouse unifies both, enabling:

  • Schema enforcement and governance
  • Fast SQL querying over raw data
  • Simplified architecture and lower cost


Tools We’ll Use

  • Amazon S3: For storing structured or semi-structured data (CSV, JSON, Parquet, etc.)
  • Amazon Athena: For querying that data using standard SQL

This setup is perfect for teams that want low cost, fast setup, and minimal maintenance.

Step 1: Organize Your S3 Bucket

Structure your data in S3 in a way that supports performance:

s3://sample-lakehouse/
└── transactions/
    └── year=2024/
        └── month=04/
            └── data.parquet

Best practices:

  • Use columnar formats like Parquet or ORC
  • Partition by date or region for faster filtering
  • In addition, compressing files (e.g., Snappy or GZIP) can help reduce scan costs (a sketch of producing this layout follows below).
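As a hedged illustration of producing this layout, the pandas sketch below writes partitioned, Snappy-compressed Parquet straight to S3. It assumes the pyarrow and s3fs packages are installed and AWS credentials are configured; the sample rows are invented:

# Sketch: write partitioned Parquet to the lakehouse bucket with pandas.
import pandas as pd

df = pd.DataFrame({
    "transaction_id": ["t-001", "t-002"],
    "customer_id": ["c-100", "c-200"],
    "amount": [120.50, 75.00],
    "transaction_date": ["2024-04-01", "2024-04-02"],
    "year": ["2024", "2024"],
    "month": ["04", "04"],
})

# partition_cols creates the year=/month= directory layout Athena expects.
df.to_parquet(
    "s3://sample-lakehouse/transactions/",
    partition_cols=["year", "month"],
    compression="snappy",
)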

Step 2: Create a Table in Athena

You can create an Athena table manually via SQL. Athena uses a built-in data catalog (backed by AWS Glue).

CREATE EXTERNAL TABLE IF NOT EXISTS transactions (
  transaction_id STRING,
  customer_id STRING,
  amount DOUBLE,
  transaction_date STRING
)
PARTITIONED BY (year STRING, month STRING)
STORED AS PARQUET
LOCATION 's3://sample-lakehouse/transactions/';

Then run:

MSCK REPAIR TABLE transactions;

This tells Athena to scan the S3 directory and register your partitions.

Step 3: Query the Data

Once the table is created, querying is as simple as:

SELECT year, month, SUM(amount) AS total_sales
FROM transactions
WHERE year = '2024' AND month = '04'
GROUP BY year, month;
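To run the same query from code instead of the console, a small boto3 sketch follows; the database name and results location are assumptions to replace with your own:

# Sketch: submit the aggregation query to Athena programmatically.
import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString=(
        "SELECT year, month, SUM(amount) AS total_sales "
        "FROM transactions "
        "WHERE year = '2024' AND month = '04' "
        "GROUP BY year, month"
    ),
    QueryExecutionContext={"Database": "default"},  # assumed database name
    ResultConfiguration={"OutputLocation": "s3://sample-lakehouse/athena-results/"},
)
print("Query execution ID:", execution["QueryExecutionId"])

Athena writes the result set to the OutputLocation; you can poll get_query_execution for completion and fetch rows with get_query_results.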

Benefits of This Minimal Setup

  • Serverless: No infrastructure to manage
  • Fast setup: Just create a table and query
  • Cost-effective: Pay only for storage and queries
  • Flexible: Works with various data formats
  • Scalable: Store petabytes in S3 with ease

Building a data lakehouse using Amazon S3 and Athena offers a modern, scalable, and cost-effective approach to data analytics. With minimal setup and no server management, you can unlock insights quickly while maintaining flexibility and governance, reducing operational overhead and accelerating time-to-value. Whether you’re a startup or an enterprise, this setup provides a foundation for data-driven decision-making at scale and lets teams focus more on innovation and less on infrastructure.

Document Summarization with AI on Amazon Bedrock

Objective

Enable automated document summarization by allowing us to upload TXT, PDF, or DOCX files, extracting content, summarizing it using Amazon Bedrock, and delivering the summary either via API response or by storing it for future retrieval.

Why This Is Needed

  • Organizations face information overload as document volumes keep growing.
  • Manual summarization is time-consuming and inconsistent.
  • AI enables faster, accurate, and scalable content summarization.
  • Amazon Bedrock provides easy access to powerful foundation models without managing infrastructure.
  • Helps improve decision-making by delivering quick, reliable insights.

Architecture Overview


  1. Uploads a document (TXT, PDF, or DOCX) to an S3 bucket.
  2. S3 triggers a Lambda function.
  3. Extracted content is passed to Amazon Bedrock for summarization (e.g., Claude 3 Sonnet).
  4. The summary is stored in Amazon S3.
  5. Lambda returns a response confirming successful summarization and storage.

AWS Services We Used

  • Amazon S3: Used to upload and store original documents like TXT, PDF, or DOCX files.
  • AWS Lambda: Handles the automation logic, triggered by S3 upload, it parses content and invokes Bedrock.
  • Amazon Bedrock: Provides powerful foundation models (Claude, Titan, or Llama 3) for generating document summaries.
  • IAM Roles: Securely manage permissions across services to ensure least-privilege access control.

Step-by-Step Guide

Step 1: Create an S3 Bucket

  1. Navigate to AWS Console → S3 → Create Bucket
  • Example bucket name: kunaldoc-bucket

Note: Use the US East (N. Virginia) region (us-east-1), since Amazon Bedrock is not available in every region (it is not available in Ohio, for example).

  2. Inside the bucket, create two folders:
  • uploads/ – to store original documents (TXT, PDF, DOCX)
  • summaries/ – to save the AI-generated summaries


Step 2: Enable Amazon Bedrock Access

  1. Go to the Amazon Bedrock console.
  2. Navigate to Model access from the left menu.


  3. Select and enable access to the foundation models to be used, such as:
    • Claude 3.5 Sonnet (the one I used)
    • Meta Llama 3
    • Anthropic Claude
  4. Wait for the status to show as Access granted (this may take a few minutes).

Note: Make sure you’re in the same region as your Lambda function (e.g., us-east-1 / N. Virginia).


Step 3: Set Up IAM Role for Lambda

  1. Go to IAM > Roles > Create Role
  2. Choose Lambda as the trusted entity type
  3. Attach these AWS managed policies:
  • AmazonS3FullAccess
  • AmazonBedrockFullAccess
  • AWSLambdaBasicExecutionRole
  4. Name the role something like: LambdaBedrockExecutionRole

This role allows Lambda functions to securely access S3, invoke Amazon Bedrock, and write logs to CloudWatch.


Step 4: Create the Lambda Function

  1. Go to AWS Lambda > Create Function
  2. Set the Function Name: docSummarizerLambda (I used that)
  3. Select Runtime: Python 3.9
  4. Choose the Execution Role you created earlier (LambdaBedrockExecutionRole).
  5. Upload your code:
  • I added the lambda_function.py code to the GitHub repo.
  • Dependencies (like lxml, PDF) are also included in the same GitHub repo.
  • Download the dependencies zip file to your local machine and attach it as a Lambda Layer during Lambda configuration.


This Lambda function handles document parsing, model invocation, and storing the generated summary
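The GitHub repo holds the full version with PDF and DOCX parsing; the reduced sketch below shows the core flow for a plain-text upload, with the Claude model ID given as an assumption to swap for whichever model you enabled:

import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    # Triggered by the S3 PUT event on uploads/; read the new object.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{
                "role": "user",
                "content": [{"type": "text",
                             "text": "Summarize this document:\n\n" + text}],
            }],
        }),
    )
    summary = json.loads(response["body"].read())["content"][0]["text"]

    # Save the summary next to the original, under summaries/.
    out_key = key.replace("uploads/", "summaries/").rsplit(".", 1)[0] + "_summary.txt"
    s3.put_object(Bucket=bucket, Key=out_key, Body=summary.encode("utf-8"))
    return {"statusCode": 200, "body": f"Summary written to {out_key}"}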

Step 5: Set S3 as the Trigger for Lambda

  1. Go to your Lambda function → Configuration → Triggers → Click “Add trigger”
  2. Select Source as S3
  3. Choose the S3 bucket you created earlier (which contains uploads/ and summaries/ folders)
  4. Set the Event type: PUT
  5. Under Prefix, enter: uploads/
  6. Leave Suffix empty (optional)
  7. Click “Add” to finalize the trigger.


This ensures your Lambda function is automatically invoked whenever a new file is uploaded to the uploads/ folder in your bucket.

Step 6: Add Lambda Layer for Dependencies

To include external Python libraries (like lxml, pdfminer.six, or python-docx), create a Lambda Layer:

  1. Download the dependencies ZIP

  • Clone or download the dependencies folder from the GitHub repo.
  2. Create the Layer

  • Go to AWS Lambda > Layers > Create layer
  • Name it (e.g., kc-lambda-layer)
  • Upload the ZIP file you downloaded
  • Set the compatible runtime to Python 3.9
  • Click Create


  3. Attach Layer to Lambda Function

  • Open your Lambda function
  • Go to Configuration > Layers
  • Click Add a layer > Custom layers
  • Select the layer you just created
  • Click Add


The final version of the Lambda function is shown below:


 

Step 7: Upload a Document

  1. Navigate to the S3 > uploads/ folder.
  2. Upload your document


Once uploaded, the Lambda function is automatically triggered and performs the following actions:

  • Sends content to Bedrock for AI-based summarization.
  • Saves the summary in the summaries/ folder in the same S3 bucket.

Sample data of the “Document Summarization with AI on Amazon Bedrock” file:


Step 8: Monitor Lambda Logs in CloudWatch

To debug or verify your Lambda execution:

  1. Go to your Lambda Function in the AWS Console.
  2. Click on the Monitor tab → then View CloudWatch Logs.
  3. Open the Log stream to inspect detailed logs and execution steps.

This helps track any errors or view how the document was processed and summarized.


Step 9: View Output Summary

  1. Navigate to your S3 bucket → open the summaries/ folder.
  2. Download the generated file (e.g., script_summary.txt).


Results

We can see that the summary for the uploaded “Document Summarization with AI” text file is successfully generated and saved as a corresponding _summary.txt file inside the summaries/ folder.


Conclusion

With this serverless workflow, you’ve built an automated document summarization pipeline using Amazon S3, Lambda, and Bedrock. This solution allows us to upload documents in various formats (TXT, PDF, DOCX) and receive concise summaries stored securely in S3 without manual intervention. It’s scalable, cost-effective, and ideal for document-heavy workflows like legal, academic, or business reporting.

We can further enhance it by adding an API Gateway to fetch summaries on demand or integrating DynamoDB for indexing and search.

Moderate Image Uploads with AI/GenAI & AWS Rekognition

As we all know, in the world of reels, photos, and videos, everyone is creating content and uploading it to public-facing applications such as social media. There is often no control over the type of images users upload to a website. Here, we will discuss how to restrict inappropriate images.

The AWS Rekognition service can help you with this. Rekognition’s content moderation can detect inappropriate or unwanted content and returns moderation labels with confidence levels for each image. By using these labels, you can not only keep your business site compliant but also save a lot of cost, as you pay only for what you use, with no minimum fees, licenses, or upfront commitments.

This will require Lambda and API Gateway.

Implementing AWS Rekognition

To implement the solution, you must create a Lambda function and an API Gateway.

The flow of the solution is as follows: the client sends an image to the API Gateway, which invokes the Lambda function; the function calls AWS Rekognition and returns a safe or unsafe verdict.

Lambda

To create the Lambda function, follow the steps below:

  1. Go to the AWS Lambda service, click Create function, fill in the details, and click Create function.

  2. Make sure to add the permission below to the Lambda execution role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "rekognition:*",
            "Resource": "*"
        }
    ]
}

  3. You can use the Python code below:

The sample Python code below sends the image to the AWS Rekognition service, retrieves the moderation labels, and based on them decides whether the image is safe or unsafe.

import base64
import json

import boto3

rekognition = boto3.client('rekognition')

# Moderation label names that should mark an asset as unsafe
filter_keywords = ["Weapons","Graphic Violence","Death and Emaciation","Crashes","Products","Drugs & Tobacco Paraphernalia & Use","Alcohol Use","Alcoholic Beverages","Explicit Nudity","Explicit Sexual Activity","Sex Toys","Non-Explicit Nudity","Obstructed Intimate Parts","Kissing on the Lips","Female Swimwear or Underwear","Male Swimwear or Underwear","Middle Finger","Swimwear or Underwear","Nazi Party","White Supremacy","Extremist","Gambling"]

def check_for_unsafe_keywords(response: str):
    response_lower = response.lower()
    return [keyword for keyword in filter_keywords if keyword.lower() in response_lower]

def lambda_handler(event, context):
    # Assumption: API Gateway delivers the image base64-encoded in the request body
    image_bytes = base64.b64decode(event['body'])

    # Ask Rekognition for moderation labels with at least 80% confidence
    responses = rekognition.detect_moderation_labels(
        Image={'Bytes': image_bytes},
        MinConfidence=80
    )
    print(responses)

    # Scan the stringified response for any of the flagged label names
    unsafe = check_for_unsafe_keywords(str(responses))
    if unsafe:
        print("Unsafe keywords found:", unsafe)
        return {
            'statusCode': 403,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({
                "Unsafe": "Asset is Unsafe",
                "labels": unsafe
            })
        }
    else:
        print("No unsafe content detected.")
        return {
            'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({
                "safe": "Asset is safe",
                "labels": unsafe
            })
        }

  4. Then click the Deploy button to deploy the code.

AWS API Gateway

You need to create an API Gateway endpoint that passes the incoming image to the Lambda function for processing with AWS Rekognition and then returns the response to the user.

Sample API Integration:



 

Once this is all set up, when you send an image to the API Gateway in the request body, you will receive a response indicating whether the asset is safe or unsafe.
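
As a quick test, a client can post an image to the endpoint. The sketch below is illustrative: the URL is a hypothetical placeholder for your deployed API Gateway stage, and it assumes the Lambda expects a base64-encoded body as in the code above.

import base64
import urllib.request
from urllib.error import HTTPError

# Hypothetical endpoint for your deployed API Gateway stage
url = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/moderate'

with open('photo.jpg', 'rb') as f:
    payload = base64.b64encode(f.read())

req = urllib.request.Request(
    url,
    data=payload,
    headers={'Content-Type': 'text/plain'},
    method='POST',
)

try:
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode('utf-8'))
except HTTPError as err:
    # The Lambda returns 403 for unsafe images; the body still carries JSON
    print(err.code, err.read().decode('utf-8'))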


Conclusion

With this solution, your application can prevent the upload of inappropriate or unwanted images. It is cost-effective and helps keep your site compliant.

 

]]>
https://blogs.perficient.com/2025/07/23/moderate-image-uploads-with-ai-genai-aws-rekognition/feed/ 0 385005
Mitigate DNS Vulnerabilities Proactively with Amazon Route 53 Resolver DNS Firewall https://blogs.perficient.com/2025/07/02/mitigate-dns-vulnerabilities-proactively-with-amazon-route-53-resolver-dns-firewall/ https://blogs.perficient.com/2025/07/02/mitigate-dns-vulnerabilities-proactively-with-amazon-route-53-resolver-dns-firewall/#respond Wed, 02 Jul 2025 10:48:29 +0000 https://blogs.perficient.com/?p=383529

In today’s cloud-first world, securing your DNS layer is more critical than ever. DNS (Domain Name System) is a foundational element of network infrastructure, but it’s often overlooked as a security risk. Attackers frequently exploit DNS to launch phishing campaigns, exfiltrate data, and communicate with command-and-control servers. Proactive DNS security is no longer optional – it’s essential.

To strengthen DNS-layer security, Amazon Route 53 Resolver DNS Firewall provides robust control over DNS traffic by enabling the use of domain lists, allowing specific domains to be explicitly permitted or denied. Complementing these custom lists are AWS Managed Domain Lists, which autonomously block access to domains identified as malicious, leveraging threat intelligence curated by AWS and its trusted security partners. While this method is highly effective in countering known threats, cyber adversaries are increasingly employing sophisticated evasion techniques that go undetected by conventional blocklists. In this blog, I’ll explore DNS vulnerabilities, introduce Route 53 Resolver DNS Firewall, and walk you through practical strategies to safeguard your cloud resources.

Route 53 Resolver DNS Firewall Advanced closes this gap. By analyzing attributes such as query entropy, length, and frequency, the service can detect and intercept potentially harmful DNS traffic, even when interacting with previously unknown domains. This proactive approach enhances defense against advanced tactics, such as DNS tunneling and domain generation algorithms (DGAs), which attackers often use to establish covert communication channels or maintain malware connectivity with command-and-control servers.

In this blog, I’ll guide you through a hands-on journey into the world of DNS-layer threats and the tools available to defend against them. You’ll discover how to configure effective Route 53 Resolver DNS Firewall Advanced rules. I’ll also walk through a real-world threat detection scenario, demonstrating how the service seamlessly integrates with AWS Security Hub to provide enhanced visibility and actionable alerts. By the end of this post, you’ll be equipped with the knowledge to implement DNS Firewall rules that deliver intelligent, proactive protection for your AWS workloads.

Risks Linked to DNS Tunneling and Domain Generation Algorithms

DNS tunneling and Domain Generation Algorithms (DGAs) are sophisticated techniques employed by cyber adversaries to establish hidden communication channels and evade traditional security measures.

DNS Tunneling: This method exploits the DNS protocol by encapsulating non-DNS data within DNS queries and responses. Since DNS traffic is typically permitted through firewalls and security devices to facilitate normal internet operations, attackers leverage this trust to transmit malicious payloads or exfiltrate sensitive data without detection. The risks associated with DNS tunneling are significant, including unauthorized data transfer, persistent command-and-control (C2) communication, and the potential for malware to bypass network restrictions. Detecting such activity requires vigilant monitoring for anomalies such as unusually large DNS payloads, high-frequency queries to unfamiliar domains, and irregular query patterns.

Domain Generation Algorithms (DGAs): DGAs enable malware to generate a vast number of pseudo-random domain names, which are used to establish connections with Command and Control (C2) servers. This dynamic approach makes it challenging for defenders to block malicious domains using traditional blacklisting techniques, as the malware can swiftly switch to new domains if previous ones are taken down. The primary risks posed by DGAs include the resilience of malware infrastructures, difficulty in predicting and blocking malicious domains, and the potential for widespread distribution of malware updates. Effective mitigation strategies involve implementing advanced threat intelligence, machine learning models to detect anomalous domain patterns, and proactive domain monitoring to identify and block suspicious activities.

Understanding and addressing the threats posed by DNS tunneling and DGAs are crucial for maintaining robust cybersecurity defenses.

Let’s See How DNS Firewall Works

Route 53 Resolver DNS Firewall Advanced enhances DNS-layer security by intelligently analyzing DNS queries in real time to detect and block threats that traditional firewalls or static domain blocklists might miss. Here’s a breakdown of how it operates:

  1. Deep DNS Query Inspection

When a DNS query is made from resources within your VPC, it is routed through the Amazon Route 53 Resolver. DNS Firewall Advanced inspects each query before it is resolved. It doesn't just match the domain name against a list; it analyzes the structure, behavior, and characteristics of the domain itself.

  2. Behavioral Analysis Using Machine Learning

The advanced firewall uses machine learning models trained on massive datasets of real-world domain traffic. These models understand what “normal” DNS behavior looks like and can flag anomalies such as:

  • Randomized or algorithm-generated domain names (used by DGAs)
  • Unusual query patterns
  • High entropy in domain names
  • Excessive subdomain nesting (common in DNS tunneling)

This allows it to detect suspicious domains, even if they’ve never been seen before.
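
To make the entropy signal concrete, here is a small, illustrative sketch of the underlying idea (not AWS code): machine-generated labels have a measurably flatter character distribution than human-chosen names.

import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    # Bits per character over the label's character distribution
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A human-chosen name scores far lower than a DGA-style random label
print(round(shannon_entropy('amazon'), 2))                    # 2.25
print(round(shannon_entropy('a3f9c1d8e7b2a6f4c0d9e8b7'), 2))  # 3.83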

  3. Confidence Thresholds

Each suspicious query is scored based on how closely it resembles malicious behavior. You can configure one of three confidence levels, High, Medium, or Low:

  • High Confidence: Detects obvious threats, with minimal false positives (ideal for production).
  • Medium Confidence: Balanced sensitivity for broader detection.
  • Low Confidence: Aggressive detection for highly secure or test environments.

  4. Action Controls (Block, Alert, Allow)

Based on your configured rules and confidence thresholds, the firewall can:

  • Block the DNS query
  • Alert (log the suspicious activity, but allow the query)
  • Allow known safe queries

These controls give you flexibility to tailor the firewall’s behavior to your organization’s risk tolerance.

  5. Rule Groups and Customization

You can organize rules into rule groups, apply AWS Managed Domain Lists, and define custom rules based on your environment’s needs. You can also associate these rule groups with specific VPCs, ensuring DNS protection is applied at the network boundary.

  6. Real-Time Response Without Latency

Despite performing deep inspections, the firewall processes each DNS request in under a millisecond. This ensures there is no perceptible impact on application performance.

Route 53 DNS Firewall logs can be ingested into CloudWatch and analyzed through Contributor Insights.

Demonstration

To begin, I’ll demonstrate how to manually create a Route 53 Resolver DNS Firewall Advanced rule using the AWS Management Console. This rule will be configured to block DNS queries identified as high-confidence DNS tunneling attempts.

Step 1: Navigate to Route 53 Resolver DNS Firewall

  • Sign in to the AWS Management Console.
  • In the search bar, type “Route 53” and select “Route 53 Resolver”.
  • In the left navigation pane, choose “DNS Firewall Rule groups” under the DNS Firewall section.


Step 2: Create a New Rule Group

  • Click on “Create rule group”.
  • Enter a name and an optional description (e.g., BlockHighConfidenceDNS).
  • Click Next to proceed to add rules.


Step 3: Add a Rule to the Rule Group

  • Click “Add rule”.


  • For Rule name, enter a name (e.g., BlockTunnelingHighConfidence).


  • Under DNS Firewall Advanced protection:
    1. Select DNS tunneling detection.
    2. For the Confidence threshold, select High.
    3. Leave the Query Type field blank to apply the rule to all query types.
  • Under the Action Section:
    1. Set the Action to Block.
    2. For the Response type, choose OVERRIDE.
    3. In the Record value field, enter: dns-firewall-advanced-block.
    4. For the Record type, select CNAME.
    5. Click Add rule to save the configuration.

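The same configuration can be scripted. The sketch below uses boto3's route53resolver client; the DnsThreatProtection and ConfidenceThreshold parameter names are my reading of the DNS Firewall Advanced API, so verify them against the current SDK documentation before relying on this.

import boto3

resolver = boto3.client('route53resolver')

# Create the rule group (names and request IDs are illustrative)
group = resolver.create_firewall_rule_group(
    CreatorRequestId='block-high-confidence-dns-1',
    Name='BlockHighConfidenceDNS',
)['FirewallRuleGroup']

# Add an Advanced rule that blocks high-confidence DNS tunneling,
# overriding the response with a controlled CNAME
resolver.create_firewall_rule(
    CreatorRequestId='block-tunneling-high-1',
    FirewallRuleGroupId=group['Id'],
    Name='BlockTunnelingHighConfidence',
    Priority=100,
    Action='BLOCK',
    BlockResponse='OVERRIDE',
    BlockOverrideDomain='dns-firewall-advanced-block',
    BlockOverrideDnsType='CNAME',
    BlockOverrideTtl=300,
    DnsThreatProtection='DNS_TUNNELING',   # assumption: Advanced parameter name
    ConfidenceThreshold='HIGH',            # assumption: Advanced parameter name
)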

Monitoring and Insights

Route 53 Resolver query logging offers comprehensive visibility into DNS queries originating from resources within your VPCs, allowing you to monitor and analyze DNS traffic for both security and compliance purposes. When enabled, query logging captures key details for each DNS request—such as the queried domain name, record type, response code, and the source VPC or instance. This capability becomes especially powerful when paired with Route 53 Resolver DNS Firewall, as it enables you to track blocked DNS queries and refine your security rules based on real traffic behavior within your environment. Below are sample log entries generated when the DNS Firewall identifies and acts upon suspicious activity, showcasing the depth of information available for threat analysis and incident response.

Example log entry: DNS tunneling block

The following is an example of a DNS tunneling block.

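As an illustration of what such an entry contains (all identifiers and values below are hypothetical), a blocked tunneling query is logged roughly like this:

{
  "account_id": "111122223333",
  "region": "us-east-1",
  "vpc_id": "vpc-0abc1234",
  "query_timestamp": "2025-06-15T10:42:11Z",
  "query_name": "a3f9c1d8e7b2a6f4c0d9e8b7.badexample.com.",
  "query_type": "TXT",
  "query_class": "IN",
  "rcode": "NXDOMAIN",
  "answers": [
    {
      "Rdata": "dns-firewall-advanced-block.",
      "Type": "CNAME",
      "Class": "IN"
    }
  ],
  "srcids": { "instance": "i-0123456789abcdef0" },
  "firewall_rule_action": "BLOCK",
  "firewall_protection": "DNS_TUNNELING"
}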

Key Indicators of DNS Tunneling

  • query_name: Very long, random-looking domain name—typical of data being exfiltrated via DNS.
  • rcode: NXDOMAIN indicates no valid domain exists—often seen in tunneling.
  • answers: The query response was overridden with a controlled CNAME (dns-firewall-advanced-block.).
  • firewall_rule_action: Shows this was an intentional BLOCK action.
  • firewall_protection: Labeled as DNS_TUNNELING, indicating why the query was blocked.
  • srcids: Helps trace back to the source EC2 instance making the suspicious request.

Example log entry: DNS tunneling alert

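An alert entry looks similar, except the query is allowed through and the action is recorded as ALERT. Again, the values are illustrative:

{
  "query_name": "x7d2k9q4m1z8w5v3.badexample.com.",
  "query_type": "TXT",
  "rcode": "NOERROR",
  "srcids": { "instance": "i-0123456789abcdef0" },
  "firewall_rule_action": "ALERT",
  "firewall_protection": "DNS_TUNNELING"
}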

Use Case

This type of alert is useful in:

  • Monitoring mode during firewall tuning.
  • Staging environments where you want visibility without enforcement.
  • Incident investigations—tracking which resources may be compromised or leaking data.

Final Thoughts

Amazon Route 53 Resolver DNS Firewall Advanced marks a significant advancement in protecting organizations against sophisticated DNS-layer threats. As discussed, DNS queries directed to the Route 53 Resolver take a distinct route that bypasses conventional AWS security measures such as security groups, network ACLs, and even AWS Network Firewall, introducing a potential security blind spot within many environments.

In this post, I’ve examined how attackers exploit this gap using techniques like DNS tunneling and domain generation algorithms (DGAs), and how Route 53 Resolver DNS Firewall Advanced leverages real-time pattern recognition and anomaly detection to mitigate these risks. You also explored how to set up the service via the AWS Management Console and deploy it using a CloudFormation template that includes pre-configured rules to block high-confidence threats and alert on suspicious activity. Additionally, you saw how enabling query logging enhances visibility into DNS behavior and how integrating with AWS Security Hub consolidates threat insights across your environment.

By adopting these capabilities, you can better safeguard your infrastructure from advanced DNS-based attacks that traditional blocklists often miss, strengthening your cloud security posture without compromising performance.

]]>
https://blogs.perficient.com/2025/07/02/mitigate-dns-vulnerabilities-proactively-with-amazon-route-53-resolver-dns-firewall/feed/ 0 383529
Building Smarter APIs with OpenAPI, AWS Bedrock & SageMaker Studio in Drupal 10 https://blogs.perficient.com/2025/06/20/building-smarter-apis-with-openapi-aws-bedrock-sagemaker-studio-in-drupal-10/ https://blogs.perficient.com/2025/06/20/building-smarter-apis-with-openapi-aws-bedrock-sagemaker-studio-in-drupal-10/#respond Fri, 20 Jun 2025 14:24:19 +0000 https://blogs.perficient.com/?p=383209

As AI continues to reshape how we build digital experiences, combining cloud-based AI services with modern CMS platforms like Drupal is becoming the new normal. Whether you’re looking to power up content generation, provide smart recommendations, or summarize long-form text — this blog walks you through using OpenAPI, AWS Bedrock, and Amazon SageMaker Studio in a Drupal 10 environment. 

Introduction to AI Services 

  • What is AWS Bedrock? 

AWS Bedrock is a fully managed service that allows you to build and scale generative AI applications using foundation models (FMs) from leading AI companies like Anthropic (Claude), Meta (LLaMA), Stability AI, and Amazon’s own Titan models — all without having to train or manage your own infrastructure. 

Key Capabilities: 

  • Text summarization 
  • Q&A bots 
  • Content creation 
  • Code generation 

It’s serverless, fast, and integrates easily with other AWS services. 

  • What is Amazon SageMaker Studio? 

SageMaker Studio is a web-based, end-to-end ML development environment that enables data scientists and engineers to: 

  • Clean and prepare data 
  • Train, tune, and deploy machine learning models 
  • Monitor performance 
  • Run real-time inferences 

It provides a visual interface for managing the ML lifecycle and integrates with AWS Bedrock to leverage pre-trained foundation models. 

Use SageMaker when you want full control over your model pipeline, including custom training, while still tapping into AWS-hosted tools. 

  • What is OpenAPI Schema? 

OpenAPI (formerly known as Swagger) is an industry-standard specification used to describe RESTful APIs in a machine-readable format (YAML or JSON). It helps developers: 

  • Define API endpoints
  • Standardize request/response formats
  • Document authentication and parameters
  • Auto-generate SDKs and test tools

  • Sample OpenAPI Schema (YAML)

paths:
  /summary:
    post:
      summary: Get summary from Bedrock
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                prompt:
                  type: string
      responses:
        '200':
          description: AI-generated summary
 

This schema becomes the contract between your AI service and the Drupal frontend. 

 

How to Call AWS Bedrock or SageMaker and Get JSON Data 

Here’s how you can trigger Bedrock from an external app and get a JSON response: 

  • Sample Python (Boto3) Code: 

import boto3
import json

bedrock = boto3.client('bedrock-runtime', region_name='us-east-1')

response = bedrock.invoke_model(
    modelId='anthropic.claude-v2',
    body=json.dumps({
        "prompt": "Summarize this article on climate change.",
        "max_tokens_to_sample": 300
    }),
    contentType='application/json',
    accept='application/json'
)

print(response['body'].read().decode('utf-8'))
 

Sample Output (JSON): 

{
  "completion": "The article discusses climate change trends, policy updates, and future predictions…"
}
 

You now have usable JSON data that can be consumed by any CMS including Drupal. 

 

How to Integrate JSON Data in Drupal 10
 

  • Create a REST Resource Plugin 

Define a custom REST endpoint in Drupal that acts as middleware: 

  • Accepts content or a request 
  • Sends it to Bedrock or SageMaker 
  • Returns the AI response in real time 

This is perfect if you want Drupal to act as a bridge between the editor and the AI model. 

Conclusion 

The combination of OpenAPI, AWS Bedrock, and SageMaker Studio offers a scalable and intelligent backend for modern web applications. With Drupal 10 acting as the frontend layer, you can create experiences that are dynamic, personalized, and AI-powered — all while maintaining control and security. 

 

]]>
https://blogs.perficient.com/2025/06/20/building-smarter-apis-with-openapi-aws-bedrock-sagemaker-studio-in-drupal-10/feed/ 0 383209
Boost Cloud Efficiency: AWS Well-Architected Cost Tips https://blogs.perficient.com/2025/06/09/boost-cloud-efficiency-aws-well-architected-cost-tips/ https://blogs.perficient.com/2025/06/09/boost-cloud-efficiency-aws-well-architected-cost-tips/#respond Mon, 09 Jun 2025 06:36:11 +0000 https://blogs.perficient.com/?p=378814

In today’s cloud-first world, building a secure, high-performing, resilient, and efficient infrastructure is more critical than ever. That’s where the AWS Well-Architected Framework comes in: a powerful guide designed to help architects and developers make informed decisions and build better cloud-native solutions.

What is the AWS Well-Architected Framework?

The AWS Well-Architected Framework provides a consistent approach for evaluating and improving your cloud architecture. It’s built around six core pillars that represent key areas of focus for building robust and scalable systems:

  • Operational Excellence – Continuously monitor and improve systems and processes.
  • Security – Protect data, systems, and assets through risk assessments and mitigation strategies.
  • Reliability – Ensure workloads perform as intended and recover quickly from failures.
  • Performance Efficiency – Use resources efficiently and adapt to changing requirements.
  • Cost Optimization – Avoid unnecessary costs and maximize value.
  • Sustainability – Minimize environmental impact by optimizing resource usage and energy consumption.


Explore the AWS Well-Architected Framework here: https://aws.amazon.com/architecture/well-architected

AWS Well-Architected Timeline

From time to time, AWS makes changes to the framework and introduces new resources that we can follow to better fit our use cases and build better architectures.


AWS Well-Architected Tool

To help you apply these principles, AWS offers the Well-Architected Tool—a free service that guides you through evaluating your workloads against the six pillars.

How it Works:

  • Select a workload.
  • Answer a series of questions aligned with the framework.
  • Review insights and recommendations.
  • Generate reports and track improvements over time.
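
The review data in the tool is also accessible programmatically. Below is a minimal sketch using boto3's wellarchitected client; it assumes at least one workload has already been defined, and 'wellarchitected' is the alias of the standard framework lens.

import boto3

wa = boto3.client('wellarchitected')

# List workloads defined in the Well-Architected Tool and summarize
# the review of the standard framework lens for each
for wl in wa.list_workloads()['WorkloadSummaries']:
    print(wl['WorkloadName'], wl['WorkloadId'])

    review = wa.get_lens_review(
        WorkloadId=wl['WorkloadId'],
        LensAlias='wellarchitected',
    )['LensReview']

    # RiskCounts maps risk levels (e.g., HIGH, MEDIUM) to question counts
    print('  Risk counts:', review.get('RiskCounts', {}))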

Try the AWS Well-Architected Tool here: https://aws.amazon.com/well-architected-tool/

Go Deeper with Labs and Lenses

AWS also provides:

  • Well-Architected Labs – hands-on labs with documentation and code that let you practice the framework’s best practices.
  • Well-Architected Lenses – guidance that extends the framework to specific domains and workload types, such as serverless, SaaS, and machine learning.

Deep Dive: Cost Optimization Pillar

Cost Optimization is not just about cutting costs—it’s about maximizing value. It ensures that your cloud investments align with business goals and scale efficiently.

Why It Matters:

  • Understand your spending patterns.
  • Ensure costs support growth, not hinder it.
  • Maintain control as usage scales.

5 Best Practices for Cost Optimization

  1. Practice Cloud Financial Management
  • Build a cost optimization team.
  • Foster collaboration between finance and tech teams.
  • Use budgets and forecasts.
  • Promote cost-aware processes and culture.
  • Quantify business value through automation and lifecycle management.
  2. Expenditure and Usage Awareness
  • Implement governance policies.
  • Monitor usage and costs in real time (see the sketch after this list).
  • Decommission unused or underutilized resources.
  3. Use Cost-Effective Resources
  • Choose the right services and pricing models.
  • Match resource types and sizes to workload needs.
  • Plan for data transfer costs.
  4. Manage Demand and Supply
  • Use auto-scaling, throttling, and buffering to avoid over-provisioning.
  • Align resource supply with actual demand patterns.
  5. Optimize Over Time
  • Regularly review new AWS features and services.
  • Adopt innovations that reduce costs and improve performance.
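
To make expenditure and usage awareness concrete, here is a minimal sketch that pulls last month's spend per service using boto3's Cost Explorer client. It assumes Cost Explorer is enabled on the account; the date range is illustrative.

import boto3

ce = boto3.client('ce')  # Cost Explorer

# Monthly unblended cost, grouped by service (dates are illustrative)
result = ce.get_cost_and_usage(
    TimePeriod={'Start': '2025-05-01', 'End': '2025-06-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}],
)

# Print a simple per-service cost breakdown
for group in result['ResultsByTime'][0]['Groups']:
    service = group['Keys'][0]
    amount = float(group['Metrics']['UnblendedCost']['Amount'])
    print(f'{service}: ${amount:,.2f}')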

Conclusion

The AWS Well-Architected Framework is more than a checklist—it’s a mindset. By embracing its principles, especially cost optimization, you can build cloud environments that are not only efficient and scalable but also financially sustainable.

]]>
https://blogs.perficient.com/2025/06/09/boost-cloud-efficiency-aws-well-architected-cost-tips/feed/ 0 378814