The Colorful World of Azure DevOps Boards

Out of the box, Azure DevOps provides black-and-white capabilities for supporting a project and its code repository. Over time, teams establish and settle into work processes, often continuing to use those basic settings, which can make day-to-day operation feel mundane and, perhaps, risk losing sight of the end goal.

Even if a project is customized in terms of workflow, custom state options, or custom fields, sometimes it is still difficult to know where things stand and what is important to focus on.

There are a few ways in which Azure DevOps can aid in making those items visible and obvious, to better help guide a team.

Leverage color to draw attention

When viewing a Board in Azure DevOps, it can be overwhelming to scan for specific work items. Consider: what is most important for the team to complete or prioritize, and what unique identifier could be used to locate those items? These are the items we want the team to notice and work on first.

There are a couple of ways in which Azure DevOps allows us to style work items on a board:

  • Card Styles
  • Tag Colors

Let’s take an example of Card Styles: we want the client to quickly and easily see whether items on the Board are blocked. Under Board Settings > Cards > Styles, we can apply a rule so that any work item containing the tag ‘Blocked’ appears red on the board.

Example Settings:

Blocked Card Style Settings 

Example Card Preview:

Blocked Card

Another use case for Card Styles: we want team members to prioritize and focus on any Bug work items with a Priority of 1. In the same settings dialog, we can add a second styling rule so that any Bug work item with a Priority of ‘1’ appears yellow. This makes those Priority 1 Bugs easy to spot on the board and obvious to any team member assigned to one.

Example Card Preview:

Priority 1 Card

Let’s look at one more use case – we want our team to easily recognize work items containing the tag ‘content.’ In this example, the tag means the work item will require manual content steps along with the code changes. Under Board Settings > Cards > Tag Colors, we can configure a rule so that this specific tag appears in pink on the board.

Example Card Preview:

Content Tags

TIP: While it is great to apply color styling rules to work items, it is best to reserve those rules for items needing specific, frequent attention. Consider this before applying any styling setting to a project’s Board.

Find key details in a dash on your Dashboard

Dashboards are a fantastic way to provide fast, summary information about the progress of a team or project. Consider creating dashboards to display the results of queries you often find yourself referencing for reporting or oversight. As with the Backlog and Board views, keep dashboards focused on the most valuable information, and make it easy to see by placing the most important widgets at the top of the dashboard.

In the example below, the team wanted an automated way of finding work items that were misplaced in the backlog or missing tags. A series of queries was created to surface matching results. In the first screenshot, there are no results and all the tiles read 0 – this is the ideal state. In the second screenshot, one of the tables has results and three of the tiles show a count of 1, in which case the tile is configured to turn red. This makes it very easy for a team member to notice and act so that those work items are addressed quickly.

Screenshot 1:

Dashboard Ex 1

Screenshot 2:

Dashboard Ex 2
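The queries behind tiles like these can also be run outside the dashboard, for example from a script that feeds a report or a chat notification. Below is a minimal sketch, assuming the azure-devops-node-api package and a personal access token in an environment variable; the organization URL and project name are placeholders, and the ‘Blocked’ tag mirrors the card style example earlier in the post.

import * as azdev from 'azure-devops-node-api';

const orgUrl = 'https://dev.azure.com/your-org'; // placeholder organization URL
const token = process.env.AZURE_DEVOPS_PAT ?? ''; // personal access token

// Runs a WIQL query for open work items tagged 'Blocked', mirroring the card style rule
async function findBlockedItems(project: string) {
    const connection = new azdev.WebApi(orgUrl, azdev.getPersonalAccessTokenHandler(token));
    const wit = await connection.getWorkItemTrackingApi();
    const result = await wit.queryByWiql(
        {
            query:
                "SELECT [System.Id], [System.Title] FROM WorkItems " +
                "WHERE [System.TeamProject] = @project " +
                "AND [System.Tags] CONTAINS 'Blocked' " +
                "AND [System.State] <> 'Closed'",
        },
        { project }
    );
    return result.workItems ?? [];
}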

 

TIPS:

    • Create multiple dashboards, each with its own purpose, rather than overloading one or two dashboards with too much information.
    • The ‘Chart for Work Items’ widget on the dashboards also allows for the color options to be customized. Consider this in cases where you want to draw attention to a specific attribute, such as work item State.

Paint the picture for your team

To keep the team focused and out of a mundane work pattern, keep the most important data in Azure DevOps accessible and visible on project Boards, Backlogs, and Dashboards. Use visual indicators like color so the team can quickly find what matters most and spend their time efficiently on the project’s goal.

These simple tips and tricks will help you paint a masterpiece of a project that keeps both the team and the client engaged.

Creating a detailed Content Audit & Mapping Strategy for your next site build

A successful website is more than just a collection of pages. It’s a structured ecosystem where every piece of content serves a purpose, or at least it should. When building a new site or migrating to a new platform, content mapping is a critical step that determines how information is organized, accessed, and optimized for performance. Without a thoughtful strategy, businesses risk losing valuable content, creating navigation confusion, and hurting search visibility. Content mapping should be a process that is continually reviewed and refined.

Content mapping starts with a deep understanding of what already exists and how it needs to evolve. This process is especially important when working with both structured and unstructured data—two very different types of content that require distinct approaches. Structured data, such as product catalogs, customer profiles, or metadata, follows a defined format and is easier to categorize. Unstructured data, including blog posts, images, and videos, lacks a rigid framework and demands extra effort to classify, tag, and optimize. While structured data migration is often straightforward, unstructured content requires strategic planning to ensure it remains accessible and meaningful within a new digital experience.

Why a Content Audit is Important

A content audit is the first step in developing a solid content mapping strategy. This involves evaluating existing content to determine what should be migrated, what needs to be refined, and what should be left behind. Without this step, businesses risk carrying over outdated or redundant content, which can clutter the new site and dilute the user experience.

A well-executed audit not only catalogs content but also assesses its performance. Understanding which pages drive the most engagement and which fail to connect with audiences helps inform how content is structured in the new environment. This process also highlights gaps—areas where fresh content is needed to align with business goals or audience expectations.

Beyond performance, content audits reveal inconsistencies in voice, formatting, or taxonomy. A new site presents an opportunity to standardize these elements, ensuring that every piece of content follows best practices for branding, SEO, and user experience.

Taxonomy and Metadata

Once content is audited and mapped, the next step is defining a clear taxonomy and metadata strategy. Taxonomy refers to how content is classified and grouped, making it easier for users to navigate and find relevant information. Metadata, on the other hand, provides the structured details that power search functionality, personalization, and content recommendations.

Without proper taxonomy, even high-quality content can become buried and difficult to access. Establishing consistent tagging, categorization, and metadata ensures that content remains discoverable, whether through site search, filtering options, or AI-driven recommendations. This is particularly important when transitioning to platforms like Acquia, Sitecore, or Optimizely, where personalization and dynamic content delivery depend on well-structured metadata.

Additionally, URL consistency and redirect strategies play a crucial role in maintaining SEO authority. A content mapping plan should account for how legacy URLs will transition to the new site, preventing broken links and preserving search rankings.
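As a concrete illustration, a legacy-to-new URL map can be expressed directly in code. The sketch below assumes an Express-style Node server and uses made-up paths; the same mapping could just as easily live in a CMS redirect module or a CDN rule set.

import express from 'express';

// Hypothetical legacy-to-new URL map produced during the content audit
const redirects: Record<string, string> = {
    '/old-blog/content-strategy': '/insights/content-strategy',
    '/products.aspx?id=42': '/products/example-product',
};

const app = express();

app.use((req, res, next) => {
    const target = redirects[req.originalUrl];
    if (target) {
        // A 301 signals a permanent move, which preserves search rankings
        return res.redirect(301, target);
    }
    next();
});

app.listen(3000);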

Scalability and Future Growth

Content mapping is not just about migrating existing assets—it’s about creating a structure that supports long-term digital success. The best content strategies anticipate future growth, ensuring that new content can be seamlessly integrated without disrupting site architecture.

This means designing a content model that accommodates personalization, omnichannel distribution, and AI-driven enhancements. As businesses scale, the ability to dynamically deliver content across different devices and user segments becomes increasingly important. Content mapping lays the foundation for this flexibility, making it easier to adapt and evolve without requiring constant restructuring.

A Seamless Digital Experience

A well-planned content mapping strategy transforms website migration from a logistical challenge into a strategic opportunity. By auditing existing content, defining clear taxonomy and metadata structures, and building for scalability, businesses can create a site that is not only organized but optimized for engagement and performance.

Content is the heart of any digital experience, but without proper mapping, it can become fragmented and difficult to manage. Taking the time to strategically align content with user needs, business goals, and technological capabilities ensures that a new site isn’t just a fresh coat of paint—it’s a true step forward in delivering meaningful digital experiences.

Ramp Up On React/React Native In Less Than a Month

I’ve had plenty of opportunities to guide developers new to the React and React Native frameworks. While everyone is different, I wanted to provide a structured guide to help bring a fresh developer into the React fold.

Prerequisites

This introduction to React is intended for a developer who has at least some experience with JavaScript, HTML, and basic coding practices.

Ideally, this person has coded at least one project using JavaScript and HTML. This experience will aid in understanding the syntax of components, but any aspiring developer can learn from it as well.

 

Tiers

There are several tiers of beginner-level programmers who would like to learn React and are looking for someone like you to help them get up to speed.

Beginner with little knowledge of JavaScript and/or HTML

For a developer like this, I would recommend building introductory JavaScript and HTML knowledge first, perhaps through a simple programming exercise or an online course, before introducing them to React. You can compare JavaScript to a language they are familiar with and cover core concepts. A basic online guide should be sufficient to get them up and running with HTML.

Junior/Intermediate with some knowledge of JavaScript and/or HTML

I would go over some basics of JavaScript and HTML to make sure they have enough to grasp the syntax and terminology used in React. A supplementary course or online guide might be a good refresher before introducing them to modern concepts.

Seasoned developer who hasn’t used React

Even if they haven’t used JavaScript or HTML much, they should be able to ramp up quickly. Reading through React documentation should be enough to jumpstart the learning process.

 

Tips and Guidelines

You can begin their React and React Native journey with the following guidelines:

React Documentation

The React developer documentation is a great place to start if the developer has absolutely no experience or is just starting out. It provides meaningful context in the differences between standard JavaScript and HTML and how React handles them. It also provides a valuable reference on available features and what you can do within the framework.

Pro tip: I recommend starting them right off with functional components. They are more widely used and often have better performance, especially with hooks. I personally find them easier to work with as well.

Class component:

class MyButton extends React.Component {
    render() {
        return (
            <button>I'm a button</button>
        );
    }
}

 

Functional component:

const MyButton = () => {
    return (
        <button>I'm a button</button>
    )
}

 

With such a small example the difference is mostly syntactic, but it becomes much more significant once you introduce hooks, which are only available in functional components. Hooks let you extract functionality into a reusable container, keeping logic separate so it can be shared across components. There are also several built-in hooks that make life easier; hooks always start with “use” (useState, useRef, etc.), and you can create custom hooks for your own logic.
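Here is a small illustration of a built-in hook in a functional component; the component name and behavior are just an example, not something from a specific project.

import { useState } from 'react';

const Counter = () => {
    // useState returns the current value and a setter; calling the setter triggers a re-render
    const [count, setCount] = useState(0);

    return (
        <button onClick={() => setCount(count + 1)}>
            Clicked {count} times
        </button>
    );
};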

Concepts

Once they understand the basics, it’s time to focus on advanced React concepts. State management is an important part of React, covering both component-level and app-wide state. Learning widely used packages will come in handy; I recommend Redux Toolkit because it’s easy to learn yet extremely extensible. It works well for both big and small projects and supports everything from simple to complex state management.

Now might be a great time to point out the key differences between React and React Native. They are very similar with a few minor adjustments:

  • Layout: React uses HTML tags; React Native uses “core components” (View instead of div, for example).
  • Styling: React uses CSS; React Native uses style objects.
  • X/Y coordinate planes: React defaults to flex direction: row; React Native defaults to flex direction: column.
  • Navigation: React uses URLs; React Native uses routes (via react-navigation, for example).
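To make the layout and styling differences concrete, here is a minimal React Native sketch using core components and a style object; the component is hypothetical.

import { StyleSheet, Text, View } from 'react-native';

// View and Text replace div and span; styles are plain JavaScript objects, not CSS
const Card = () => (
    <View style={styles.card}>
        <Text>I'm rendered with core components</Text>
    </View>
);

const styles = StyleSheet.create({
    card: {
        flexDirection: 'column', // the React Native default direction
        padding: 16,
    },
});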

Tic-Tac-Toe

I would follow the React concepts with an example project. This lets the developer see how a project is structured and how to code within the framework. Tic-Tac-Toe is a great project for a new React developer to try to confirm they understand the basic concepts.

Debugging

Debugging in Chrome is extremely useful for reviewing console output and other logging when tracking down defects. The style inspector in the browser’s developer tools is another essential aid for React, letting you see how styles are applied to different elements. For React Native, the documentation contains links to helpful debugging tools.

Project Work

Assign the new React developer low-level bugs or feature enhancements to tackle. Closely monitoring their progress via pair programming has been extremely beneficial in my experience. It gives them the opportunity to ask real-time questions, and the experienced developer can offer guidance. It also provides an opportunity to correct mistakes or bad practices before they become ingrained. Merge requests should be reviewed together before approval to ensure code quality.

In Closing

These tips and tools will give a new React or React Native developer the skills they need to contribute to projects. Naturally, the transition to React Native will be much smoother for a developer already familiar with React, but any developer comfortable with JavaScript and HTML should be able to pick up both quickly.

Thanks for your time and I wish you the best of luck with onboarding your new developer onto your project!

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Engineering a Healthcare Analytics Center of Excellence (ACoE): A Strategic Framework for Innovation

In today’s rapidly evolving healthcare landscape, artificial intelligence (AI) and generative AI are no longer just buzzwords – they’re transformative technologies reshaping how we deliver care, manage operations, and drive innovation. As healthcare organizations navigate this complex technological frontier, establishing an Analytics Center of Excellence (ACoE) focused on AI and generative AI has become crucial for sustainable success and competitive advantage.

The Evolution of Analytics in Healthcare

Healthcare organizations are sitting on vast treasures of data – from electronic health records and medical imaging to claims data and operational metrics. However, the real challenge lies not in data collection but in transforming this data into actionable insights that drive better patient outcomes and operational efficiency. This is where an AI-focused ACoE becomes invaluable.

Core Components of an AI-Driven Healthcare ACoE

1. People: Building a Multidisciplinary Team of Experts

The foundation of any successful ACoE is its people. For healthcare AI initiatives, the team should include:

  • Clinical AI Specialists: Healthcare professionals with deep domain knowledge and AI expertise
  • Data Scientists & ML Engineers: Experts in developing and deploying AI/ML models
  • Healthcare Data Engineers: Specialists in healthcare data architecture and integration
  • Clinical Subject Matter Experts: Physicians, nurses, and healthcare practitioners
  • Ethics & Compliance Officers: Experts in healthcare regulations and AI ethics
  • Business Analysts: Professionals who understand healthcare operations and analytics
  • Change Management Specialists: Experts in driving organizational adoption
  • UI/UX Designers: Specialists in creating intuitive healthcare interfaces

2. Processes: Establishing Robust Frameworks

The ACoE should implement clear processes aligned with the PACE framework:

Policies:

  • Data governance and privacy frameworks (HIPAA, GDPR, etc.)
  • AI model development and validation protocols
  • Clinical validation procedures
  • Ethical AI guidelines
  • Regulatory compliance processes

Advocacy:

  • Stakeholder engagement programs
  • Clinical adoption initiatives
  • Training and education programs
  • Internal communication strategies
  • External partnership management

Controls:

  • Model risk assessment frameworks
  • Clinical outcome validation
  • Performance monitoring systems
  • Quality assurance protocols
  • Audit mechanisms

Enablement:

  • Resource allocation frameworks
  • Technology adoption protocols
  • Innovation pipeline management
  • Knowledge sharing systems
  • Collaboration platforms

3. Technology: Implementing a Robust Technical Infrastructure

A well-designed technical foundation for the ACoE should include:

Core Infrastructure:

  • Cloud computing platforms (with healthcare-specific security features)
  • Healthcare-specific AI/ML platforms
  • Data lakes and warehouses optimized for healthcare data
  • Model development and deployment platforms
  • Integration engines for healthcare systems

AI/ML Capabilities:

  • Natural Language Processing for clinical documentation
  • Computer Vision for medical imaging
  • Predictive analytics for patient outcomes
  • Generative AI for medical research and content creation
  • Real-time analytics for operational efficiency

Security & Compliance:

  • End-to-end encryption
  • Access control systems
  • Audit logging mechanisms
  • Compliance monitoring tools
  • Privacy-preserving AI techniques

4. Economic Evaluation: Measuring Financial Impact

The ACoE should establish clear metrics for measuring the economic impact of the initiative:

Cost Metrics:

  • Implementation costs
  • Operational expenses
  • Training and development costs
  • Infrastructure investments
  • Licensing and maintenance fees

Benefit Metrics:

  • Utilization of health services (e.g., reduced ER and acute inpatient utilization for chronic conditions)
  • Revenue enhancement
  • Cost reduction
  • Efficiency gains (e.g., faster triage and patient discharge times, shorter waiting times)
  • Quality improvements
  • Market share growth

5. Key Performance Indicators (KPIs)

Establish comprehensive KPIs across multiple dimensions:

Clinical Impact:

  • Patient outcome improvements
  • Reduction in medical errors
  • Length of stay optimization
  • Readmission rate reduction
  • Clinical decision support effectiveness

Operational Efficiency:

  • Process automation rates
  • Resource utilization
  • Workflow optimization
  • Staff productivity
  • Cost per patient

Innovation Metrics:

  • Number of AI models deployed
  • Model accuracy and performance
  • Time to deployment
  • Innovation pipeline health
  • Research publications and patents

User Adoption:

  • System utilization rates
  • User satisfaction scores
  • Training completion rates
  • Feature adoption metrics
  • Feedback implementation rate

6. Outcomes: Delivering Measurable Results

Focus on achieving and documenting concrete outcomes:

Patient Care:

  • Improved diagnostic accuracy
  • Enhanced treatment planning
  • Better patient and clinician engagement
  • Reduced medical errors
  • Improved patient and provider satisfaction

Operational Excellence:

  • Streamlined workflows
  • Reduced administrative burden
  • Better resource allocation
  • Improved cost management
  • Enhanced regulatory compliance

Innovation Leadership:

  • New AI-driven solutions
  • Research contributions
  • Industry recognition
  • Competitive advantage
  • Market leadership

Implementation Roadmap

1. Foundation Phase (0-6 months)

  • Establish governance structure
  • Build core team
  • Define initial use cases
  • Set up basic infrastructure

2. Development Phase (6-12 months)

  • Implement initial AI projects
  • Develop training programs
  • Create documentation frameworks
  • Establish monitoring systems

3. Scaling Phase (12-24 months)

  • Expand use cases
  • Enhance capabilities
  • Optimize processes
  • Measure and adjust

Ensuring Success: Critical Success Factors

1. Executive Sponsorship

  • Clear leadership support
  • Resource commitment
  • Strategic alignment
  • Change management

2. Stakeholder Engagement

  • Clinical staff involvement
  • IT team collaboration
  • Patient feedback
  • Partner participation

3. Continuous Learning

  • Regular training
  • Knowledge sharing
  • Best practice updates
  • Industry monitoring

Conclusion

Building an AI-focused Analytics Center of Excellence in healthcare is a complex but rewarding journey. Success requires careful attention to people, processes, technology, and outcomes. By following this comprehensive framework and maintaining a steadfast focus on delivering value, healthcare organizations can build an ACoE that drives innovation, improves patient care, and creates sustainable competitive advantage.

The future of healthcare lies in our ability to harness the power of AI and analytics effectively. A well-designed ACoE serves as a scalable and flexible foundation for this transformation, enabling organizations to compete on analytics and thrive in an increasingly data-driven healthcare landscape.

Prospective Developments in API and APIGEE Management: A Look Ahead for the Next Five Years

Application programming interfaces, or APIs, are crucial to the ever-changing digital transformation landscape because they enable businesses to interact with their data and services promptly and effectively. Effective administration is therefore necessary to guarantee that these APIs operate as intended, remain secure, and offer the intended advantages. This is where Apigee, Google Cloud’s premier API management solution, is helpful.

What is Apigee?

Apigee is an excellent tool for businesses wanting to manage their APIs smoothly. It simplifies the process of creating, scaling, securing, and deploying APIs, making developers’ work easier. One of Apigee’s best features is its flexibility—it can manage both external APIs for third-party access and internal APIs for company use, making it suitable for companies of all sizes. Apigee also works well with security layers like Nginx, which adds a layer of authentication between Apigee and backend systems. This flexibility and security make Apigee a reliable and easy-to-use platform for managing APIs.

What is Gemini AI?

Gemini AI is an advanced artificial intelligence tool that enhances the management and functionality of APIs. Think of it as a smart assistant that helps automate tasks, answer questions, and improve security for API systems like Apigee. For example, if a developer needs help setting up an API, Gemini AI can guide them with instructions, formats, and even create new APIs based on simple language input. It can also answer common user questions or handle customer inquiries automatically, making the whole process faster and more efficient. Essentially, Gemini AI brings intelligence and automation to API management, helping businesses run their systems smoothly and securely.

Why Should Consumers Opt for Gemini AI with Apigee?

Consumers should choose Gemini AI with Apigee because it offers more innovative, faster, and more secure API management. It also brings security, efficiency, and ease of use to API management, making it a valuable choice for businesses that want to streamline their operations and ensure their APIs are fast, reliable, and secure. Here are some key benefits: Enhanced Security, Faster Development, and Time-Saving Automation.

Below is the flow diagram for Prospective Developments in APIGEE.

Flow diagram: prospective developments in Apigee


Greater Emphasis on API Security

  • Zero Trust Security: The Zero Trust approach is founded on the principle of “never trust, always verify”: no device or user should ever be presumed trustworthy, whether or not it is connected to the network. Under this architecture, every request for resource access must undergo thorough verification.
  • Zero Trust Models: APIs will increasingly adopt zero-trust security principles, ensuring no entity is trusted by default. The future of Zero-Trust in Apigee will likely focus on increasing the security and flexibility of API management through tighter integration with identity management, real-time monitoring, and advanced threat protection technologies.
  • Enhanced Data Encryption: Future developments might include more substantial data encryption capabilities, both in transit and at rest, to protect sensitive information in compliance with Zero Trust principles.



Resiliency and Fault Tolerance

 The future of resiliency and fault tolerance in Apigee will likely involve advancements and innovations driven by evolving technological trends and user needs. Here are some key areas where we can expect Apigee to enhance its resiliency and fault tolerance capabilities.


  • Automated Failover: Future iterations of Apigee will likely have improved automated failover features, guaranteeing that traffic is redirected as quickly as possible in case of delays or outages. More advanced failure detection and failover methods could be a part of this.
  • Adaptive Traffic Routing: Future updates could include more dynamic and intelligent traffic management features. This might involve adaptive routing based on real-time performance metrics, enabling more responsive adjustments to traffic patterns and load distribution.
  • Flexible API Gateway Configurations: Future enhancements could provide more flexibility in configuring API gateways to better handle different fault scenarios. This includes custom policies for fault tolerance, enhanced error handling, and more configurable redundancy options.

Gemini AI with Apigee

The integration of Gemini AI with Apigee has the potential to significantly improve API administration by making it more intelligent, secure, and usable. By adopting these AI capabilities, organizations can anticipate improved security, more effective operations, and a better overall experience for users and developers, and the integration may open the door to further breakthroughs as AI and API management technologies evolve. If the API specifications currently available in API Hub do not satisfy your needs, you can use Gemini to create a new one simply by stating your requirements in plain English, which saves considerable time in development and review cycles.
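Outside of the built-in integration, the same idea can be sketched against the Gemini API directly. The example below is a hypothetical sketch using the @google/generative-ai Node package, not Apigee’s own tooling; the model name and prompt are placeholders.

import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? '');

// Asks Gemini to draft an OpenAPI spec from plain-English requirements
async function draftOpenApiSpec(requirements: string): Promise<string> {
    const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' }); // placeholder model name
    const result = await model.generateContent(
        `Draft an OpenAPI 3.0 specification in YAML for the following API: ${requirements}`
    );
    return result.response.text();
}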

While adding policies in Apigee, Gemini AI can surface the relevant policy documentation in parallel and guide you through the formats the policies use. We can also automate query handling, chatbot-style, with Gemini AI, and use it to answer questions about the APIs available on the Apigee portal.

If an integration is already in use, we can have Gemini AI accept inquiries from customers or clients and automate responses to the most frequently asked questions. Additionally, Gemini AI can respond to customers until our specialists become available.


Overview

Apigee, Google Cloud’s API management platform, plays a key role in digital transformation by securely and flexibly connecting businesses with data and services. Future advancements focus on stronger security with a “Zero Trust” approach, improved resilience through automated failover and adaptive traffic routing, and enhanced flexibility in API gateway settings. Integration with Gemini AI will make Apigee smarter, enabling automated support, policy guidance, API creation, streamlining development, and improving customer service.

Extending the Capabilities of Your Development Team with Visual Studio Code Extensions

Introduction

Visual Studio Code (VS Code) has become a ubiquitous tool in the software development world, prized for its speed, versatility, and extensive customization options. At its heart, VS Code is a lightweight, open-source code editor that supports a vast ecosystem of extensions. These extensions are the key to unlocking the true potential of VS Code, transforming it from a simple editor into a powerful, tailored IDE (Integrated Development Environment).

This blog post will explore the world of VS Code extensions, focusing on how they can enhance your development team’s productivity, code quality, and overall efficiency. We’ll cover everything from selecting the right extensions to managing them effectively and even creating your own custom extensions to meet specific needs.

What are Visual Studio Code Extensions?

Extensions are essentially plugins that add new features and capabilities to VS Code. They can range from simple syntax highlighting and code completion tools to more complex features like debuggers, linters, and integration with external services. The Visual Studio Code Marketplace hosts thousands of extensions, catering to virtually every programming language, framework, and development workflow imaginable.

Popular examples include Prettier for automatic code formatting, ESLint for identifying and fixing code errors, and Live Share for real-time collaborative coding.

Why Use Visual Studio Code Extensions?

The benefits of using VS Code extensions are numerous and can significantly impact your development team’s performance.

  1. Improve Code Quality: Extensions like ESLint and JSHint help enforce coding standards and identify potential errors early in the development process. This leads to more robust, maintainable, and bug-free code.
  2. Boost Productivity: Extensions like Auto Close Tag and IntelliCode automate repetitive tasks, provide intelligent code completion, and streamline your workflow. This allows developers to focus on solving complex problems rather than getting bogged down in tedious tasks.
  3. Enhance Collaboration: Extensions like Live Share enable real-time collaboration, making it easier for team members to review code, pair program, and troubleshoot issues together, regardless of their physical location.
  4. Customize Your Workflow: VS Code’s flexibility allows you to tailor your development environment to your specific needs and preferences. Extensions like Bracket Pair Colorizer and custom themes can enhance readability and create a more comfortable and efficient working environment.
  5. Stay Current: Extensions provide support for the latest technologies and frameworks, ensuring that your team can quickly adapt to new developments in the industry and leverage the best tools for the job.
  6. Save Time: By automating common tasks and providing intelligent assistance, extensions like Path Intellisense can significantly reduce the amount of time spent on mundane tasks, freeing up more time for creative problem-solving and innovation.
  7. Ensure Consistency: Extensions like EditorConfig help enforce coding standards and best practices across your team, ensuring that everyone is following the same guidelines and producing consistent, maintainable code.
  8. Enhance Debugging: Powerful debugging extensions like Debugger for Java provide advanced debugging capabilities, making it easier to identify and resolve issues quickly and efficiently.

Managing IDE Tools for Mature Software Development Teams

As software development teams grow and projects become more complex, managing IDE tools effectively becomes crucial. A well-managed IDE environment can significantly impact a team’s ability to deliver high-quality software on time and within budget.

  1. Standardization: Ensuring that all team members use the same tools and configurations reduces discrepancies, improves collaboration, and simplifies onboarding for new team members. Standardized extensions help maintain code quality and consistency, especially in larger teams where diverse setups can lead to confusion and inefficiencies.
  2. Efficiency: Streamlining the setup process for new team members allows them to get up to speed quickly. Automated setup scripts can install all necessary extensions and configurations in one go, saving time and reducing the risk of errors.
  3. Quality Control: Enforcing coding standards and best practices across the team is essential for maintaining code quality. Extensions like SonarLint can continuously analyze code quality, catching issues early and preventing bugs from making their way into production.
  4. Scalability: As your team evolves and adopts new technologies, managing IDE tools effectively facilitates the integration of new languages, frameworks, and tools. This ensures that your team can quickly adapt to new developments and leverage the best tools for the job.
  5. Security: Keeping all tools and extensions up-to-date and secure is paramount, especially for teams working on sensitive or high-stakes projects. Regularly updating extensions prevents security issues and ensures access to the latest features and security patches.

Best Practices for Managing VS Code Extensions in a Team

Effectively managing VS Code extensions within a team requires a strategic approach. Here are some best practices to consider:

  1. Establish an Approved Extension List: Create and maintain a list of extensions that are approved for use by the team. This ensures that everyone is using the same core tools and configurations, reducing inconsistencies and improving collaboration. Consider using a shared document or a dedicated tool to manage this list.
  2. Automate Installation and Configuration: Use tools like Visual Studio Code Settings Sync or custom scripts to automate the installation and configuration of extensions and settings for all team members. This ensures that everyone has the same setup without manual intervention, saving time and reducing the risk of errors. A minimal example of such a script is sketched after this list.
  3. Implement Regular Audits and Updates: Regularly review and update the list of approved extensions to add new tools, remove outdated ones, and ensure that all extensions are up-to-date with the latest security patches. This helps keep your team current with the latest developments and minimizes security risks.
  4. Provide Training and Documentation: Offer training and documentation on the approved extensions and best practices for using them. This helps ensure that all team members are proficient in using the tools and can leverage them effectively.
  5. Encourage Feedback and Collaboration: Encourage team members to provide feedback on the approved extensions and suggest new tools that could benefit the team. This fosters a culture of continuous improvement and ensures that the team is always using the best tools for the job.
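As a sketch of the “custom scripts” approach from item 2, the following Node script installs an approved extension list through the code CLI. It assumes the code command is on the PATH; the extension IDs are only examples of what an approved list might contain.

import { execFileSync } from 'child_process';

// Example approved list; replace with your team's own extension IDs
const approvedExtensions = [
    'dbaeumer.vscode-eslint',
    'esbenp.prettier-vscode',
    'editorconfig.editorconfig',
];

for (const id of approvedExtensions) {
    // --force installs the extension or updates it if it is already present
    execFileSync('code', ['--install-extension', id, '--force'], { stdio: 'inherit' });
}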

Security Considerations for VS Code Extensions

While VS Code extensions offer numerous benefits, they can also introduce security risks if not managed properly. It’s crucial to be aware of these risks and take steps to mitigate them.

  1. Verify the Source: Only install extensions from trusted sources, such as the Visual Studio Code Marketplace. Avoid downloading extensions from unknown or unverified sources, as they may contain malware or other malicious code.
  2. Review Permissions: Carefully review the permissions requested by extensions before installing them. Be cautious of extensions that request excessive permissions or access to sensitive data, as they may be attempting to compromise your security.
  3. Keep Extensions Updated: Regularly update your extensions to ensure that you have the latest security patches and bug fixes. Outdated extensions can be vulnerable to security exploits, so it’s important to keep them up-to-date.
  4. Use Security Scanning Tools: Consider using security scanning tools to automatically identify and assess potential security vulnerabilities in your VS Code extensions. These tools can help you proactively identify and address security risks before they can be exploited.

Creating Custom Visual Studio Code Extensions

In some cases, existing extensions may not fully meet your team’s specific needs. Creating custom VS Code extensions can be a powerful way to add proprietary capabilities to your IDE and tailor it to your unique workflow. One exciting area is integrating AI Chatbots directly into VS Code for code generation, documentation, and more.

  1. Identify the Need: Start by identifying the specific functionality that your team requires. This could be anything from custom code snippets and templates to integrations with internal tools and services. For this example, we’ll create an extension that allows you to highlight code, right-click, and generate documentation using a custom prompt sent to an AI Chatbot.

  2. Learn the Basics: Familiarize yourself with the Visual Studio Code Extension API and the tools required to develop extensions. The API documentation provides comprehensive guides and examples to help you get started.

  3. Set Up Your Development Environment: Install the necessary tools, such as Node.js and Yeoman, to create and test your extensions. The Yeoman generator for Visual Studio Code extensions can help you quickly scaffold a new project.

  4. Develop Your Extension: Write the code for your extension, leveraging the Visual Studio Code Extension API to add the desired functionality. Be sure to follow best practices for coding and testing to ensure that your extension is reliable, maintainable, and secure.

  5. Test Thoroughly: Test your extension in various scenarios to ensure that it works as expected and doesn’t introduce any new issues. This includes testing with different configurations, environments, and user roles.

  6. Distribute Your Extension: Once your extension is ready, you can distribute it to your team. You can either publish it to the Visual Studio Code Marketplace or share it privately within your organization. Consider using a private extension registry to manage and distribute your custom extensions securely.

Best Practices for Extension Development

Developing robust and efficient VS Code extensions requires careful attention to best practices. Here are some key considerations:

  • Resource Management:

    • Dispose of Resources: Properly dispose of any resources your extension creates, such as disposables, subscriptions, and timers. Use the context.subscriptions.push() method to register disposables, which will be automatically disposed of when the extension is deactivated.
    • Avoid Memory Leaks: Be mindful of memory usage, especially when dealing with large files or data sets. Use techniques like streaming and pagination to process data in smaller chunks.
    • Clean Up on Deactivation: Implement the deactivate() function to clean up any resources that need to be explicitly released when the extension is deactivated.
  • Asynchronous Operations:

    • Use Async/Await: Use async/await to handle asynchronous operations in a clean and readable way. This makes your code easier to understand and maintain.
    • Handle Errors: Properly handle errors in asynchronous operations using try/catch blocks. Log errors and provide informative messages to the user.
    • Avoid Blocking the UI: Ensure that long-running operations are performed in the background to avoid blocking the VS Code UI. Use vscode.window.withProgress to provide feedback to the user during long operations.
  • Security:

    • Validate User Input: Sanitize and validate any user input to prevent security vulnerabilities like code injection and cross-site scripting (XSS).
    • Secure API Keys: Store API keys and other sensitive information securely. Use VS Code’s secret storage API to encrypt and protect sensitive data.
    • Limit Permissions: Request only the necessary permissions for your extension. Avoid requesting excessive permissions that could compromise user security.
  • Performance:

    • Optimize Code: Optimize your code for performance. Use efficient algorithms and data structures to minimize execution time.
    • Lazy Load Resources: Load resources only when they are needed. This can improve the startup time of your extension.
    • Cache Data: Cache frequently accessed data to reduce the number of API calls and improve performance.
  • Code Quality:

    • Follow Coding Standards: Adhere to established coding standards and best practices. This makes your code more readable, maintainable, and less prone to errors.
    • Write Unit Tests: Write unit tests to ensure that your code is working correctly. This helps you catch bugs early and prevent regressions.
    • Use a Linter: Use a linter to automatically identify and fix code style issues. This helps you maintain a consistent code style across your project.
  • User Experience:

    • Provide Clear Feedback: Provide clear and informative feedback to the user. Use status bar messages, progress bars, and error messages to keep the user informed about what’s happening.
    • Respect User Settings: Respect user settings and preferences. Allow users to customize the behavior of your extension to suit their needs.
    • Keep it Simple: Keep your extension simple and easy to use. Avoid adding unnecessary features that could clutter the UI and confuse the user.

By following these best practices, you can develop robust, efficient, and user-friendly VS Code extensions that enhance the development experience for yourself and others.

Example: Creating an AI Chatbot Integration for Documentation Generation

Let’s walk through creating a custom VS Code extension that integrates with an AI Chatbot to generate documentation for selected code. This example assumes you have access to an AI Chatbot API (like OpenAI’s GPT models). You’ll need an API key. Remember to handle your API key securely and do not commit it to your repository.

1. Scaffold the Extension:

First, use the Yeoman generator to create a new extension project:

yo code

2. Modify the Extension Code:

Open the generated src/extension.ts file and add the following code to create a command that sends selected code to the AI Chatbot and displays the generated documentation:

import * as vscode from 'vscode';
import axios from 'axios';

export function activate(context: vscode.ExtensionContext) {
    let disposable = vscode.commands.registerCommand('extension.generateDocs', async () => {
        const editor = vscode.window.activeTextEditor;
        if (editor) {
            const selection = editor.selection;
            const selectedText = editor.document.getText(selection);

            const apiKey = 'YOUR_API_KEY'; // Replace with your actual API key
            const apiUrl = 'https://api.openai.com/v1/engines/davinci-codex/completions';

            try {
                const response = await axios.post(apiUrl, {
                    prompt: `Generate documentation for the following code:\n\n${selectedText}`,
                    max_tokens: 150,
                    n: 1,
                    stop: null,
                    temperature: 0.5,
                }, {
                    headers: {
                        'Content-Type': 'application/json',
                        'Authorization': `Bearer ${apiKey}`,
                    },
                });

                const generatedDocs = response.data.choices[0].text;
                vscode.window.showInformationMessage('Generated Documentation:\n' + generatedDocs);
            } catch (error) {
                vscode.window.showErrorMessage('Error generating documentation: ' + error.message);
            }
        }
    });

    context.subscriptions.push(disposable);
}

export function deactivate() {}

3. Update package.json:

Add the following command configuration to the contributes section of your package.json file:

"contributes": {
    "commands": [
        {
            "command": "extension.generateDocs",
            "title": "Generate Documentation"
        }
    ]
}

4. Run and Test the Extension:

Press F5 to open a new VS Code window with your extension loaded. Highlight some code, right-click, and select “Generate Documentation” to see the AI-generated documentation.

Packaging and Distributing Your Custom Extension

Once you’ve developed and tested your custom VS Code extension, you’ll likely want to share it with your team or the wider community. Here’s how to package and distribute your extension, including options for local and private distribution:

1. Package the Extension:

VS Code uses the vsce (Visual Studio Code Extensions) tool to package extensions. If you don’t have it installed globally, install it using npm:

npm install -g vsce

Navigate to your extension’s root directory and run the following command to package your extension:

vsce package

This will create a .vsix file, which is the packaged extension.

2. Publish to the Visual Studio Code Marketplace:

To publish your extension to the Visual Studio Code Marketplace, you’ll need to create a publisher account and obtain a Personal Access Token (PAT). Follow the instructions on the Visual Studio Code Marketplace to set up your publisher account and generate a PAT.

Once you have your PAT, run the following command to publish your extension:

vsce publish

You’ll be prompted to enter your publisher name and PAT. After successful authentication, your extension will be published to the marketplace.

3. Share Privately:

If you prefer to share your extension privately within your organization, you can distribute the .vsix file directly to your team members. They can install the extension by running the following command in VS Code:

code --install-extension your-extension.vsix

Alternatively, you can set up a private extension registry using tools like Azure DevOps Artifacts or npm Enterprise to manage and distribute your custom extensions securely.

Conclusion

Visual Studio Code extensions are a powerful tool for enhancing the capabilities of your development environment and improving your team’s productivity, code quality, and overall efficiency. By carefully selecting, managing, and securing your extensions, you can create a tailored IDE that meets your specific needs and helps your team deliver high-quality software on time and within budget. Whether you’re using existing extensions from the marketplace or creating your own custom solutions, the possibilities are endless. Embrace the power of VS Code extensions and unlock the full potential of your development team.

AWS Secrets Manager – A Secure Solution for Protecting Your Data

Objective

If you are looking for a solution to securely store secrets like DB credentials, API keys, tokens, and passwords, AWS Secrets Manager is the service that comes to your rescue. Keeping secrets as plain text in your code is highly risky, so storing them in AWS Secrets Manager helps in the ways described below.

AWS Secret Manager is a fully managed service that can store and manage sensitive information. It simplifies secret handling by enabling the auto-rotation of secrets to reduce the risk of compromise, monitoring the secrets for compliance, and reducing the manual effort of updating the credentials in the application after rotation.

Essential Features of AWS Secret Manager


  • Security: Secrets are encrypted using encryption keys we can manage through AWS KMS.
  • Rotation schedule: Enable rotation of credentials through scheduling to replace long-term with short-term ones.
  • Authentication and Access control: Using AWS IAM, we can control access to the secrets, control lambda rotation functions, and permissions to replicate the secrets.
  • Monitor secrets for compliance: AWS Config rules can be used to check whether secrets align with internal security and compliance standards, such as HIPAA, PCI, ISO, AICPA SOC, FedRAMP, DoD, IRAP, and OSPAR.
  • Audit and monitoring: We can use other AWS services, such as Cloud Trail for auditing and Cloud Watch for monitoring.
  • Rollback through versioning: If needed, the secret can be reverted to the previous version by moving the labels attached to that secret.
  • Pay as you go: Charged based on the number of secrets managed through the Secret manager.
  • Integration with other AWS services: Integrating with other AWS services, such as EC2, Lambda, RDS, etc., eliminates the need to hard code secrets.

AWS Secret Manager Pricing

At the time of publishing, AWS Secrets Manager pricing is as follows; it may be revised in the future.
  • Secret storage: $0.40 per secret per month. Storage is billed monthly and prorated for secrets stored less than a full month.
  • API calls: $0.05 per 10,000 API calls. Charges apply to API interactions such as managing and retrieving secrets.

Creating a Secret

Let us dig deeper into the process of creating secrets.

  1. Log in to the AWS Secrets Manager console and select the “store a new secret” option: https://console.aws.amazon.com/secretsmanager/.
  2. On the Choose secret type page,
    1. For Secret type, select the type of database secret that you want to store:
    2. For Credentials, input the database credentials that were previously hardcoded.
    3. For the Encryption key, choose AWS/Secrets Manager. This encryption key service is free to use.
    4. For the Database field, choose your database.
    5. Then click Next.
  3. On the Configure secret page,
    1. Provide a descriptive secret name and description.
    2. In the Resource permissions field, choose Edit permissions. Provide the policy that allows RoleToRetrieveSecretAtRuntime and Save.
    3. Then, click Next.
  4. On the Configure rotation page,
    1. Select the schedule on which you want the secret to be rotated.
    2. Click Next.
  5. On the Review page, review the details, and then Store.

Output

The secret is now created and appears in the Secrets Manager console.

We can now update the application code to fetch the secret from Secrets Manager. To do this, we remove the hardcoded credentials and, depending on the language, add a call to a function or method that retrieves the secret stored here. We can also adjust the rotation strategy, versioning, monitoring, and so on to match our requirements.
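As a minimal sketch of that retrieval step, assuming the application runs on Node with the AWS SDK for JavaScript v3, the call might look like the following; the region and secret name are placeholders.

import {
    GetSecretValueCommand,
    SecretsManagerClient,
} from '@aws-sdk/client-secrets-manager';

const client = new SecretsManagerClient({ region: 'us-east-1' }); // placeholder region

// Fetches the secret at runtime instead of reading hardcoded credentials
export async function getDbCredentials(secretName: string) {
    const response = await client.send(
        new GetSecretValueCommand({ SecretId: secretName })
    );
    // Secrets stored as key-value pairs come back as a JSON string
    return JSON.parse(response.SecretString ?? '{}');
}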

Secret Rotation Strategy

Picture8

  • Single user – This strategy updates the credentials for one user in one secret. During rotation, open connections are not dropped, but there is a brief window in which the database may deny calls that use the newly rotated credentials; this low risk can be mitigated with retry strategies. Once rotation completes, all new calls use the rotated credentials.
    • Use case – This strategy can be used for one-time or interactive users.
  • Alternating users – This method updates secret values for two users in one secret. We create the first user, and during the first rotation the rotation function clones it to create a second user. On each subsequent rotation, the function alternates which user’s password it updates, so the application always has a valid set of credentials, even during rotation.
    • Uses case – This is good for systems that require high availability.

Versioning of Secrets

A secret consists of the secret value and metadata. To store multiple values in one secret, we can use JSON key-value pairs. A secret has versions that hold copies of the encrypted secret value, and AWS uses three staging labels to track them:

  • AWSCURRENT – to store current secret value.
  • AWSPREVIOUS – to hold the previous version.
  • AWSPENDING – to hold pending value during rotation.

Custom labeling of versions is also possible. AWS never removes labeled versions of secrets, but unlabeled versions are considered deprecated and may be removed at any time.

Monitoring Secrets in AWS Secret Manager

Secrets stored in AWS Secrets Manager can be monitored with other AWS services, as described below.

  • Using CloudTrail – CloudTrail stores all API calls to Secrets Manager as events, including secret rotation and version deletion.
  • Monitoring using CloudWatch – We can track the number of secrets in the account, secrets marked for deletion, and other metrics, and we can set alarms on metric changes.

Conclusion

AWS Secrets Manager offers a secure, automated, scalable solution for managing sensitive data and credentials. It reduces the risk of secret exposure and helps improve application security with minimal manual intervention. Adopting best practices around secret management can ensure compliance and minimize vulnerabilities in your applications.

 

]]>
https://blogs.perficient.com/2025/02/05/aws-secrets-manager-a-secure-solution-for-protecting-your-data/feed/ 0 376895
An Interview with “Tech Humanist” Kate O’Neill https://blogs.perficient.com/2025/02/05/human-friendly-tech-decisions/ https://blogs.perficient.com/2025/02/05/human-friendly-tech-decisions/#respond Wed, 05 Feb 2025 16:08:34 +0000 https://blogs.perficient.com/?p=376146

What if Tech Could Be More Human?

In this episode of “What If? So What?” Jim talks with Kate O’Neill about making human-friendly tech decisions.

In a world that’s moving faster than ever, how can leaders make technology decisions that benefit both the business and the humans they serve? That’s the question Kate O’Neill, tech futurist and author of “What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast,” explores in the latest episode of “What If? So What?”

Why Human Experience Is Bigger than Customer Experience

Kate highlights the difference between customer experience and human experience, pointing out that people are more than the roles they play as buyers. Decisions that optimize efficiency, like Amazon Go’s no-touch payment model, may seem like progress—but what happens when those innovations remove opportunities for connection? Leaders must consider the broader impacts of their choices on society and human behavior.

Future-Proofing Isn’t Enough: Be Future-Ready

The future isn’t a fixed path, Kate explains. It’s a prism of possibilities, shaped by the decisions we make today. Instead of trying to “future-proof” their businesses, leaders should prepare for multiple futures by asking two key questions: “What’s most probable?” And “What’s most preferred?” The gap between the two reveals the work needed to shape tomorrow.

A Call for Meaningful Leadership

Kate’s message to leaders is simple yet profound: purpose and meaning matter more than ever. Purpose isn’t just a buzzword—it’s the shape that meaning takes in business. Leaders who focus on what matters now and what will matter next can create technology that drives innovation and serves humanity.

Learn More in Kate’s Book

For more actionable insights and thought-provoking strategies, check out Kate’s latest book, “What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast.” Packed with frameworks and tools, it’s a must-read for leaders navigating the intersection of technology and humanity.

Listen now on your favorite podcast platform or visit our website.

Subscribe Where You Listen

Apple | Spotify | Amazon | Overcast

Meet our Guest

Kate O’Neill, CEO of KO Insights, “Tech Humanist” and Author

Kate O'Neill

Kate O’Neill is a digital innovator, chief executive, business writer, and globally recognized speaker widely known as the “Tech Humanist.” She is the founder and CEO of KO Insights, a strategic advisory firm that enhances human experiences at scale through data-driven and AI-led interactions.

Kate has worked with prestigious clients like Google, IBM, Microsoft, and the United Nations, and she was one of the first 100 employees at Netflix. Her groundbreaking insights have been featured in the New York Times, the Wall Street Journal, and WIRED, and she has shared her expertise on NPR and the BBC.

Kate has been honored with numerous awards, including a spot on Thinkers50’s list of the World’s Management Thinkers to Watch. She has written six influential books, including “Tech Humanist,” “A Future So Bright,” and her latest, “What Matters Next.”

Connect with Kate

Meet the Host

Jim Hertzfeld

Jim Hertzfeld is Area Vice President, Strategy for Perficient.

For over two decades, he has worked with clients to convert market insights into real-world digital products and customer experiences that actually grow their business. More than just a strategist, Jim is a pragmatic rebel known for challenging the conventional and turning grand visions into actionable steps. His candid demeanor, sprinkled with a dose of cynical optimism, shapes a narrative that challenges and inspires listeners.

Connect with Jim:

LinkedIn | Perficient

 

 

]]>
https://blogs.perficient.com/2025/02/05/human-friendly-tech-decisions/feed/ 0 376146
Setting Up Virtual WAN (VWAN) in Azure Cloud: A Comprehensive Guide – I https://blogs.perficient.com/2025/02/05/setting-up-azure-vwan/ https://blogs.perficient.com/2025/02/05/setting-up-azure-vwan/#respond Wed, 05 Feb 2025 11:01:41 +0000 https://blogs.perficient.com/?p=376281

As businesses expand their global footprint, the need for a flexible, scalable, and secure networking solution becomes paramount. Enter Azure Virtual WAN (VWAN), a cloud-based offering designed to simplify and centralize network management while ensuring top-notch performance. Let’s dive into what Azure VWAN offers and how to set it up effectively.

What is Azure Virtual WAN (VWAN)?

Azure Virtual WAN, or VWAN, is a cloud-based networking service that delivers secure, seamless, and optimized connectivity across hybrid and multi-cloud environments.

It provides:

I. Flexibility for Dynamic Network Requirements

  • Adaptable Connectivity: Azure VWAN supports various connectivity options, including ExpressRoute, Site-to-Site VPN, and Point-to-Site VPN, ensuring compatibility with diverse environments like on-premises data centers, branch offices, and remote workers.
  • Scale On-Demand: As network requirements grow or change, Azure VWAN allows you to dynamically add or remove connections, integrate new virtual networks (VNets), or scale bandwidth based on traffic needs.
  • Global Reach: Azure VWAN enables connectivity across regions and countries using Microsoft’s extensive global network, ensuring that organizations with distributed operations stay connected.
  • Hybrid and Multi-Cloud Integration: Azure VWAN supports hybrid setups (on-premises + cloud) and integration with other public cloud providers, providing the flexibility to align with business strategies.

II. Improved Management with Centralized Controls

  • Unified Control Plane: Azure VWAN provides a centralized dashboard within the Azure Portal to manage all networking components, such as VNets, branches, VPNs, and ExpressRoute circuits.
  • Simplified Configuration: Automated setup and policy management make deploying new network segments, traffic routing, and security configurations easy.
  • Network Insights: Built-in monitoring and diagnostic tools offer deep visibility into network performance, allowing administrators to quickly identify and resolve issues.
  • Policy Enforcement: Azure VWAN enables consistent policy enforcement across regions and resources, improving governance and compliance with organizational security standards.

III. High Performance Leveraging Microsoft’s Global Backbone Infrastructure

  • Low Latency and High Throughput: Azure VWAN utilizes Microsoft’s global backbone network, known for its reliability and speed, to provide high-performance connectivity across regions and to Azure services.
  • Optimized Traffic Routing: Intelligent routing ensures that traffic takes the most efficient path across the network, reducing latency for applications and end users.
  • Built-in Resilience: Microsoft’s backbone infrastructure includes redundant pathways and fault-tolerant systems, ensuring high availability and minimizing the risk of network downtime.
  • Proximity to End Users: With a global footprint of Azure regions and points of presence (PoPs), Azure VWAN ensures proximity to end users, improving application responsiveness and user experience.

High-level architecture of VWAN

This diagram depicts a high-level architecture of Azure Virtual WAN and its connectivity components.

 

Vwanarchitecture

 

  • HQ/DC (Headquarters/Data Centre): Represents the organization’s primary data center or headquarters hosting critical IT infrastructure and services. Acts as a centralized hub for the organization’s on-premises infrastructure. Typically includes servers, storage systems, and applications that need to communicate with resources in Azure.
  • Branches: Represents the organization’s regional or local office locations. Serves as local hubs for smaller, decentralized operations. Each branch connects to Azure to access cloud-hosted resources, applications, and services and communicates with other branches or HQ/DC. The HQ/DC and branches communicate with each other and Azure resources through the Azure Virtual WAN.
  • Virtual WAN Hub: At the heart of Azure VWAN is the Virtual WAN Hub, a central node that simplifies traffic management between connected networks. This hub acts as the control point for routing and ensures efficient data flow.
  • ExpressRoute: Establishes a private connection between the on-premises network and Azure, bypassing the public internet. It uses BGP for route exchange, ensuring secure and efficient connectivity.
  • VNet Peering: Links Azure Virtual Networks directly, enabling low-latency, high-bandwidth communication.
    • Intra-Region Peering: Connects VNets within the same region.
    • Global Peering: Bridges VNets across different regions.
  • Point-to-Site (P2S) VPN: Ideal for individual users or small teams, this allows devices to securely connect to Azure resources over the internet.
  • Site-to-Site (S2S) VPN: Connects the on-premises network to Azure, enabling secure data exchange between systems.

Benefits of VWAN

  • Scalability: Expand the network effortlessly as the business grows.
  • Cost-Efficiency: Reduce hardware expenses by leveraging cloud-based solutions.
  • Global Reach: Easily connect offices and resources worldwide.
  • Enhanced Performance: Optimize data transfer paths for better reliability and speed.

Setting Up VWAN in Azure

Follow these steps to configure Azure VWAN (a minimal SDK sketch follows the steps):

Step 1: Create a Virtual WAN Resource

  • Log in to the Azure Portal and create a Virtual WAN resource. This serves as the foundation of the network architecture.

Step 2: Configure a Virtual WAN Hub

  • Create the Virtual WAN Hub as the central traffic manager and configure it to meet the company’s needs.

Step 3: Establish Connections

  • Configure VPN Gateways for secure, encrypted connections.
  • Use ExpressRoute for private, high-performance connectivity.

Step 4: Link VNets

  • Create Azure Virtual Networks and link them to the WAN Hub. This integration ensures seamless interaction between resources.
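As a minimal sketch of the same steps, assuming the azure-identity and azure-mgmt-network Python packages and placeholder names for the subscription, resource group, and region, the Virtual WAN and its hub could be created programmatically:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values for illustration only.
subscription_id = "<subscription-id>"
resource_group = "network-rg"
location = "eastus"

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Step 1: create the Virtual WAN resource.
wan = client.virtual_wans.begin_create_or_update(
    resource_group,
    "demo-vwan",
    {"location": location},
).result()

# Step 2: create the Virtual WAN Hub and associate it with the WAN.
hub = client.virtual_hubs.begin_create_or_update(
    resource_group,
    "demo-vwan-hub",
    {
        "location": location,
        "address_prefix": "10.10.0.0/23",
        "virtual_wan": {"id": wan.id},
        "sku": "Standard",
    },
).result()

print(f"Created hub {hub.name} in WAN {wan.name}")

VPN gateways, ExpressRoute circuits, and VNet connections (Steps 3 and 4) attach to the hub in the same begin_create_or_update style; the resource names and address prefix above are assumptions, not recommendations.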

Monitoring and Troubleshooting VWAN

Azure Monitor

Azure Monitor tracks performance, availability, and network health in real time and provides insights into traffic patterns, latency, and resource usage.

Network Watcher

Diagnose network issues with tools like packet capture and connection troubleshooting. Quickly identify and resolve any bottlenecks or disruptions.

Alerts and Logs

Set up alerts for critical issues such as connectivity drops or security breaches. Use detailed logs to analyze network events and maintain robust auditing.

Final Thoughts

Azure VWAN is a powerful tool for businesses looking to unify and optimize their global networking strategy. Organizations can ensure secure, scalable, and efficient connectivity by leveraging features like ExpressRoute, VNet Peering, and VPN Gateways. With the correct setup and monitoring tools, managing complex networks becomes a seamless experience.

]]>
https://blogs.perficient.com/2025/02/05/setting-up-azure-vwan/feed/ 0 376281
Customizing Data Exports: Dynamic Excel Updates with Power Apps, Power Automate, and Office Scripts https://blogs.perficient.com/2025/02/05/customizing-data-exports-dynamic-excel-updates-with-power-apps-power-automate-and-office-scripts/ https://blogs.perficient.com/2025/02/05/customizing-data-exports-dynamic-excel-updates-with-power-apps-power-automate-and-office-scripts/#respond Wed, 05 Feb 2025 06:16:02 +0000 https://blogs.perficient.com/?p=376246

Modern business workflows often require flexible and efficient ways to export, transform, and share data. By combining the capabilities of Power Apps, Power Automate, and Office Scripts, you can create a seamless process to dynamically customize and update Excel files with minimal effort.

This guide demonstrates how to dynamically export data from Power Apps, process it with Power Automate, format it in Excel using Office Scripts, and send the updated file via email. Let’s dive into the details.

This blog demonstrates a practical solution for automating data exports and dynamic reporting in Excel, tailored to users who expect dynamic column selection for report headers. Manual data preparation and formatting can be time-consuming and error-prone in many projects, especially those involving custom reporting.

With the process outlined in this blog, you can:

  • Dynamically select and modify column headers based on user input.
  • Automate the transformation of raw data into a formatted Excel file.
  • Share the final output effortlessly via email.

This solution integrates Power Apps, Power Automate, and Office Scripts to ensure that your reporting process is faster, error-free, and adaptable to changing requirements, saving you significant time and effort.

Exporting Data from Power Apps

Creating a Collection in Power Apps

A collection in Power Apps serves as a temporary data storage container that holds the records you want to process. Here’s how to set it up:

Step 1: Define the DATA Collection

  • Open your Power App and navigate to the screen displaying or managing your data.
  • Use the Collect or ClearCollect function in Power Apps to create a collection named ExportData that holds the required data columns.
  • You can dynamically populate this collection based on user interaction or pre-existing data from a connected source. For example:

Picture1

  • Here, the ExportData collection is populated with a static table of records. You can replace this static data with actual data retrieved from your app’s sources.
  • Tip: Use data connectors like SharePoint, SQL Server, or Dataverse to fetch real-time data and add it to the collection.

Step 2: Define a HeaderName Variable for Column Names

  • To ensure the exported Excel file includes the correct column headers, define a Variable named HeaderName that holds the names of the columns to be included.
Set(HeaderName, ["Name", "Age", "Country"])

This Variable specifies the column headers appearing in the exported Excel file.

Picture2

Pass Data to Power Automate

Once the ExportData collection and HeaderName are set up, pass them as inputs to the Power Automate flow.

Step 1: Add the Flow to Power Apps

  1. Navigate to the Power Automate tab in Power Apps.
  2. Click on + Add Flow and select the flow you created for exporting data to Excel.

Step 2: Trigger the Flow and Send the Data

    • Use the following formula to trigger the flow and pass the data:
CustomizingDataExports.Run(JSON(ExportData), JSON(HeaderName))

Picture3

  • CustomizingDataExports is the Power Automate flow.
  • JSON(ExportData) serializes the collection into JSON text that Power Automate can process.
  • JSON(HeaderName) serializes the HeaderName variable so the column headers can be passed along for the Excel export (an illustrative payload is shown below).
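For reference, with the sample Name/Age/Country headers above, the two arguments passed to the flow would look roughly like this (the ExportData values are illustrative, and property order may differ):

JSON(ExportData):
[
  { "Name": "Alice", "Age": 30, "Country": "USA" },
  { "Name": "Bob", "Age": 27, "Country": "India" }
]

JSON(HeaderName):
[ { "Value": "Name" }, { "Value": "Age" }, { "Value": "Country" } ]

Note that the single-column table created by Set(HeaderName, [...]) serializes each header under a Value property, which is exactly what the Parse JSON schema later in the flow expects.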

Processing Data with Power Automate

Power Automate bridges Power Apps and Excel, enabling seamless data processing, transformation, and sharing. Follow these steps to configure your flow:

1. Receive Inputs

  • Trigger Action: Use the Power Apps trigger to accept two input variables:
    • ExportData: The dataset.
    • HeaderName: The column headers.
  • Add input parameters:
    • Navigate to the trigger action.
    • Click Add an input, select Text type for both variables and label them.

2. Prepare Data

Add two Compose actions to process inputs.

  • Use these expressions:

For ExportData:

json(triggerBody()?['text'])

For HeaderName:

json(triggerBody()?['text_1'])

Add a Parse JSON action to structure the HeaderName input:

Content:

outputs('Compose_-_HeaderName')

Schema:

{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "Value": {
                "type": "string"
            }
        },
        "required": [
            "Value"
        ]
    }
}

Use a Select action to extract the values:

From:

body('Parse_JSON')

Map:

item()['Value']

Picture4

3. Setup Excel Template

Add a Get file content action to fetch a pre-defined Excel template from storage (e.g., SharePoint or OneDrive).

Use a Create file action to save the template as a new file:

Dynamic File Name:

concat(guid(), '.xlsx')

Convert the ExportData to a CSV format:

  • Add a Create CSV Table action:

From:

outputs('Compose_-_ExportData')

Picture5

Formatting Data with Office Scripts

Office Scripts are used to dynamically process and format data in Excel. Here’s how you implement it:

Set up the script

Open Excel and navigate to the “Automate” tab.

Create a new Office Script and paste the following code:

function main(workbook: ExcelScript.Workbook, headersArray: string[], csvData: string) {
  let activeWorksheet = workbook.getWorksheet("Sheet1");
  let csvRows = csvData.split('\n');
  csvRows = csvRows.map(row => row.replace(/\r$/, ''));
  let headerRow = csvRows[0].split(',');
  // Create a mapping of column headers to their indices
  let columnIndexMap: { [key: string]: number } = {};
  for (let i = 0; i < headerRow.length; i++) {
    let header = headerRow[i];
    if (headersArray.includes(header)) {
      columnIndexMap[header] = i;
    }
  }
  // Write the selected headers into the first row of the worksheet
  let range = activeWorksheet.getRangeByIndexes(0, 0, 1, headersArray.length);
  range.setValues([headersArray]);
  // Batch size for inserting data into Excel
  const batchSize = 500;
  let batchData: string[][] = [];
  let columncount = 0;
  // Loop through CSV data and filter/select desired columns
  for (let j = 1; j < csvRows.length; j++) {
    let rowData = parseCSVRow(csvRows[j]);
    let filteredRowData: string[] = [];
    for (let k = 0; k < headersArray.length; k++) {
      let header = headersArray[k];
      let columnIndex = columnIndexMap[header];
      filteredRowData.push(rowData[columnIndex]);
    }
    batchData.push(filteredRowData);
    // Insert data into Excel in batches
    if (batchData.length === batchSize || j === csvRows.length - 1) {
      let startRowIndex = j - batchData.length + 1; // Start this batch on the row after the header row
      let startColIndex = 0;
      let newRowRange = activeWorksheet.getRangeByIndexes(startRowIndex, startColIndex, batchData.length, batchData[0].length);
      newRowRange.setValues(batchData);
      batchData = [];
    }
    columncount=j;
  }
  workbook.addTable(activeWorksheet.getRangeByIndexes(0, 0, columncount, headersArray.length), true).setPredefinedTableStyle("TableStyleLight8");
  activeWorksheet.getRangeByIndexes(0, 0, columncount, headersArray.length).getFormat().autofitColumns();

  // Exit any active named sheet view before finishing
  activeWorksheet.exitActiveNamedSheetView();
}
// Custom CSV parsing function to handle commas within double quotes
function parseCSVRow(row: string): string[] {
  let columns: string[] = [];
  let currentColumn = '';
  let withinQuotes = false;
  for (let i = 0; i < row.length; i++) {
    let char = row[i];
    if (char === '"') {
      withinQuotes = !withinQuotes;
    } else if (char === ',' && !withinQuotes) {
      columns.push(currentColumn);
      currentColumn = '';
    } else {
      currentColumn += char;
    }
  }
  columns.push(currentColumn); // Add the last column
  return columns;
}

Picture6

Integrate with Power Automate

Use the Run script action in Power Automate to execute the Office Script.

Pass the header array and CSV data as parameters.

Picture7

Send the Updated File via Email

Once the Excel file is updated with Office Scripts, you can send it to recipients via Outlook email.

1. Retrieve the Updated File:

  • Add a Get file content action to fetch the updated file.

Use the file path or identifier from the Create file action.

outputs('Create_file')?['body/Id']

Picture8

2. Send an Email (V2):

  • Add the Send an email (V2) action from the Outlook connector.
  • Configure the email:
    • To: Add the recipient’s email dynamically or enter it manually.
    • Subject: Provide a meaningful subject, such as “Custom Data Export File”
    • Body: Add a custom message, including details about the file or process.
    • Attachments:
      • Name: Use a dynamic value
outputs('Create_file')?['body/Name']
        • Content: Pass the output from the Get file content action.
body('Get_file_content_-_Created_File')

Picture9

Integrating the Workflow

  1. Test the entire integration from Power Apps to Power Automate and Office Scripts.
  2. Verify the final Excel file includes the correct headers and data formatting.
  3. Confirm that the updated Excel file is attached to the email and sent to the specified recipients.

Result:

Excel

Picture10

Email

Picture11

How This Solution Saves Time

This approach is tailored for scenarios where users require a dynamic selection of column headers for custom reporting. Instead of spending hours manually formatting data and preparing reports, this solution automates the process end-to-end, ensuring:

  • Accurate data formatting without manual intervention.
  • Quick adaptation to changing requirements (e.g., selecting different report headers).
  • Seamless sharing of reports via email in just a few clicks.

This workflow minimizes errors, accelerates the reporting process, and enhances overall project efficiency by automating repetitive tasks.

Conclusion

You can create robust, dynamic workflows for exporting and transforming data by combining Power Apps, Power Automate, and Office Scripts. This approach saves time, reduces manual effort, and ensures process consistency. Adding email functionality ensures the updated file reaches stakeholders without manual intervention. Whether you’re managing simple data exports or complex transformations, this solution provides a scalable and efficient way to handle Excel data.

]]>
https://blogs.perficient.com/2025/02/05/customizing-data-exports-dynamic-excel-updates-with-power-apps-power-automate-and-office-scripts/feed/ 0 376246
Migrating from MVP to Jetpack Compose: A Step-by-Step Guide for Android Developers https://blogs.perficient.com/2025/02/03/migrating-from-mvp-to-jetpack-compose-a-step-by-step-guide-for-android-developers/ https://blogs.perficient.com/2025/02/03/migrating-from-mvp-to-jetpack-compose-a-step-by-step-guide-for-android-developers/#respond Mon, 03 Feb 2025 15:30:02 +0000 https://blogs.perficient.com/?p=376701

Migrating an Android App from MVP to Jetpack Compose: A Step-by-Step Guide

Jetpack Compose is Android’s modern toolkit for building native UI. It simplifies and accelerates UI development by using a declarative approach, which is a significant shift from the traditional imperative XML-based layouts. If you have an existing Android app written in Kotlin using the MVP (Model-View-Presenter) pattern with XML layouts, fragments, and activities, migrating to Jetpack Compose can bring numerous benefits, including improved developer productivity, reduced boilerplate code, and a more modern UI architecture.

In this article, we’ll walk through the steps to migrate an Android app from MVP with XML layouts to Jetpack Compose. We’ll use a basic News App to explain in detail how to migrate all layers of the app. The app has two screens:

  1. A News List Fragment to display a list of news items.
  2. A News Detail Fragment to show the details of a selected news item.

We’ll start by showing the original MVP implementation, including the Presenters, and then migrate the app to Jetpack Compose step by step. We’ll also add error handling, loading states, and use Kotlin Flow instead of LiveData for a more modern and reactive approach.

1. Understand the Key Differences

Before diving into the migration, it’s essential to understand the key differences between the two approaches:

  • Imperative vs. Declarative UI: XML layouts are imperative, meaning you define the UI structure and then manipulate it programmatically. Jetpack Compose is declarative, meaning you describe what the UI should look like for any given state, and Compose handles the rendering.
  • MVP vs. Compose Architecture: MVP separates the UI logic into Presenters and Views. Jetpack Compose encourages a more reactive and state-driven architecture, often using ViewModel and State Hoisting.
  • Fragments and Activities: In traditional Android development, Fragments and Activities are used to manage UI components. In Jetpack Compose, you can replace most Fragments and Activities with composable functions.

2. Plan the Migration

Migrating an entire app to Jetpack Compose can be a significant undertaking. Here’s a suggested approach:

  1. Start Small: Begin by migrating a single screen or component to Jetpack Compose. This will help you understand the process and identify potential challenges.
  2. Incremental Migration: Jetpack Compose is designed to work alongside traditional Views, so you can migrate your app incrementally. Use ComposeView in XML layouts or AndroidView in Compose to bridge the gap.
  3. Refactor MVP to MVVM: Jetpack Compose works well with the MVVM (Model-View-ViewModel) pattern. Consider refactoring your Presenters into ViewModels.
  4. Replace Fragments with Composable Functions: Fragments can be replaced with composable functions, simplifying navigation and UI management.
  5. Add Error Handling and Loading States: Ensure your app handles errors gracefully and displays loading states during data fetching.
  6. Use Kotlin Flow: Replace LiveData with Kotlin Flow for a more modern and reactive approach.

3. Set Up Jetpack Compose

Before starting the migration, ensure your project is set up for Jetpack Compose:

  1. Update Gradle Dependencies:
    Add the necessary Compose dependencies to your build.gradle file:

    android {
        ...
        buildFeatures {
            compose true
        }
        composeOptions {
            kotlinCompilerExtensionVersion '1.5.3'
        }
    }
    
    dependencies {
        implementation 'androidx.activity:activity-compose:1.8.0'
        implementation 'androidx.compose.ui:ui:1.5.4'
        implementation 'androidx.compose.material:material:1.5.4'
        implementation 'androidx.compose.ui:ui-tooling-preview:1.5.4'
        implementation 'androidx.lifecycle:lifecycle-viewmodel-compose:2.6.2'
        implementation 'androidx.navigation:navigation-compose:2.7.4' // For navigation
        implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.6.2' // For Flow
        implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-core:1.7.3' // For Flow
    }
  2. Enable Compose in Your Project:
    Ensure your project is using the correct Kotlin and Android Gradle plugin versions.

4. Original MVP Implementation

a. News List Fragment and Presenter

The NewsListFragment displays a list of news items. The NewsListPresenter fetches the data and updates the view.

NewsListFragment.kt

class NewsListFragment : Fragment(), NewsListView {

    private lateinit var presenter: NewsListPresenter
    private lateinit var adapter: NewsListAdapter

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? {
        val view = inflater.inflate(R.layout.fragment_news_list, container, false)
        val recyclerView = view.findViewById<RecyclerView>(R.id.recyclerView)
        adapter = NewsListAdapter { newsItem -> presenter.onNewsItemClicked(newsItem) }
        recyclerView.adapter = adapter
        recyclerView.layoutManager = LinearLayoutManager(context)
        presenter = NewsListPresenter(this)
        presenter.loadNews()
        return view
    }

    override fun showNews(news: List<NewsItem>) {
        adapter.submitList(news)
    }

    override fun showLoading() {
        // Show loading indicator
    }

    override fun showError(error: String) {
        // Show error message
    }

    override fun openNewsDetail(newsItem: NewsItem) {
        // NewsDetailActivity is assumed to host the detail screen in the original app
        val intent = Intent(requireContext(), NewsDetailActivity::class.java).apply {
            putExtra("newsId", newsItem.id)
        }
        startActivity(intent)
    }
}

NewsListPresenter.kt

class NewsListPresenter(private val view: NewsListView) {

    fun loadNews() {
        view.showLoading()
        // Simulate fetching news from a data source (e.g., API or local database)
        try {
            val newsList = listOf(
                NewsItem(id = 1, title = "News 1", summary = "Summary 1"),
                NewsItem(id = 2, title = "News 2", summary = "Summary 2")
            )
            view.showNews(newsList)
        } catch (e: Exception) {
            view.showError(e.message ?: "An error occurred")
        }
    }

    fun onNewsItemClicked(newsItem: NewsItem) {
        // The presenter has no Android Context, so navigation is delegated to the view
        view.openNewsDetail(newsItem)
    }
}

NewsListView.kt

interface NewsListView {
    fun showNews(news: List<NewsItem>)
    fun showLoading()
    fun showError(error: String)
    fun openNewsDetail(newsItem: NewsItem)
}

b. News Detail Fragment and Presenter

The NewsDetailFragment displays the details of a selected news item. The NewsDetailPresenter fetches the details and updates the view.

NewsDetailFragment.kt

class NewsDetailFragment : Fragment(), NewsDetailView {

    private lateinit var presenter: NewsDetailPresenter

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? {
        val view = inflater.inflate(R.layout.fragment_news_detail, container, false)
        presenter = NewsDetailPresenter(this)
        val newsId = arguments?.getInt("newsId") ?: 0
        presenter.loadNewsDetail(newsId)
        return view
    }

    override fun showNewsDetail(newsItem: NewsItem) {
        view?.findViewById<TextView>(R.id.title)?.text = newsItem.title
        view?.findViewById<TextView>(R.id.summary)?.text = newsItem.summary
    }

    override fun showLoading() {
        // Show loading indicator
    }

    override fun showError(error: String) {
        // Show error message
    }
}

NewsDetailPresenter.kt

class NewsDetailPresenter(private val view: NewsDetailView) {

    fun loadNewsDetail(newsId: Int) {
        view.showLoading()
        // Simulate fetching news detail from a data source (e.g., API or local database)
        try {
            val newsItem = NewsItem(id = newsId, title = "News $newsId", summary = "Summary $newsId")
            view.showNewsDetail(newsItem)
        } catch (e: Exception) {
            view.showError(e.message ?: "An error occurred")
        }
    }
}

NewsDetailView.kt

interface NewsDetailView {
    fun showNewsDetail(newsItem: NewsItem)
    fun showLoading()
    fun showError(error: String)
}

5. Migrate to Jetpack Compose

a. Migrate the News List Fragment

Replace the NewsListFragment with a composable function. The NewsListPresenter will be refactored into a NewsListViewModel.

NewsListScreen.kt

@Composable
fun NewsListScreen(viewModel: NewsListViewModel, onItemClick: (NewsItem) -> Unit) {
    val newsState by viewModel.newsState.collectAsState()

    when (newsState) {
        is NewsState.Loading -> {
            // Show loading indicator
            CircularProgressIndicator()
        }
        is NewsState.Success -> {
            val news = (newsState as NewsState.Success).news
            LazyColumn {
                items(news) { newsItem ->
                    NewsListItem(newsItem = newsItem, onClick = { onItemClick(newsItem) })
                }
            }
        }
        is NewsState.Error -> {
            // Show error message
            val error = (newsState as NewsState.Error).error
            Text(text = error, color = Color.Red)
        }
    }
}

@Composable
fun NewsListItem(newsItem: NewsItem, onClick: () -> Unit) {
    Card(
        modifier = Modifier
            .fillMaxWidth()
            .padding(8.dp)
            .clickable { onClick() }
    ) {
        Column(modifier = Modifier.padding(16.dp)) {
            Text(text = newsItem.title, style = MaterialTheme.typography.h6)
            Text(text = newsItem.summary, style = MaterialTheme.typography.body1)
        }
    }
}

NewsListViewModel.kt

class NewsListViewModel : ViewModel() {

    private val _newsState = MutableStateFlow<NewsState>(NewsState.Loading)
    val newsState: StateFlow<NewsState> get() = _newsState

    init {
        loadNews()
    }

    private fun loadNews() {
        viewModelScope.launch {
            _newsState.value = NewsState.Loading
            try {
                // Simulate fetching news from a data source (e.g., API or local database)
                val newsList = listOf(
                    NewsItem(id = 1, title = "News 1", summary = "Summary 1"),
                    NewsItem(id = 2, title = "News 2", summary = "Summary 2")
                )
                _newsState.value = NewsState.Success(newsList)
            } catch (e: Exception) {
                _newsState.value = NewsState.Error(e.message ?: "An error occurred")
            }
        }
    }
}

sealed class NewsState {
    object Loading : NewsState()
    data class Success(val news: List<NewsItem>) : NewsState()
    data class Error(val error: String) : NewsState()
}

b. Migrate the News Detail Fragment

Replace the NewsDetailFragment with a composable function. The NewsDetailPresenter will be refactored into a NewsDetailViewModel.

NewsDetailScreen.kt

@Composable
fun NewsDetailScreen(viewModel: NewsDetailViewModel) {
    val newsState by viewModel.newsState.collectAsState()

    when (newsState) {
        is NewsDetailState.Loading -> {
            // Show loading indicator
            CircularProgressIndicator()
        }
        is NewsDetailState.Success -> {
            val newsItem = (newsState as NewsDetailState.Success).news
            Column(modifier = Modifier.padding(16.dp)) {
                Text(text = newsItem.title, style = MaterialTheme.typography.h4)
                Text(text = newsItem.summary, style = MaterialTheme.typography.body1)
            }
        }
        is NewsDetailState.Error -> {
            // Show error message
            val error = (newsState as NewsDetailState.Error).error
            Text(text = error, color = Color.Red)
        }
    }
}

NewsDetailViewModel.kt

class NewsDetailViewModel : ViewModel() {

    private val _newsState = MutableStateFlow<NewsDetailState>(NewsDetailState.Loading)
    val newsState: StateFlow<NewsDetailState> get() = _newsState

    fun loadNewsDetail(newsId: Int) {
        viewModelScope.launch {
            _newsState.value = NewsDetailState.Loading
            try {
                // Simulate fetching news detail from a data source (e.g., API or local database)
                val newsItem = NewsItem(id = newsId, title = "News $newsId", summary = "Summary $newsId")
                _newsState.value = NewsDetailState.Success(newsItem)
            } catch (e: Exception) {
                _newsState.value = NewsDetailState.Error(e.message ?: "An error occurred")
            }
        }
    }
}

// A distinct state class avoids clashing with the list screen's NewsState
sealed class NewsDetailState {
    object Loading : NewsDetailState()
    data class Success(val news: NewsItem) : NewsDetailState()
    data class Error(val error: String) : NewsDetailState()
}

6. Set Up Navigation

Replace Fragment-based navigation with Compose navigation:

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            NewsApp()
        }
    }
}

@Composable
fun NewsApp() {
    val navController = rememberNavController()
    NavHost(navController = navController, startDestination = "newsList") {
        composable("newsList") {
            val viewModel: NewsListViewModel = viewModel()
            NewsListScreen(viewModel = viewModel) { newsItem ->
                navController.navigate("newsDetail/${newsItem.id}")
            }
        }
        composable("newsDetail/{newsId}") { backStackEntry ->
            val viewModel: NewsDetailViewModel = viewModel()
            val newsId = backStackEntry.arguments?.getString("newsId")?.toIntOrNull() ?: 0
            viewModel.loadNewsDetail(newsId)
            NewsDetailScreen(viewModel = viewModel)
        }
    }
}

7. Test and Iterate

After migrating the screens, thoroughly test the app to ensure it behaves as expected. Use Compose’s preview functionality to visualize your UI:

@Preview(showBackground = true)
@Composable
fun PreviewNewsListScreen() {
    NewsListScreen(viewModel = NewsListViewModel(), onItemClick = {})
}

@Preview(showBackground = true)
@Composable
fun PreviewNewsDetailScreen() {
    NewsDetailScreen(viewModel = NewsDetailViewModel())
}

8. Gradually Migrate the Entire App

Once you’re comfortable with the migration process, continue migrating the rest of your app incrementally. Use ComposeView and AndroidView to integrate Compose with your existing XML layouts while the migration is in progress.

]]>
https://blogs.perficient.com/2025/02/03/migrating-from-mvp-to-jetpack-compose-a-step-by-step-guide-for-android-developers/feed/ 0 376701
Sales Cloud to Data Cloud with No Code! https://blogs.perficient.com/2025/01/31/sales-cloud-to-data-cloud-with-no-code/ https://blogs.perficient.com/2025/01/31/sales-cloud-to-data-cloud-with-no-code/#respond Fri, 31 Jan 2025 18:15:25 +0000 https://blogs.perficient.com/?p=376326

Salesforce has been giving us a ‘No Code’ way to have Data Cloud notify Sales Cloud of changes through Data Actions and Flows.   But did you know you can go the other direction too?

The Data Cloud Ingestion API allows us to setup a ‘No Code’ way of sending changes in Sales Cloud to Data Cloud.

Why would you want to do this with the Ingestion API?

  1. You are right that we could simply set up a ‘normal’ Salesforce CRM Data Stream to pull data from Sales Cloud into Data Cloud.  This is also a ‘No Code’ way to integrate the two.  But maybe you want to do some complex filtering or logic before sending the data on to Data Cloud, which is where a Flow could really help.
  2. CRM Data Streams only run on a schedule, at most every 10 minutes.  With the Ingestion API we can send data to Data Cloud immediately; we just need to wait for the Ingestion API job to run for that specific request.  The current wait time is about 3 minutes, though I have seen it run faster at times.  It is not ‘real-time’, so do not use this for ‘real-time’ use cases, but it is faster than CRM Data Streams for incremental and smaller syncs that need better control.
  3. You could also ingest data into Data Cloud easily through an Amazon S3 bucket.  But again, here we have data in Sales Cloud that we want to get to Data Cloud with no code.
  4. We can do very cool integrations by leveraging the Ingestion API outside of Salesforce like in this video, but we want a way to use Flows (No Code!) to send data to Data Cloud.

Use Case:

You have Sales Cloud, Data Cloud and Marketing Cloud Engagement.  As a Marketing Campaign Manager you want to send an email through Marketing Cloud Engagement when a Lead fills out a certain form.

You only want to send the email if the Lead is from a certain state like ‘Minnesota’ and that Email address has ordered a certain product in the past.  The historical product data lives in Data Cloud only.  This email could come out a few minutes later and does not need to be real-time.

Solution A:

If you need to do this in near real-time, I would suggest to not use the Ingestion API.  We can query the Data Cloud product data in a Flow and then update your Lead or other record in a way that triggers a ‘Journey Builder Salesforce Data Event‘ in Marketing Cloud Engagement.

Solution B:

But our above requirements do not require real-time so let’s solve this with the Ingestion API.  Since we are sending data to Data Cloud we will have some more power with the Salesforce Data Action to reference more Data Cloud data and not use the Flow ‘Get Records’ for all data needs.

We can build an Ingestion API Data Stream that we can use in a Salesforce Flow.  The flow can check to make sure that the Lead is from a certain state like ‘Minnesota’.  The Ingestion API can be triggered from within the flow.  Once the data lands in the DMO object in Data Cloud we can then use a ‘Data Action’ to listen for that data change, check if that Lead has purchased a certain product before and then use a ‘Data Action Target’ to push to a Journey in Marketing Cloud Engagement.  All that should occur within a couple of minutes.

Sales Cloud to Data Cloud with No Code!  Let’s do this!

Here is the base Salesforce post sharing that this is possible through Flows, but let’s go deeper for you!

The following are those deeper steps of getting the data to Data Cloud from Sales Cloud.  In my screen shots you will see data moving between a VIN (Vehicle Identification Number) custom object to a VIN DLO/DMO in Data Cloud, but the same process could be used for our ‘Lead’ Use Case above.

  1. Create a YAML file that we will use to define the fields in the Data Lake Object (DLO).  I put an example YAML structure at the bottom of this post.
  2. Go to Setup, Data Cloud, External Integrations, Ingestion API.   Click on ‘New’
    Newingestionapi

    1. Give your new Ingestion API Source a Name.  Click on Save.
      Newingestionapiname
    2. In the Schema section click on the ‘Upload Files’ link to upload your YAML file.
      Newingestionapischema
    3. You will see a screen to preview your Schema.  Click on Save.
      Newingestionapischemapreview
    4. After that is complete you will see your new Schema Object
      Newingestionapischemadone
    5. Note that at this point there is no Data Lake Object created yet.
  3. Create a new ‘Ingestion API’ Data Stream.  Go to the ‘Data Streams’ tab and click on ‘New’.   Click on the ‘Ingestion API’ box and click on ‘Next’.
    Ingestionapipic

    1. Select the Ingestion API that was created in Step 2 above.  Select the Schema object that is associated to it.  Click Next.
      Newingestionapidsnew
    2. Configure your new Data Lake Object by setting the Category, Primary Key and Record Modified Fields
      Newingestionapidsnewdlo
    3. Set any Filters you want with the ‘Set Filters’ link and click on ‘Deploy’ to create your new Data Stream and the associated Data Lake Object.
      Newingestionapidsnewdeploy
    4. If you want to also create a Data Model Object (DMO) you can do that and then use the ‘Review’ button in the ‘Data Mapping’ section on the Data Stream detail page to do that mapping.  You do need a DMO to use the ‘Data Action’ feature in Data Cloud.
  4. Now we are ready to use this new Ingestion API Source in our Flow!  Yeah!
  5. Create a new ‘Start from Scratch’, ‘Record-Triggered Flow’ on the Standard or Custom object you want to use to send data to Data Cloud.
  6. Configure an Asynchronous path.  We cannot connect to this ‘Ingestion API’ from the ‘Run Immediately’ part of the Flow because this Action makes an API call to Data Cloud.  This is similar to how we have to use a ‘Future’ call with an Apex Trigger.
    Newingestionapiflowasync
  7. Once you have configured your base Flow, add the ‘Action’ to the ‘Run Asynchronously’ part of the Flow.    Select the ‘Send to Data Cloud’ Action and then map your fields to the Ingestion API inputs that are available for that ‘Ingestion API’ Data Stream you created.
    Newingestionapiflowasync2
  8. Save and Activate your Flow.
  9. To test, update your record in a way that will trigger your Flow to run.
  10. Go into Data Cloud and see your data has made it there by using the ‘Data Explorer’ tab.
  11. The standard Salesforce Debug Logs will show the details of your Flow steps if you need to troubleshoot something.

Congrats!

You have sent data from Sales Cloud to Data Cloud with ‘No Code’ using the Ingestion API!

Setting up the Data Action and connecting to Marketing Cloud Journey Builder is documented here to round out the use case.

Here is the base Ingestion API Documentation.

At Perficient we have experts in Sales Cloud, Data Cloud and Marketing Cloud Engagement.  Please reach out and let’s work together to reach your business goals on these platforms and others.

Example YAML Structure:

Yaml Pic

openapi: 3.0.3
components:
  schemas:
    VIN_DC:
      type: object
      properties:
        VIN_Number:
          type: string
        Description:
          type: string
        Make:
          type: string
        Model:
          type: string
        Year:
          type: number
        created:
          type: string
          format: date-time
]]>
https://blogs.perficient.com/2025/01/31/sales-cloud-to-data-cloud-with-no-code/feed/ 0 376326