Platforms and Technology Articles / Blogs / Perficient

Informatica Intelligent Cloud Services (IICS) Cloud Data Integration (CDI) for PowerCenter Experts

Informatica Power Center professionals transitioning to Informatica Intelligent Cloud Services (IICS) Cloud Data Integration (CDI) will find both exciting opportunities and new challenges. While core data integration principles remain, IICS’s cloud-native architecture requires a shift in mindset. This article outlines key differences, migration strategies, and best practices for a smooth transition.

Core Differences Between Power Center and IICS CDI:

  • Architecture: Power Center is on-premise, while IICS CDI is a cloud-based iPaaS. Key architectural distinctions include:
    • Agent-Based Processing: IICS uses Secure Agents as a bridge between on-premise and cloud sources.
    • Cloud-Native Infrastructure: IICS leverages cloud elasticity for scalability, unlike Power Center’s server-based approach.
    • Microservices: IICS offers modular, independently scalable services.
  • Development and UI: IICS uses a web-based UI, replacing Power Center’s thick client (Repository Manager, Designer, Workflow Manager, Monitor). IICS organizes objects into projects and folders (not repositories) and uses tasks, taskflows, and mappings (not workflows) for process execution.
  • Connectivity and Deployment: IICS offers native cloud connectivity to services like AWS, Azure, and Google Cloud. It supports hybrid deployments and enhanced parameterization.

Migration Strategies:

  1. Assessment: Thoroughly review existing Power Center workflows, mappings, and transformations to understand dependencies and complexity.
  2. Automated Tools: Leverage Informatica’s migration tools, such as the Power Center to IICS Migration Utility, to convert mappings.
  3. Optimization: Rebuild or optimize mappings as needed, taking advantage of IICS capabilities.

Best Practices for IICS CDI:

  1. Secure Agent Efficiency: Deploy Secure Agents near data sources for optimal performance and reduced latency.
  2. Reusable Components: Utilize reusable mappings and templates for standardization.
  3. Performance Monitoring: Use Operational Insights to track execution, identify bottlenecks, and optimize pipelines.
  4. Security: Implement robust security measures, including role-based access, encryption, and data masking.

Conclusion:

IICS CDI offers Power Center users a modern, scalable, and efficient cloud-based data integration platform. While adapting to the new UI and development paradigm requires learning, the fundamental data integration principles remain. By understanding the architectural differences, using migration tools, and following best practices, Power Center professionals can successfully transition to IICS CDI and harness the power of cloud-based data integration.

The Colorful World of Azure DevOps Boards

Out of the box, Azure DevOps provides black-and-white capabilities in terms of how it can be utilized to support a project and its code repository. Over time, teams establish and settle into work processes, often continuing to use those basic settings, which can lead to a mundane operation and, perhaps, risk losing sight of the end goal.

Even if a project is customized in terms of workflow, custom state options, or custom fields, sometimes it is still difficult to know where things stand and what is important to focus on.

There are a few ways in which Azure DevOps can aid in making those items visible and obvious, to better help guide a team.

Leverage color to draw attention

When viewing a Board in Azure DevOps, it can often be overwhelming to look at or find specific work items. Consider what is most important for the team to complete or prioritize, and what could serve as a unique identifier to locate those items. These are the items we want the team to notice and work on first.

There are a couple of ways in which Azure DevOps allows us to style work items on a board:

  • Card Styles
  • Tag Colors

Let’s take an example of Card Styles: We want the client to quickly and easily see if items on the Board are blocked. In our board settings, we can use the Board Settings > Cards > Styles options to apply some rules to make any work items which contain the tag ‘Blocked’ to appear Red in color.

Example Settings:

Blocked Card Style Settings 

Example Card Preview:

Blocked Card

Another use case for applying Card Styles could be that we want our team members to prioritize and focus on any Bug work items which have a Priority of 1. In the same settings dialog, we can add another styling rule so that any Bug work item which has a Priority of ‘1’ should appear Yellow in color. This will make it extremely easy to find those Priority 1 Bugs when viewing the board, so that it is obvious to any team member who is assigned to one.

Example Card Preview:

Priority 1 Card

Let’s look at one more use case – we want our team to easily recognize work items containing the tag ‘content.’ In this example, this tag means that the work item will require manual content steps, along with the code changes. In the Board Settings > Cards > Tag Colors options, we can configure a rule so that this specific tag will appear in Pink while viewing the board.

Example Card Preview:

Content Tags

TIP: While it is great to provide color styling rules to work items, it is best to reserve those rules for items needing specific, frequent attention. Consider this before applying any styling setting on a project’s Board.

Find key details in a dash on your Dashboard

Lastly, Dashboards are a fantastic way to provide fast, summary information regarding the progress of a team or project. Consider creating dashboards to display results of queries that you often find yourself referencing for reporting or oversight. Like the Backlog and Boards views, keep the dashboards focused on the most valuable information. Make it easily visible by organizing the most important widgets to the top of the dashboard.

In the example below, the team wanted an automated way of finding work items that were misplaced in the backlog or were missing tags. A series of queries were created to surface matching results. In the first screenshot, there are no results and all the tiles are equal to 0 – this is the ideal state. In the second screenshot, there are results in one of the tables and three of the tiles have a matching count of 1, in which case the tile is configured to turn Red in color. This makes it very easy for a team member to notice and take action so that specific work items are addressed quickly.

Screenshot 1:

Dashboard Ex 1

Screenshot 2:

Dashboard Ex 2
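
For teams that want to go a step further, the same kind of query can also be run programmatically against the Azure DevOps REST API and fed into custom reports. The sketch below is a hedged Node.js (18+) example; the organization, project, and token values are placeholders, and it looks for the Priority 1 Bugs highlighted earlier on the board. The misplaced-item and missing-tag queries described above would follow the same pattern with a different WIQL string.

// Query Azure DevOps for Priority 1 Bugs via the Work Item Query Language (WIQL) REST endpoint.
// ORG, PROJECT, and the personal access token below are placeholders for your own values.
const ORG = "your-organization";
const PROJECT = "your-project";
const PAT = process.env.AZDO_PAT; // personal access token with work item read scope

const wiql = {
  query:
    "SELECT [System.Id], [System.Title] FROM WorkItems " +
    "WHERE [System.WorkItemType] = 'Bug' AND [Microsoft.VSTS.Common.Priority] = 1 " +
    "ORDER BY [System.ChangedDate] DESC",
};

async function findPriorityOneBugs() {
  const response = await fetch(
    `https://dev.azure.com/${ORG}/${PROJECT}/_apis/wit/wiql?api-version=7.0`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Azure DevOps personal access tokens use basic auth with an empty user name.
        Authorization: "Basic " + Buffer.from(":" + PAT).toString("base64"),
      },
      body: JSON.stringify(wiql),
    }
  );
  const result = await response.json();
  // result.workItems is a list of { id, url } references matching the query.
  console.log(`Found ${result.workItems.length} Priority 1 Bugs`);
}

findPriorityOneBugs().catch(console.error);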

 

TIPS:

    • Create multiple dashboards, each with its own purpose, to prevent one or two dashboards from being overwhelmed by too much information.
    • The ‘Chart for Work Items’ widget on the dashboards also allows for the color options to be customized. Consider this in cases where you want to draw attention to a specific attribute, such as work item State.

Paint the picture for your team

To help keep the team focused and prevent them from settling into a mundane work pattern, keep the most important data in Azure DevOps accessible and visible on project Boards, Backlogs, and Dashboards. Use visual indicators like color to help the team quickly find what is most important and use their time most efficiently toward the project’s goal.

By using these simple tips and tricks, you can paint a masterpiece of a project that keeps both the team and the client engaged.

Navigating the Landscape of Development Frameworks: A Guide for Aspiring Developers

Nine years ago, I was eager to become a developer but had not found a convincing platform. Luckily, the smartphone world was booming, and its extraordinary growth immediately caught my eye. This led to my career as an Android developer, where I had the opportunity to learn the nuances of building mobile applications. As time went on, I expanded my reach into hybrid mobile app development, which allowed me to adapt smoothly to various platforms.

I also know the struggle of countless aspiring developers: the dilemma and uncertainty about which direction to head and which technology to pursue. The idea for this blog stemmed from my experiences and insights while making my own way through mobile app development. It is geared toward those just beginning to learn this subject as well as those adding to their current knowledge.

Web Development

  • Web Development: Focuses on building the user interface (UI) and user experience (UX) of applications.
    • Technologies:
      • HTML (HyperText Markup Language): The backbone of web pages, used to structure content with elements like headings, paragraphs, images, and links.
      • CSS (Cascading Style Sheets): Styles web pages by controlling layout, colors, fonts, and animations, making websites visually appealing and responsive.
      • JavaScript: A powerful programming language that adds interactivity to web pages, enabling dynamic content updates, event handling, and logic execution.
      • React: A JavaScript library developed by Facebook for building fast and scalable user interfaces using a component-based architecture.
      • Angular: A TypeScript-based front-end framework developed by Google that provides a complete solution for building complex, dynamic web applications.
      • Vue.js: A progressive JavaScript framework known for its simplicity and flexibility, allowing developers to build user interfaces and single-page applications efficiently.
    • Upskilling:
      • Learn the basics of HTML, CSS, and JavaScript (essential for any front-end developer).
      • Explore modern frameworks like React or Vue.js for building interactive UIs.
      • Practice building small projects like a portfolio website or a simple task manager.
      • Recommended Resources:

Backend Development

  • Backend Development: Focuses on server-side logic, APIs, and database management.
    • Technologies:
      • Node.js: A JavaScript runtime that allows developers to build fast, scalable server-side applications using a non-blocking, event-driven architecture.
      • Python (Django, Flask): Python is a versatile programming language; Django is a high-level framework for rapid web development, while Flask is a lightweight framework offering flexibility and simplicity.
      • Java (Spring Boot): A Java-based framework that simplifies the development of enterprise-level applications with built-in tools for microservices, security, and database integration.
      • Ruby on Rails: A full-stack web application framework built with Ruby, known for its convention-over-configuration approach and rapid development capabilities.
    • Upskilling:
      • Learn the basics of backend languages like JavaScript (Node.js) or Python.
      • Understand APIs (REST and GraphQL).
      • Practice building CRUD applications and connecting them to databases like MySQL or MongoDB.
      • Recommended Resources:

Mobile App Development

  • Native Development:
    • Android Development
      • Java: A widely used, object-oriented programming language known for its platform independence (Write Once, Run Anywhere) and strong ecosystem, making it popular for enterprise applications and Android development.
      • Kotlin: A modern, concise, and expressive programming language that runs on the JVM, is fully interoperable with Java, and is officially recommended by Google for Android app development due to its safety and productivity features.
    • iOS Development:
      • Swift: A modern, fast, and safe programming language developed by Apple for iOS, macOS, watchOS, and tvOS development. It offers clean syntax, performance optimizations, and strong safety features.
      • Objective-C: An older, dynamic programming language used for Apple app development before Swift. It is based on C with added object-oriented features but is now largely replaced by Swift for new projects.
    • Upskilling:
      • Learn Kotlin or Swift (modern, preferred languages for Android and iOS).
      • Use platform-specific tools: Android Studio (Android) or Xcode (iOS).
      • Start small, like creating a to-do list app or weather app.
      • Recommended Resources:
  • Cross-Platform Development:
    • Technologies:
      • React Native: A JavaScript framework developed by Meta for building cross-platform mobile applications using a single codebase. It leverages React and native components to provide a near-native experience.
      • Flutter: A UI toolkit by Google that uses the Dart language to build natively compiled applications for mobile, web, and desktop from a single codebase, offering high performance and a rich set of pre-designed widgets.
    • Upskilling:

Game Development

  • Technologies:
    • Unity (C#): A popular game engine known for its versatility and ease of use, supporting 2D and 3D game development across multiple platforms. It uses C# for scripting and is widely used for indie and AAA games.
    • Unreal Engine (C++): A high-performance game engine developed by Epic Games, known for its stunning graphics and powerful features. It primarily uses C++ and Blueprints for scripting, making it ideal for AAA game development.
    • Godot: An open-source game engine with a lightweight footprint and built-in scripting language (GDScript), along with support for C# and C++. It is beginner-friendly and widely used for 2D and 3D game development.
  • Upskilling:
    • Learn a game engine (Unity is beginner-friendly and widely used).
    • Explore C# (for Unity) or C++ (for Unreal Engine).
    • Practice by creating simple 2D games, then progress to 3D.
    • Recommended Resources:

Data Science and Machine Learning

  • Technologies:
    • Python (NumPy, Pandas, Scikit-learn): Python is widely used in data science and machine learning, with NumPy for numerical computing, Pandas for data manipulation, and Scikit-learn for machine learning algorithms.
    • R: A statistical programming language designed for data analysis, visualization, and machine learning. It is heavily used in academic and research fields.
    • TensorFlow: An open-source machine learning framework developed by Google, known for its scalability and deep learning capabilities, supporting both CPUs and GPUs.
    • PyTorch: A deep learning framework developed by Facebook, favored for its dynamic computation graph, ease of debugging, and strong research community support.
  • Upskilling:
    • Learn Python and libraries like NumPy, Pandas, and Matplotlib.
    • Explore machine learning concepts and algorithms using Scikit-learn or TensorFlow.
    • Start with data analysis projects or simple ML models.
    • Recommended Resources:

DevOps and Cloud Development

  • Technologies:
    • Docker: A containerization platform that allows developers to package applications with dependencies, ensuring consistency across different environments.
    • Kubernetes: An open-source container orchestration system that automates the deployment, scaling, and management of containerized applications.
    • AWS, Azure, Google Cloud: Leading cloud platforms offering computing, storage, databases, and AI/ML services, enabling scalable and reliable application hosting.
    • CI/CD tools: Continuous Integration and Continuous Deployment tools (like Jenkins, GitHub Actions, and GitLab CI) automate testing, building, and deployment processes for faster and more reliable software releases.
  • Upskilling:
    • Learn about containerization (Docker) and orchestration (Kubernetes).
    • Understand cloud platforms like AWS and their core services (EC2, S3, Lambda).
    • Practice setting up CI/CD pipelines with tools like Jenkins or GitHub Actions.
    • Recommended Resources:

Embedded Systems and IoT Development

  • Technologies:
    • C, C++: Low-level programming languages known for their efficiency and performance, widely used in system programming, game development, and embedded systems.
    • Python: A versatile, high-level programming language known for its simplicity and readability, used in web development, automation, AI, and scientific computing.
    • Arduino: An open-source electronics platform with easy-to-use hardware and software, commonly used for building IoT and embedded systems projects.
    • Raspberry Pi: A small, affordable computer that runs Linux and supports various programming languages, often used for DIY projects, robotics, and education.
  • Upskilling:
    • Learn C/C++ for low-level programming.
    • Experiment with hardware like Arduino or Raspberry Pi.
    • Build projects like smart home systems or sensors.
    • Recommended Resources:

How to Get Started and Transition Smoothly

  1. Assess Your Interests:
    • Do you prefer visual work (Frontend, Mobile), problem-solving (Backend, Data Science), or system-level programming (IoT, Embedded Systems)?
  2. Leverage Your QA Experience:
    • Highlight skills like testing, debugging, and attention to detail when transitioning to development roles.
    • Learn Test-Driven Development (TDD) and how to write unit and integration tests.
  3. Build Projects:
    • Start with small, practical projects and showcase them on GitHub.
    • Examples: A weather app, an e-commerce backend, or a simple game.
  4. Online Platforms for Learning:
    • FreeCodeCamp: For web development.
    • Udemy and Coursera: Wide range of development courses.
    • HackerRank or LeetCode: For coding practice.
  5. Network and Apply:
    • Contribute to open-source projects.
    • Build connections in developer communities like GitHub, Reddit, or LinkedIn.

Choosing the right development framework depends on your interests, career goals, and project requirements. If you enjoy building interactive user experiences, Web Development with React, Angular, or Vue.js could be your path. If you prefer handling server-side logic, Backend Development with Node.js, Python, or Java might be ideal. Those fascinated by mobile applications can explore Native (Kotlin, Swift) or Cross-Platform (React Native, Flutter) Development.

For those drawn to game development, Unity and Unreal Engine provide powerful tools, while Data Science & Machine Learning enthusiasts can leverage Python and frameworks like TensorFlow and PyTorch. If you’re passionate about infrastructure and automation, DevOps & Cloud Development with Docker, Kubernetes, and AWS is a strong choice. Meanwhile, Embedded Systems & IoT Development appeals to those interested in hardware-software integration using Arduino, Raspberry Pi, and C/C++.

Pros and Cons of Different Development Paths

Path | Pros | Cons
Web Development | High-demand, fast-paced, large community | Frequent technology changes
Backend Development | Scalable applications, strong job market | Can be complex, requires database expertise
Mobile Development | Booming industry, native vs. cross-platform options | Requires platform-specific knowledge
Game Development | Creative field, engaging projects | Competitive market, longer development cycles
Data Science & ML | High-paying field, innovative applications | Requires strong math and programming skills
DevOps & Cloud | Essential for modern development, automation focus | Can be complex, requires networking knowledge
Embedded Systems & IoT | Hardware integration, real-world applications | Limited to specialized domains

Final Recommendations

  1. If you’re just starting, pick a general-purpose language like JavaScript or Python and build small projects.
  2. If you have a specific goal, choose a framework aligned with your interest (e.g., React for frontend, Node.js for backend, Flutter for cross-platform).
  3. For career growth, explore in-demand technologies like DevOps, AI/ML, or cloud platforms.
  4. Keep learning and practicing—build projects, contribute to open-source, and stay updated with industry trends.

No matter which path you choose, the key is continuous learning and hands-on experience. Stay curious, build projects, and embrace challenges on your journey to becoming a skilled developer. Check out Developer Roadmaps for further insights and guidance. 🚀 Happy coding!

Install Sitecore Hotfixes on Azure PaaS with Azure DevOps Pipeline

Why Automate Sitecore Hotfix Deployment to Azure PaaS?

Sitecore frequently releases hotfixes to address reported issues, including critical security vulnerabilities or urgent problems. Having a quick, automated process to apply these updates is crucial. By automating the deployment of Sitecore hotfixes with an Azure DevOps pipeline, you can ensure faster, more reliable updates while reducing human error and minimizing downtime. This approach allows you to apply hotfixes quickly and consistently to your Azure PaaS environment, ensuring your Sitecore instance remains secure and up to date without manual intervention. In this post, we’ll walk you through how to automate this process using Azure DevOps.

Prerequisites for Automating Sitecore Hotfix Deployment

Before diving into the pipeline setup, make sure you have the following prerequisites in place:

  1. Azure DevOps Account: Ensure you have access to Azure DevOps to create and manage pipelines.
  2. Azure Storage Account: You’ll need an Azure Storage Account to store your Sitecore WDP hotfix files.
  3. Azure Subscription: Your Azure PaaS environment should be up and running, with a subscription linked to Azure DevOps.
  4. Sitecore Hotfix WDP: Download the Cloud Cumulative package for your version and topology. Be sure to check the release notes for additional instructions.

Steps to Automate Sitecore Hotfix Deployment

  1. Upload Your Sitecore Hotfix to Azure Storage
    • Create a storage container in Azure to store your WDP files.
    • Upload the hotfix using Azure Portal, Storage Explorer, or CLI.
  2. Create a New Pipeline in Azure DevOps
    • Navigate to Pipelines and create a new pipeline.
    • Select the repository containing your Sitecore solution.
    • Configure the pipeline using YAML for flexibility and automation.
  3. Define the Pipeline to Automate Hotfix Deployment
    • Retrieve the Azure Storage connection string securely via Azure Key Vault.
    • Download the Sitecore hotfix from Azure Storage (a scripted sketch of this step follows the list below).
    • Deploy the hotfix package to the Azure Web App production slot.
  4. Set Up Pipeline Variables
    • Store critical values like storage connection strings and hotfix file names securely.
    • Ensure the web application name is correctly configured in the pipeline.
  5. Trigger and Verify the Deployment
    • Run the pipeline manually or set up an automatic trigger on commit.
    • Verify the applied hotfix by checking the Sitecore instance and confirming issue resolution.
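
As referenced in step 3 above, the download step can be scripted. The following is a hedged Node.js sketch using the @azure/storage-blob package; the container and file names are assumptions, and in a real pipeline this logic would typically run inside a script task, with the deployment to the Web App slot handled by a separate task.

// Hedged sketch of the "download the hotfix" action from step 3 as a Node.js script.
// The connection string comes from a secret variable or Azure Key Vault in the pipeline.
const { BlobServiceClient } = require("@azure/storage-blob");

async function downloadHotfix() {
  const connectionString = process.env.AZURE_STORAGE_CONNECTION_STRING;
  const blobServiceClient = BlobServiceClient.fromConnectionString(connectionString);

  const containerClient = blobServiceClient.getContainerClient("sitecore-hotfixes"); // assumed container name
  const blobClient = containerClient.getBlobClient("Sitecore-Hotfix.scwdp.zip");     // assumed WDP file name

  // Download the WDP next to the script so a later pipeline task can deploy it to the Web App slot.
  await blobClient.downloadToFile("./Sitecore-Hotfix.scwdp.zip");
  console.log("Hotfix package downloaded and ready for deployment.");
}

downloadHotfix().catch((err) => {
  console.error("Hotfix download failed:", err);
  process.exit(1);
});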

Enhancing Security in the Deployment Process

  • Use Azure Key Vault: Securely store sensitive credentials and access keys, preventing unauthorized access.
  • Restrict Access to Storage Accounts: Implement role-based access control (RBAC) to limit who can modify or retrieve the hotfix files.
  • Enable Logging and Monitoring: Utilize Azure Monitor and Application Insights to track deployment performance and detect potential failures.

Handling Rollbacks and Errors

  • Implement Deployment Slots: Test hotfix deployments in a staging slot before swapping them into production.
  • Set Up Automated Rollbacks: Configure rollback procedures to revert to a previous stable version if an issue is detected.
  • Enable Notifications: Use Azure DevOps notifications to alert teams about deployment success or failure.

Scaling the Approach for Large Deployments

  • Automate Across Multiple Environments: Extend the pipeline to deploy hotfixes across development, QA, and production environments.
  • Use Infrastructure as Code (IaC): Leverage tools like Terraform or ARM templates to ensure a consistent infrastructure setup.
  • Integrate Automated Testing: Implement testing frameworks such as Selenium or JMeter to verify hotfix functionality before deployment.

Why Streamline Sitecore Hotfix Deployments with Azure DevOps is Important

Automating the deployment of Sitecore hotfixes to Azure PaaS with an Azure DevOps pipeline saves time and ensures consistency and accuracy across environments. By storing the hotfix WDP in an Azure Storage Account, you create a centralized, secure location for all your hotfixes. The Azure DevOps pipeline then handles the rest—keeping your Sitecore environment up to date.

This process makes applying Sitecore hotfixes faster, more reliable, and less prone to error, which is exactly what you need in a production environment.

Ramp Up On React/React Native In Less Than a Month

I’ve had plenty of opportunities to guide developers new to the React and React Native frameworks. While everyone is different, I wanted to provide a structured guide to help bring a fresh developer into the React fold.

Prerequisites

This introduction to React is intended for a developer that at least has some experience with JavaScript, HTML and basic coding practices.

Ideally, this person has coded at least one project using JavaScript and HTML. This experience will aid in understanding the syntax of components, but any aspiring developer can learn from it as well.

 

Tiers

There are several tiers for beginner level programmers who would like to learn React and are looking for someone like you to help them get up to speed.

Beginner with little knowledge of JavaScript and/or HTML

For a developer like this, I would recommend building introductory JavaScript and HTML knowledge, perhaps through a simple programming exercise or online instruction, before introducing them to React. You can compare JavaScript to a language they are familiar with and cover core concepts. A basic online guide should be sufficient to get them up and running with HTML.

Junior/Intermediate with some knowledge of JavaScript and/or HTML

I would go over some basics of JavaScript and HTML to make sure they have enough to grasp the syntax and terminologies used in React. A supplementary course or online guide might be good for a refresher before introducing them to modern concepts.

Seasoned developer that hasn’t used React

Even if they haven’t used JavaScript or HTML much, they should be able to ramp up quickly. Reading through React documentation should be enough to jumpstart the learning process.

 

Tips and Guidelines

You can begin their React and React Native journey with the following guidelines:

React Documentation

The React developer documentation is a great place to start if the developer has absolutely no experience or is just starting out. It provides meaningful context in the differences between standard JavaScript and HTML and how React handles them. It also provides a valuable reference on available features and what you can do within the framework.

Pro tip: I recommend starting them right off with functional components. They are more widely used and often have better performance, especially with hooks. I personally find them easier to work with as well.

Class component:

class MyButton extends React.Component {
    render() {
        return (
            <button>I'm a button</button>
        );
    }
}

 

Functional component:

const MyButton = () => {
    return (
        <button>I'm a button</button>
    )
}

 

The difference with such a small example isn’t very obvious, but it becomes much more apparent once you introduce hooks. Hooks allow you to extract functionality into a reusable container, which lets you keep logic separate or import it into other components. There are also several built-in hooks that make life easier. Hooks always start with “use” (useState, useRef, etc.), and you can also create custom hooks for your own logic.
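
To make that concrete, here is a small custom hook sketch of my own; the hook name and behavior are illustrative rather than taken from any particular project.

// A small custom hook that tracks whether the browser is currently online.
// It bundles state and event-listener logic so any component can reuse it.
import { useState, useEffect } from 'react';

const useOnlineStatus = () => {
    const [isOnline, setIsOnline] = useState(navigator.onLine);

    useEffect(() => {
        const handleOnline = () => setIsOnline(true);
        const handleOffline = () => setIsOnline(false);

        window.addEventListener('online', handleOnline);
        window.addEventListener('offline', handleOffline);

        // Cleanup runs when the component using the hook unmounts.
        return () => {
            window.removeEventListener('online', handleOnline);
            window.removeEventListener('offline', handleOffline);
        };
    }, []);

    return isOnline;
};

// Usage inside a functional component:
const StatusBanner = () => {
    const isOnline = useOnlineStatus();
    return <p>{isOnline ? 'Connected' : 'Offline'}</p>;
};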

Concepts

Once they understand the basic concepts, it’s time to focus on advanced React concepts. State management is an important factor in React, covering both component-level and app-wide state. Learning widely used packages might come in handy. I recommend Redux Toolkit as it’s easy to learn, yet extremely extensible. It is great for both big and small projects and offers simple to complex state management features.
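
As a brief, hedged illustration of what Redux Toolkit state management looks like (the slice, action, and field names below are invented for this sketch):

// Minimal Redux Toolkit sketch: a "tasks" slice plus a store, showing how
// app-wide state is declared and updated.
import { configureStore, createSlice } from '@reduxjs/toolkit';

const tasksSlice = createSlice({
    name: 'tasks',
    initialState: { items: [] },
    reducers: {
        // Redux Toolkit uses Immer, so "mutating" syntax produces immutable updates.
        addTask: (state, action) => {
            state.items.push(action.payload);
        },
        removeTask: (state, action) => {
            state.items = state.items.filter((task) => task.id !== action.payload);
        },
    },
});

export const { addTask, removeTask } = tasksSlice.actions;

export const store = configureStore({
    reducer: { tasks: tasksSlice.reducer },
});

// Components dispatch actions and read state via react-redux hooks, for example:
// const dispatch = useDispatch();  dispatch(addTask({ id: 1, title: 'Learn hooks' }));
// const tasks = useSelector((state) => state.tasks.items);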

Now might be a great time to point out the key differences between React and React Native. They are very similar with a few minor adjustments:

Aspect | React | React Native
Layout | Uses HTML tags | Uses “core components” (View instead of div, for example)
Styling | CSS | Style objects
X/Y Coordinate Planes | Flex direction defaults to row | Flex direction defaults to column
Navigation | URLs | Routes (react-navigation)
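
To illustrate the comparison above, here is a minimal React Native sketch of a button-like component using core components and a style object; the component and style names are illustrative.

// The same kind of simple button expressed with React Native core components and StyleSheet.
import React from 'react';
import { View, Text, Pressable, StyleSheet } from 'react-native';

const MyButton = () => {
    return (
        <View style={styles.container}>
            <Pressable onPress={() => console.log('Pressed!')}>
                <Text style={styles.label}>I'm a button</Text>
            </Pressable>
        </View>
    );
};

const styles = StyleSheet.create({
    container: { flex: 1, justifyContent: 'center', alignItems: 'center' },
    label: { fontSize: 16, fontWeight: 'bold' },
});

export default MyButton;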

Tic-Tac-Toe

I would follow the React concepts with an example project. This allows the developer to see how a project is structured and how to code within the framework. Tic-Tac-Toe is a great example for a new React developer to give a try to see if they understand the basic concepts.

Debugging

Debugging in Chrome is extremely useful for things like console logs and other logging that helps when investigating defects. The Style Inspector is another essential tool for React that lets you see how styles are applied to different elements. For React Native, the documentation contains useful links to helpful tools.

Project Work

Assign the new React developer low-level bugs or feature enhancements to tackle. Closely monitoring their progress via pair programming has been extremely beneficial in my experience. It provides the opportunity to ask real-time questions, with the experienced developer offering guidance, and it is also a chance to correct any mistakes or bad practices before they become ingrained. Merge requests should be reviewed together before approval to ensure code quality.

In Closing

These tips and tools will give a new React or React Native developer the foundation they need to contribute to projects. Obviously, the transition to React Native will be a lot smoother for a developer familiar with React, but any developer who is comfortable with JavaScript and HTML should be able to pick up both quickly.

Thanks for your time and I wish you the best of luck with onboarding your new developer onto your project!

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Prospective Developments in API and APIGEE Management: A Look Ahead for the Next Five Years

Application programming interfaces, or APIs, are crucial to the ever-changing digital transformation landscape because they enable businesses to interact with their data and services promptly and effectively. Effective administration is therefore necessary to guarantee that these APIs operate as intended, remain secure, and offer the intended advantages. This is where Apigee, Google Cloud’s premier API management solution, is helpful.

What is Apigee?

Apigee is an excellent tool for businesses wanting to manage their APIs smoothly. It simplifies the process of creating, scaling, securing, and deploying APIs, making developers’ work easier. One of Apigee’s best features is its flexibility—it can manage both external APIs for third-party access and internal APIs for company use, making it suitable for companies of all sizes. Apigee also works well with security layers like Nginx, which adds a layer of authentication between Apigee and backend systems. This flexibility and security make Apigee a reliable and easy-to-use platform for managing APIs.

What is Gemini AI?

Gemini AI is an advanced artificial intelligence tool that enhances the management and functionality of APIs. Think of it as a smart assistant that helps automate tasks, answer questions, and improve security for API systems like Apigee. For example, if a developer needs help setting up an API, Gemini AI can guide them with instructions, formats, and even create new APIs based on simple language input. It can also answer common user questions or handle customer inquiries automatically, making the whole process faster and more efficient. Essentially, Gemini AI brings intelligence and automation to API management, helping businesses run their systems smoothly and securely.

Why Should Consumers Opt for Gemini AI with Apigee?

Consumers should choose Gemini AI with Apigee because it offers more innovative, faster, and more secure API management. It also brings security, efficiency, and ease of use to API management, making it a valuable choice for businesses that want to streamline their operations and ensure their APIs are fast, reliable, and secure. Here are some key benefits: Enhanced Security, Faster Development, and Time-Saving Automation.

Below is the flow diagram for Prospective Developments in APIGEE.



Greater Emphasis on API Security

  • Zero Trust Security:  The Zero Trust security approach is founded on “never trust, always verify,” which states that no device or user should ever be presumed trustworthy, whether connected to the network or not. Each request for resource access under this architecture must undergo thorough verification.
  • Zero Trust Models: APIs will increasingly adopt zero-trust security principles, ensuring no entity is trusted by default. The future of Zero-Trust in Apigee will likely focus on increasing the security and flexibility of API management through tighter integration with identity management, real-time monitoring, and advanced threat protection technologies.
  • Enhanced Data Encryption: Future developments might include more substantial data encryption capabilities, both in transit and at rest, to protect sensitive information in compliance with Zero Trust principles.



Resiliency and Fault Tolerance

 The future of resiliency and fault tolerance in Apigee will likely involve advancements and innovations driven by evolving technological trends and user needs. Here are some key areas where we can expect Apigee to enhance its resiliency and fault tolerance capabilities.


  • Automated Failover: Future iterations of Apigee will likely have improved automated failover features, guaranteeing that traffic is redirected as quickly as possible in case of delays or outages. More advanced failure detection and failover methods could be a part of this.
  • Adaptive Traffic Routing: Future updates could include more dynamic and intelligent traffic management features. This might involve adaptive routing based on real-time performance metrics, enabling more responsive adjustments to traffic patterns and load distribution.
  • Flexible API Gateway Configurations: Future enhancements could provide more flexibility in configuring API gateways to better handle different fault scenarios. This includes custom policies for fault tolerance, enhanced error handling, and more configurable redundancy options.

Gemini AI with Apigee

The integration of Gemini AI and Apigee has the potential to significantly improve API administration by making it more intelligent, secure, and usable. By utilizing cutting-edge AI technologies, organizations can anticipate improved security, more effective operations, and a better overall user and developer experience. This integration may also open the door to future breakthroughs and capabilities as AI and API management technologies develop. If the API specifications currently available in API Hub do not satisfy your needs, you can use Gemini to create a new one simply by stating your requirements in plain English, saving considerable time in development and assessment cycles.

While you add policies during Apigee development, Gemini AI can surface the relevant policy documentation in parallel and guide you on the formats used in those policies. We can also automate query handling, chatbot-style, with Gemini AI, and use it to answer questions about the APIs available on the Apigee portal.

If an integration is already in use, we can use Gemini AI to accept inquiries from customers or clients and automate responses to the most frequently asked questions. Gemini AI can also keep replying to customers until our professionals become available.


Overview

Apigee, Google Cloud’s API management platform, plays a key role in digital transformation by securely and flexibly connecting businesses with data and services. Future advancements focus on stronger security with a “Zero Trust” approach, improved resilience through automated failover and adaptive traffic routing, and enhanced flexibility in API gateway settings. Integration with Gemini AI will make Apigee smarter, enabling automated support, policy guidance, API creation, streamlining development, and improving customer service.

Revolutionizing Work With Microsoft Copilot: A Game-Changer in AI Integration

AI is no longer a futuristic concept; it is here, transforming the way we work in real time. Organizations worldwide are making AI a strategic priority, harnessing its power to enhance efficiency, accelerate decision-making, and drive growth. In fact, generative AI adoption has skyrocketed from 55% in 2023 to 75% in 2024, contributing to an estimated global economic impact of $19.9 trillion. To stay ahead, businesses must align AI investments across applications, platforms, data, and infrastructure to maximize value and maintain a competitive edge.

“In the evolving landscape of AI, the future hinges on our ability to not just experiment, but to strategically pivot—transforming experimentation into sustainable innovation,” said Rick Villars, group vice president, Worldwide Research at IDC. “As we embrace AI, we need to prioritize relevance, urgency, and resourcefulness to forge resilient enterprises that thrive in a data-driven world.” 

Success with AI isn’t just about adopting new technology—it requires a clear vision, a strong strategy, and the right expertise to turn possibilities into real-world impact. As a Microsoft partner, Perficient is at the forefront of AI transformation, helping businesses navigate this evolving landscape. Our upcoming Microsoft Copilot video showcases exactly how AI is reshaping everyday workflows, with real-world examples of how different roles—from executives to developers—are leveraging Copilot to work smarter and more efficiently. The world is changing, and AI is advancing at an unprecedented pace. Now is the time to lead, innovate, and unlock the full potential of AI in your organization.

 

Microsoft Copilot: The AI-Powered Workplace Ally 

Microsoft Copilot is an AI-powered assistant designed to enhance productivity across Microsoft applications and services. It leverages advanced language models and integrates seamlessly with Microsoft 365, Dynamics 365, and GitHub to provide real-time assistance, automate tasks, and offer intelligent suggestions. 

Key Features:

  • Content Generation: Drafts emails, documents, and presentations with contextually relevant suggestions.
  • Data Analysis: Helps interpret data in Excel, offering insights and generating summaries for better decision-making.
  • Meeting Summaries: Provides concise overviews of meetings, highlighting key points and action items.
  • Code Assistance: Supports developers by suggesting code snippets and completing code blocks in GitHub.

Microsoft Copilot uniquely integrates web intelligence, organizational data, and user context to provide powerful assistance. It enhances workflows across roles—from sales teams managing customer interactions, to HR professionals optimizing employee support, to developers accelerating coding projects. With privacy and security at the forefront, Copilot empowers every end user to work smarter and achieve more. 

 

Real-World AI Impact: How Businesses Are Using Microsoft Copilot

Organizations are leveraging Copilot to streamline operations, enhance productivity, and drive business success. Here are the top use cases where our clients and prospects see the most value: 

  • Company Executives: Leveraging AI-driven insights to make informed strategic decisions and improve business performance.
  • Customer Service Teams: Enhancing response times and preemptively resolving issues with predictive AI.
  • HR Professionals: Automating policy updates, employee inquiries, and streamlining workforce management.
  • Legal & Operations: Simplifying contract reviews, compliance tracking, and change management processes.
  • Sales Reps: Accessing deep customer insights, automating CRM updates, and optimizing meeting preparation.
  • Developers: Speeding up application development with AI-assisted code suggestions and debugging support.

Each of these use cases is explored in depth in our upcoming Microsoft Copilot video, where we demonstrate real-world applications of AI across various industries and roles—stay tuned for a closer look at how Copilot is making a tangible impact. 

 

The Future of AI-Powered Workflows

The next phase of AI transformation will continue to reshape industries. As Microsoft’s Chairman and CEO Satya Nadella explains, “2025 will be about model-forward applications that reshape all application categories. More so than any previous platform shift, every layer of the application stack will be impacted. It’s akin to GUI, internet servers, and cloud-native databases all being introduced into the app stack simultaneously. Thirty years of change is being compressed into three years!” 

Key advancements to watch:

  • Agentic AI: AI applications will develop memory, entitlements, and action spaces, allowing them to perform complex tasks independently.
  • The End of the SaaS Age: AI agents will replace traditional SaaS models, integrating multiple platforms and automating workflows.
  • CoreAI Initiative: Microsoft’s CoreAI unit is driving next-gen AI capabilities, streamlining platforms, and enhancing AI-driven applications.

 

Partnering With Perficient for AI Transformation

Perficient and Microsoft can help your organization confidently scale AI solutions, no matter where you are on your AI journey. Our Microsoft AI solutions are designed to unlock new levels of productivity, drive innovation, fuel growth, and ensure secure AI integration across all business functions. Let’s shape the future together. Read more about our Copilot and AI capabilities here

AWS Secrets Manager – A Secure Solution for Protecting Your Data

Objective

If you are looking for a solution to securely store secrets like DB credentials, API keys, tokens, and passwords, AWS Secrets Manager is the service that comes to your rescue. Keeping secrets as plain text in your code is highly risky; storing them in AWS Secrets Manager removes that risk and gives you the capabilities described below.

AWS Secret Manager is a fully managed service that can store and manage sensitive information. It simplifies secret handling by enabling the auto-rotation of secrets to reduce the risk of compromise, monitoring the secrets for compliance, and reducing the manual effort of updating the credentials in the application after rotation.

Essential Features of AWS Secret Manager


  • Security: Secrets are encrypted using encryption keys we can manage through AWS KMS.
  • Rotation schedule: Enable rotation of credentials through scheduling to replace long-term with short-term ones.
  • Authentication and Access control: Using AWS IAM, we can control access to the secrets, control lambda rotation functions, and permissions to replicate the secrets.
  • Monitor secrets for compliance: AWS Config rules can be used to check whether secrets align with internal security and compliance standards, such as HIPAA, PCI, ISO, AICPA SOC, FedRAMP, DoD, IRAP, and OSPAR.
  • Audit and monitoring: We can use other AWS services, such as Cloud Trail for auditing and Cloud Watch for monitoring.
  • Rollback through versioning: If needed, the secret can be reverted to the previous version by moving the labels attached to that secret.
  • Pay as you go: Charged based on the number of secrets managed through the Secret manager.
  • Integration with other AWS services: Integrating with other AWS services, such as EC2, Lambda, RDS, etc., eliminates the need to hard code secrets.

AWS Secret Manager Pricing

At the time of publishing this document, AWS Secret Manager pricing is below. This might be revised in the future.
Component | Cost | Details
Secret storage | $0.40 per secret per month | Charged per month; if a secret is stored for less than a month, the cost is prorated.
API calls | $0.05 per 10,000 API calls | Charged for API interactions such as managing or retrieving secrets.

Creating a Secret

Let us get deeper into the process of creating secrets.

  1. Log in to the AWS Secret management console and select the “store a new secret” option: https://console.aws.amazon.com/secretsmanager/.
  2. On the Choose secret type page,
    1. For Secret type, select the type of database secret that you want to store:
    2. For Credentials, input the credentials for the database that are currently hardcoded in your application.
    3. For the Encryption key, choose AWS/Secrets Manager. This encryption key service is free to use.
    4. For the Database field, choose your database.
    5. Then click Next.
  3. On the Configure secret page,
    1. Provide a descriptive secret name and description.
    2. In the Resource permissions field, choose Edit permissions. Provide the policy that allows RoleToRetrieveSecretAtRuntime and Save.
    3. Then, click Next.
  4. On the Configure rotation page,
    1. Select the schedule on which you want the secret to be rotated.
    2. Click Next.
  5. On the Review page, review the details, and then Store.

Output

The secret is created and now appears in the Secrets Manager console.

We can now update the application code to fetch the secret from Secrets Manager. To do this, we remove the hardcoded credentials from the code and, depending on the programming language, add a call to Secrets Manager to retrieve the secret stored here. Depending on our requirements, we can also adjust the rotation strategy, versioning, monitoring, and so on.
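
As a hedged sketch of what that runtime call can look like with the AWS SDK for JavaScript (v3); the secret name and region below are placeholders for the values used in the steps above.

// Read the secret at runtime instead of hardcoding credentials.
const {
  SecretsManagerClient,
  GetSecretValueCommand,
} = require("@aws-sdk/client-secrets-manager");

const client = new SecretsManagerClient({ region: "us-east-1" }); // assumed region

async function getDatabaseCredentials() {
  const response = await client.send(
    new GetSecretValueCommand({ SecretId: "prod/myapp/db-credentials" }) // assumed secret name
  );
  // Secrets stored as key-value pairs come back as a JSON string.
  return JSON.parse(response.SecretString);
}

getDatabaseCredentials()
  .then((creds) => {
    // Use creds.username / creds.password to open the database connection
    // instead of keeping them in the source code.
    console.log(`Fetched credentials for user: ${creds.username}`);
  })
  .catch(console.error);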

Secret Rotation Strategy


  • Single user – This strategy updates credentials for one user in one secret. Open connections are not dropped during rotation. While rotation is in progress, there is a low risk that database calls using the newly rotated credentials are briefly denied; this can be mitigated through retry strategies. Once the rotation is complete, all new calls use the rotated credentials.
    • Use case – This strategy can be used for one-time or interactive users.
  • Alternating users – This strategy updates secret values for two users in one secret. We create the first user, and during the first rotation the rotation function clones it to create a second user. On each subsequent rotation, the rotation function alternates which user’s password it updates, so the application always has a valid set of credentials, even during rotation.
    • Use case – This is good for systems that require high availability.

Versioning of Secrets

A secret consists of the secret value and its metadata. To store multiple values in one secret, we can use JSON key-value pairs. A secret has versions that hold copies of the encrypted secret values, and AWS uses three staging labels to track them:

  • AWSCURRENT – to store current secret value.
  • AWSPREVIOUS – to hold the previous version.
  • AWSPENDING – to hold pending value during rotation.

Custom labels for versions are also possible. Secrets Manager never removes labeled versions of a secret, but unlabeled versions are considered deprecated and can be removed at any time.
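
As a hedged sketch, retrieving a version by staging label with the AWS SDK for JavaScript (v3) looks like this; it reuses the client pattern shown earlier, and the secret name is supplied by the caller.

// Fetch the version labeled AWSPREVIOUS instead of AWSCURRENT, e.g. to inspect it before a rollback.
const { GetSecretValueCommand } = require("@aws-sdk/client-secrets-manager");

async function getPreviousSecret(client, secretId) {
  const response = await client.send(
    new GetSecretValueCommand({
      SecretId: secretId,
      VersionStage: "AWSPREVIOUS", // staging label to retrieve
    })
  );
  return JSON.parse(response.SecretString);
}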

Monitoring Secrets in AWS Secret Manager

Secrets stored in AWS Secret Manager can be monitored by services provided by AWS as below.

  • Using CloudTrail – CloudTrail records all API calls to Secrets Manager as events, including secret rotation and version deletion.
  • Monitoring using CloudWatch – we can track the number of secrets in our account, secrets marked for deletion, and other metrics, and set alarms on metric changes.

Conclusion

AWS Secrets Manager offers a secure, automated, scalable solution for managing sensitive data and credentials. It reduces the risk of secret exposure and helps improve application security with minimal manual intervention. Adopting best practices around secret management can ensure compliance and minimize vulnerabilities in your applications.

 

Setting Up Virtual WAN (VWAN) in Azure Cloud: A Comprehensive Guide – I

As businesses expand their global footprint, the need for a flexible, scalable, and secure networking solution becomes paramount. Enter Azure Virtual WAN (VWAN), a cloud-based offering designed to simplify and centralize network management while ensuring top-notch performance. Let’s dive into what Azure VWAN offers and how to set it up effectively.

What is Azure Virtual WAN (VWAN)?

Azure Virtual WAN (VWAN) is a cloud-based networking solution that provides secure, seamless, and optimized connectivity across hybrid and multi-cloud environments.

It provides:

I. Flexibility for Dynamic Network Requirements

  • Adaptable Connectivity: Azure VWAN supports various connectivity options, including ExpressRoute, Site-to-Site VPN, and Point-to-Site VPN, ensuring compatibility with diverse environments like on-premises data centers, branch offices, and remote workers.
  • Scale On-Demand: As network requirements grow or change, Azure VWAN allows you to dynamically add or remove connections, integrate new virtual networks (VNets), or scale bandwidth based on traffic needs.
  • Global Reach: Azure VWAN enables connectivity across regions and countries using Microsoft’s extensive global network, ensuring that organizations with distributed operations stay connected.
  • Hybrid and Multi-Cloud Integration: Azure VWAN supports hybrid setups (on-premises + cloud) and integration with other public cloud providers, providing the flexibility to align with business strategies.

II. Improved Management with Centralized Controls

  • Unified Control Plane: Azure VWAN provides a centralized dashboard within the Azure Portal to manage all networking components, such as VNets, branches, VPNs, and ExpressRoute circuits.
  • Simplified Configuration: Automated setup and policy management make deploying new network segments, traffic routing, and security configurations easy.
  • Network Insights: Built-in monitoring and diagnostic tools offer deep visibility into network performance, allowing administrators to quickly identify and resolve issues.
  • Policy Enforcement: Azure VWAN enables consistent policy enforcement across regions and resources, improving governance and compliance with organizational security standards.

III. High Performance Leveraging Microsoft’s Global Backbone Infrastructure

  • Low Latency and High Throughput: Azure VWAN utilizes Microsoft’s global backbone network, known for its reliability and speed, to provide high-performance connectivity across regions and to Azure services.
  • Optimized Traffic Routing: Intelligent routing ensures that traffic takes the most efficient path across the network, reducing latency for applications and end users.
  • Built-in Resilience: Microsoft’s backbone infrastructure includes redundant pathways and fault-tolerant systems, ensuring high availability and minimizing the risk of network downtime.
  • Proximity to End Users: With a global footprint of Azure regions and points of presence (PoPs), Azure VWAN ensures proximity to end users, improving application responsiveness and user experience.

High-level architecture of VWAN

This diagram depicts a high-level architecture of Azure Virtual WAN and its connectivity components.

 


 

  • HQ/DC (Headquarters/Data Centre): Represents the organization’s primary data center or headquarters hosting critical IT infrastructure and services. Acts as a centralized hub for the organization’s on-premises infrastructure. Typically includes servers, storage systems, and applications that need to communicate with resources in Azure.
  • Branches: Represents the organization’s regional or local office locations. Serves as local hubs for smaller, decentralized operations. Each branch connects to Azure to access cloud-hosted resources, applications, and services and communicates with other branches or HQ/DC. The HQ/DC and branches communicate with each other and Azure resources through the Azure Virtual WAN.
  • Virtual WAN Hub: At the heart of Azure VWAN is the Virtual WAN Hub, a central node that simplifies traffic management between connected networks. This hub acts as the control point for routing and ensures efficient data flow.
  • ExpressRoute: Establishes a private connection between the on-premises network and Azure, bypassing the public internet. It uses BGP for route exchange, ensuring secure and efficient connectivity.
  • VNet Peering: Links Azure Virtual Networks directly, enabling low-latency, high-bandwidth communication.
    • Intra-Region Peering: Connects VNets within the same region.
    • Global Peering: Bridges VNets across different regions.
  • Point-to-Site (P2S) VPN: Ideal for individual users or small teams, this allows devices to securely connect to Azure resources over the internet.
  • Site-to-Site (S2S) VPN: Connects the on-premises network to Azure, enabling secure data exchange between systems.

Benefits of VWAN

  • Scalability: Expand the network effortlessly as the business grows.
  • Cost-Efficiency: Reduce hardware expenses by leveraging cloud-based solutions.
  • Global Reach: Easily connect offices and resources worldwide.
  • Enhanced Performance: Optimize data transfer paths for better reliability and speed.

Setting Up VWAN in Azure

Follow these steps to configure Azure VWAN:

Step 1: Create a Virtual WAN Resource

  • Log in to the Azure Portal and create a Virtual WAN resource. This serves as the foundation of the network architecture.

Step 2: Configure a Virtual WAN Hub

  • Create a Virtual WAN Hub to act as the central traffic manager, and configure it to meet the organization’s requirements.

Step 3: Establish Connections

  • Configure VPN Gateways for secure, encrypted connections.
  • Use ExpressRoute for private, high-performance connectivity.

Step 4: Link VNets

  • Create Azure Virtual Networks and link them to the WAN Hub. This integration ensures seamless interaction between resources across the connected networks.

Monitoring and Troubleshooting VWAN

Azure Monitor

Azure Monitor tracks performance, availability, and network health in real time and provides insights into traffic patterns, latency, and resource usage.

Network Watcher

Diagnose network issues with tools like packet capture and connection troubleshooting. Quickly identify and resolve any bottlenecks or disruptions.

Alerts and Logs

Set up alerts for critical issues such as connectivity drops or security breaches. Use detailed logs to analyze network events and maintain robust auditing.

Final Thoughts

Azure VWAN is a powerful tool for businesses looking to unify and optimize their global networking strategy. Organizations can ensure secure, scalable, and efficient connectivity by leveraging features like ExpressRoute, VNet Peering, and VPN Gateways. With the correct setup and monitoring tools, managing complex networks becomes a seamless experience.

]]>
https://blogs.perficient.com/2025/02/05/setting-up-azure-vwan/feed/ 0 376281
Migrating from MVP to Jetpack Compose: A Step-by-Step Guide for Android Developers https://blogs.perficient.com/2025/02/03/migrating-from-mvp-to-jetpack-compose-a-step-by-step-guide-for-android-developers/ https://blogs.perficient.com/2025/02/03/migrating-from-mvp-to-jetpack-compose-a-step-by-step-guide-for-android-developers/#respond Mon, 03 Feb 2025 15:30:02 +0000 https://blogs.perficient.com/?p=376701

Migrating an Android App from MVP to Jetpack Compose: A Step-by-Step Guide

Jetpack Compose is Android’s modern toolkit for building native UI. It simplifies and accelerates UI development by using a declarative approach, which is a significant shift from the traditional imperative XML-based layouts. If you have an existing Android app written in Kotlin using the MVP (Model-View-Presenter) pattern with XML layouts, fragments, and activities, migrating to Jetpack Compose can bring numerous benefits, including improved developer productivity, reduced boilerplate code, and a more modern UI architecture.

In this article, we’ll walk through the steps to migrate an Android app from MVP with XML layouts to Jetpack Compose. We’ll use a basic News App to explain in detail how to migrate all layers of the app. The app has two screens:

  1. A News List Fragment to display a list of news items.
  2. A News Detail Fragment to show the details of a selected news item.

We’ll start by showing the original MVP implementation, including the Presenters, and then migrate the app to Jetpack Compose step by step. We’ll also add error handling, loading states, and use Kotlin Flow instead of LiveData for a more modern and reactive approach.

1. Understand the Key Differences

Before diving into the migration, it’s essential to understand the key differences between the two approaches:

  • Imperative vs. Declarative UI: The traditional View system is imperative: you define the UI structure in XML and then mutate it programmatically at runtime. Jetpack Compose is declarative: you describe what the UI should look like for any given state, and Compose handles the rendering (see the short sketch after this list).
  • MVP vs. Compose Architecture: MVP separates the UI logic into Presenters and Views. Jetpack Compose encourages a more reactive and state-driven architecture, often using ViewModel and State Hoisting.
  • Fragments and Activities: In traditional Android development, Fragments and Activities are used to manage UI components. In Jetpack Compose, you can replace most Fragments and Activities with composable functions.
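
To make the imperative-versus-declarative contrast concrete, here is a minimal, self-contained Compose sketch (not part of the news app, and assuming the standard Compose runtime and Material imports) in which the UI is re-described whenever state changes instead of being mutated through view references:

@Composable
fun Counter() {
    // State is the single source of truth; Compose re-executes (recomposes) this
    // function whenever 'count' changes -- no findViewById or setText calls.
    var count by remember { mutableStateOf(0) }

    Button(onClick = { count++ }) {
        Text("Clicked $count times")
    }
}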

2. Plan the Migration

Migrating an entire app to Jetpack Compose can be a significant undertaking. Here’s a suggested approach:

  1. Start Small: Begin by migrating a single screen or component to Jetpack Compose. This will help you understand the process and identify potential challenges.
  2. Incremental Migration: Jetpack Compose is designed to work alongside traditional Views, so you can migrate your app incrementally. Use ComposeView in XML layouts or AndroidView in Compose to bridge the gap.
  3. Refactor MVP to MVVM: Jetpack Compose works well with the MVVM (Model-View-ViewModel) pattern. Consider refactoring your Presenters into ViewModels.
  4. Replace Fragments with Composable Functions: Fragments can be replaced with composable functions, simplifying navigation and UI management.
  5. Add Error Handling and Loading States: Ensure your app handles errors gracefully and displays loading states during data fetching.
  6. Use Kotlin Flow: Replace LiveData with Kotlin Flow for a more modern and reactive approach.

3. Set Up Jetpack Compose

Before starting the migration, ensure your project is set up for Jetpack Compose:

  1. Update Gradle Dependencies:
    Add the necessary Compose dependencies to your build.gradle file:

    android {
        ...
        buildFeatures {
            compose true
        }
        composeOptions {
            kotlinCompilerExtensionVersion '1.5.3'
        }
    }
    
    dependencies {
        implementation 'androidx.activity:activity-compose:1.8.0'
        implementation 'androidx.compose.ui:ui:1.5.4'
        implementation 'androidx.compose.material:material:1.5.4'
        implementation 'androidx.compose.ui:ui-tooling-preview:1.5.4'
        implementation 'androidx.lifecycle:lifecycle-viewmodel-compose:2.6.2'
        implementation 'androidx.navigation:navigation-compose:2.7.4' // For navigation
        implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.6.2' // For Flow
        implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-core:1.7.3' // For Flow
    }
  2. Enable Compose in Your Project:
    Ensure your project is using the correct Kotlin and Android Gradle plugin versions.
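
These versions are tied together: each Compose compiler release supports one specific Kotlin release. As an illustration only (verify against the official Compose-to-Kotlin compatibility map before upgrading), compiler extension 1.5.3 pairs with Kotlin 1.9.10, so the project-level build.gradle might declare:

plugins {
    id 'com.android.application' version '8.1.2' apply false   // AGP version is illustrative
    id 'org.jetbrains.kotlin.android' version '1.9.10' apply false
}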

4. Original MVP Implementation

a. News List Fragment and Presenter

The NewsListFragment displays a list of news items. The NewsListPresenter fetches the data and updates the view.
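
Both screens operate on a simple NewsItem model that the snippets reference but never define; a minimal version (shown here as an assumption, since the original code omits it) could look like this:

data class NewsItem(
    val id: Int,
    val title: String,
    val summary: String
)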

NewsListFragment.kt

class NewsListFragment : Fragment(), NewsListView {

    private lateinit var presenter: NewsListPresenter
    private lateinit var adapter: NewsListAdapter

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? {
        val view = inflater.inflate(R.layout.fragment_news_list, container, false)
        val recyclerView = view.findViewById<RecyclerView>(R.id.recyclerView)
        adapter = NewsListAdapter { newsItem -> presenter.onNewsItemClicked(newsItem) }
        recyclerView.adapter = adapter
        recyclerView.layoutManager = LinearLayoutManager(context)
        presenter = NewsListPresenter(this)
        presenter.loadNews()
        return view
    }

    override fun showNews(news: List<NewsItem>) {
        adapter.submitList(news)
    }

    override fun showLoading() {
        // Show loading indicator
    }

    override fun showError(error: String) {
        // Show error message
    }

    override fun openNewsDetail(newsItem: NewsItem) {
        // Navigation stays in the view layer, where a Context and FragmentManager are available.
        // R.id.fragment_container is the host Activity's container (name assumed).
        val detailFragment = NewsDetailFragment().apply {
            arguments = Bundle().apply { putInt("newsId", newsItem.id) }
        }
        parentFragmentManager.beginTransaction()
            .replace(R.id.fragment_container, detailFragment)
            .addToBackStack(null)
            .commit()
    }
}

NewsListPresenter.kt

class NewsListPresenter(private val view: NewsListView) {

    fun loadNews() {
        view.showLoading()
        // Simulate fetching news from a data source (e.g., API or local database)
        try {
            val newsList = listOf(
                NewsItem(id = 1, title = "News 1", summary = "Summary 1"),
                NewsItem(id = 2, title = "News 2", summary = "Summary 2")
            )
            view.showNews(newsList)
        } catch (e: Exception) {
            view.showError(e.message ?: "An error occurred")
        }
    }

    fun onNewsItemClicked(newsItem: NewsItem) {
        // The presenter has no Android Context, so it delegates navigation to the view.
        view.openNewsDetail(newsItem)
    }
}

NewsListView.kt

interface NewsListView {
    fun showNews(news: List<NewsItem>)
    fun showLoading()
    fun showError(error: String)
    fun openNewsDetail(newsItem: NewsItem)
}

b. News Detail Fragment and Presenter

The NewsDetailFragment displays the details of a selected news item. The NewsDetailPresenter fetches the details and updates the view.

NewsDetailFragment.kt

class NewsDetailFragment : Fragment(), NewsDetailView {

    private lateinit var presenter: NewsDetailPresenter

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? {
        val view = inflater.inflate(R.layout.fragment_news_detail, container, false)
        presenter = NewsDetailPresenter(this)
        val newsId = arguments?.getInt("newsId") ?: 0
        presenter.loadNewsDetail(newsId)
        return view
    }

    override fun showNewsDetail(newsItem: NewsItem) {
        view?.findViewById<TextView>(R.id.title)?.text = newsItem.title
        view?.findViewById<TextView>(R.id.summary)?.text = newsItem.summary
    }

    override fun showLoading() {
        // Show loading indicator
    }

    override fun showError(error: String) {
        // Show error message
    }
}

NewsDetailPresenter.kt

class NewsDetailPresenter(private val view: NewsDetailView) {

    fun loadNewsDetail(newsId: Int) {
        view.showLoading()
        // Simulate fetching news detail from a data source (e.g., API or local database)
        try {
            val newsItem = NewsItem(id = newsId, title = "News $newsId", summary = "Summary $newsId")
            view.showNewsDetail(newsItem)
        } catch (e: Exception) {
            view.showError(e.message ?: "An error occurred")
        }
    }
}

NewsDetailView.kt

interface NewsDetailView {
    fun showNewsDetail(newsItem: NewsItem)
    fun showLoading()
    fun showError(error: String)
}

5. Migrate to Jetpack Compose

a. Migrate the News List Fragment

Replace the NewsListFragment with a composable function. The NewsListPresenter will be refactored into a NewsListViewModel.

NewsListScreen.kt

@Composable
fun NewsListScreen(viewModel: NewsListViewModel, onItemClick: (NewsItem) -> Unit) {
    val newsState by viewModel.newsState.collectAsState()

    when (newsState) {
        is NewsState.Loading -> {
            // Show loading indicator
            CircularProgressIndicator()
        }
        is NewsState.Success -> {
            val news = (newsState as NewsState.Success).news
            LazyColumn {
                items(news) { newsItem ->
                    NewsListItem(newsItem = newsItem, onClick = { onItemClick(newsItem) })
                }
            }
        }
        is NewsState.Error -> {
            // Show error message
            val error = (newsState as NewsState.Error).error
            Text(text = error, color = Color.Red)
        }
    }
}

@Composable
fun NewsListItem(newsItem: NewsItem, onClick: () -> Unit) {
    Card(
        modifier = Modifier
            .fillMaxWidth()
            .padding(8.dp)
            .clickable { onClick() }
    ) {
        Column(modifier = Modifier.padding(16.dp)) {
            Text(text = newsItem.title, style = MaterialTheme.typography.h6)
            Text(text = newsItem.summary, style = MaterialTheme.typography.body1)
        }
    }
}

NewsListViewModel.kt

class NewsListViewModel : ViewModel() {

    private val _newsState = MutableStateFlow<NewsState>(NewsState.Loading)
    val newsState: StateFlow<NewsState> get() = _newsState

    init {
        loadNews()
    }

    private fun loadNews() {
        viewModelScope.launch {
            _newsState.value = NewsState.Loading
            try {
                // Simulate fetching news from a data source (e.g., API or local database)
                val newsList = listOf(
                    NewsItem(id = 1, title = "News 1", summary = "Summary 1"),
                    NewsItem(id = 2, title = "News 2", summary = "Summary 2")
                )
                _newsState.value = NewsState.Success(newsList)
            } catch (e: Exception) {
                _newsState.value = NewsState.Error(e.message ?: "An error occurred")
            }
        }
    }
}

sealed class NewsState {
    object Loading : NewsState()
    data class Success(val news: List<NewsItem>) : NewsState()
    data class Error(val error: String) : NewsState()
}

b. Migrate the News Detail Fragment

Replace the NewsDetailFragment with a composable function. The NewsDetailPresenter will be refactored into a NewsDetailViewModel.

NewsDetailScreen.kt

@Composable
fun NewsDetailScreen(viewModel: NewsDetailViewModel) {
    val newsState by viewModel.newsState.collectAsState()

    when (newsState) {
        is NewsDetailState.Loading -> {
            // Show loading indicator
            CircularProgressIndicator()
        }
        is NewsDetailState.Success -> {
            val newsItem = (newsState as NewsDetailState.Success).news
            Column(modifier = Modifier.padding(16.dp)) {
                Text(text = newsItem.title, style = MaterialTheme.typography.h4)
                Text(text = newsItem.summary, style = MaterialTheme.typography.body1)
            }
        }
        is NewsDetailState.Error -> {
            // Show error message
            val error = (newsState as NewsDetailState.Error).error
            Text(text = error, color = Color.Red)
        }
    }
}

NewsDetailViewModel.kt

class NewsDetailViewModel : ViewModel() {

    private val _newsState = MutableStateFlow<NewsDetailState>(NewsDetailState.Loading)
    val newsState: StateFlow<NewsDetailState> get() = _newsState

    fun loadNewsDetail(newsId: Int) {
        viewModelScope.launch {
            _newsState.value = NewsDetailState.Loading
            try {
                // Simulate fetching news detail from a data source (e.g., API or local database)
                val newsItem = NewsItem(id = newsId, title = "News $newsId", summary = "Summary $newsId")
                _newsState.value = NewsDetailState.Success(newsItem)
            } catch (e: Exception) {
                _newsState.value = NewsDetailState.Error(e.message ?: "An error occurred")
            }
        }
    }
}

// A separate state type for the detail screen, renamed from NewsState so it
// does not clash with the list screen's sealed class of the same name.
sealed class NewsDetailState {
    object Loading : NewsDetailState()
    data class Success(val news: NewsItem) : NewsDetailState()
    data class Error(val error: String) : NewsDetailState()
}

6. Set Up Navigation

Replace Fragment-based navigation with Compose navigation:

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            NewsApp()
        }
    }
}

@Composable
fun NewsApp() {
    val navController = rememberNavController()
    NavHost(navController = navController, startDestination = "newsList") {
        composable("newsList") {
            val viewModel: NewsListViewModel = viewModel()
            NewsListScreen(viewModel = viewModel) { newsItem ->
                navController.navigate("newsDetail/${newsItem.id}")
            }
        }
        composable("newsDetail/{newsId}") { backStackEntry ->
            val viewModel: NewsDetailViewModel = viewModel()
            val newsId = backStackEntry.arguments?.getString("newsId")?.toIntOrNull() ?: 0
            viewModel.loadNewsDetail(newsId)
            NewsDetailScreen(viewModel = viewModel)
        }
    }
}

7. Test and Iterate

After migrating the screens, thoroughly test the app to ensure it behaves as expected. Use Compose’s preview functionality to visualize your UI:

@Preview(showBackground = true)
@Composable
fun PreviewNewsListScreen() {
    NewsListScreen(viewModel = NewsListViewModel(), onItemClick = {})
}

@Preview(showBackground = true)
@Composable
fun PreviewNewsDetailScreen() {
    NewsDetailScreen(viewModel = NewsDetailViewModel())
}

8. Gradually Migrate the Entire App

Once you’re comfortable with the migration process, continue migrating the rest of your app incrementally. Use ComposeView and AndroidView to integrate Compose with existing XML layouts and Views during the transition.
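
As a rough sketch of that interop (the fragment, layout file, and view ID here are hypothetical, not part of the news app), a legacy screen can host Compose content through a ComposeView declared in its existing XML layout:

class LegacyNewsFragment : Fragment(R.layout.fragment_legacy_news) {
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        // R.id.compose_view refers to an <androidx.compose.ui.platform.ComposeView>
        // element declared in fragment_legacy_news.xml (names assumed).
        view.findViewById<ComposeView>(R.id.compose_view).setContent {
            MaterialTheme {
                Text("Rendered by Compose inside an XML layout")
            }
        }
    }
}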

]]>
https://blogs.perficient.com/2025/02/03/migrating-from-mvp-to-jetpack-compose-a-step-by-step-guide-for-android-developers/feed/ 0 376701
Apex Security Best Practices for Salesforce Applications https://blogs.perficient.com/2025/02/02/apex-security-practices-building-secure-salesforce-applications/ https://blogs.perficient.com/2025/02/02/apex-security-practices-building-secure-salesforce-applications/#respond Mon, 03 Feb 2025 05:51:18 +0000 https://blogs.perficient.com/?p=373874

As businesses increasingly rely on Salesforce to manage their critical data, ensuring data security has become more important than ever. Apex, Salesforce’s proprietary programming language, runs in system mode by default, bypassing object- and field-level security. To protect sensitive data, developers need to enforce strict security measures.

This blog will explore Apex security best practices, including enforcing sharing rules, field-level permissions, and user access enforcement to protect your Salesforce data.

Why Apex Security is Critical for Your Salesforce Applications

Apex’s ability to bypass security settings puts the onus on developers to implement proper Salesforce security practices. Without these protections, your Salesforce application might unintentionally expose sensitive data to unauthorized users.

By following best practices such as enforcing sharing rules, validating inputs, and using security-enforced SOQL queries, you can significantly reduce the risk of data breaches and ensure your app adheres to the platform’s security standards.

Enforcing Sharing Rules in Apex to Maintain Data Security

Sharing rules are central to controlling data access in Salesforce. Apex doesn’t automatically respect these sharing rules unless explicitly instructed to do so. Here’s how to enforce them in your Apex code:

Using with sharing in Apex Classes

  • with sharing: Ensures the current user’s sharing settings are enforced, preventing unauthorized access to records.
  • without sharing: Ignores sharing rules and is often used for administrative tasks or system-level operations where access should not be restricted.
  • inherited sharing: Inherits sharing settings from the calling class.

Best Practice: Always use with sharing unless you explicitly need to override sharing rules for specific use cases. This ensures your code complies with Salesforce security standards.

Example

public with sharing class AccountHandlerWithSharing {
    public void fetchAccounts() {
        // The 'with sharing' keyword ensures the running user's sharing settings are respected
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}

public without sharing class AccountHandlerWithoutSharing {
    public void fetchAccounts() {
        // The 'without sharing' keyword ignores sharing settings and returns all records
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}

Enforcing Object and Field-Level Permissions in Apex

Apex operates in a system context by default, bypassing object- and field-level security. You must manually enforce these security measures to ensure your code respects user access rights.

Using WITH SECURITY_ENFORCED in SOQL Queries

The WITH SECURITY_ENFORCED keyword ensures that Salesforce performs a permission check on fields and objects in your SOQL query, ensuring that only accessible data is returned.

Example

List<Account> accounts = [
    SELECT Id, Name
    FROM Account
    WHERE Industry = 'Technology'
    WITH SECURITY_ENFORCED
];

This approach guarantees that only fields and objects the current user can access are returned in your query results.

Using the stripInaccessible Method to Filter Inaccessible Data

Salesforce provides the stripInaccessible method, which removes inaccessible fields or relationships from query results. It also helps prevent runtime errors by ensuring no inaccessible fields are used in DML operations.

Example

List<Account> accounts = [SELECT Id, Name FROM Account LIMIT 1];
SObjectAccessDecision decision = Security.stripInaccessible(AccessType.READABLE, accounts);
List<Account> sanitizedAccounts = (List<Account>) decision.getRecords();

Using stripInaccessible ensures that any fields or relationships the user cannot access are stripped out of the Account records before any further processing.

Apex Managed Sharing: Programmatically Share Records

Apex Managed Sharing can be a powerful tool when you need to manage record access dynamically. This feature allows developers to programmatically share records with specific users or groups.

Example

public void shareRecord(Id recordId, Id userId) {
    CustomObject__Share share = new CustomObject__Share();
    share.ParentId = recordId;
    share.UserOrGroupId = userId;
    share.AccessLevel = 'Edit'; // Options: 'Read', 'Edit', or 'All'
    insert share;
}

This code lets you share a custom object record with a specific user and grant them Edit access. Apex Managed Sharing allows more flexible, dynamic record-sharing controls.

Security Tips for Apex and Lightning Development

Here are some critical tips for improving security in your Apex and Lightning applications:

Avoid Hardcoding IDs

Hardcoding Salesforce IDs, such as record IDs or profile IDs, can introduce security vulnerabilities and reduce code flexibility. Use dynamic retrieval to retrieve IDs, and consider using Custom Settings or Custom Metadata for more flexible and secure configurations.
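
As a hedged illustration (Integration_Setting__mdt and its Endpoint_URL__c field are hypothetical names, not objects defined in this article), configuration values can be read from Custom Metadata instead of being hardcoded:

// Integration_Setting__mdt is a hypothetical Custom Metadata Type; substitute
// whatever configuration object your org actually defines.
Integration_Setting__mdt defaultSetting = [
    SELECT Endpoint_URL__c
    FROM Integration_Setting__mdt
    WHERE DeveloperName = 'Default'
    LIMIT 1
];
String endpoint = defaultSetting.Endpoint_URL__c;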

Validate User Inputs to Prevent Security Threats

It is essential to sanitize all user inputs to prevent threats like SOQL injection and Cross-Site Scripting (XSS). Always use parameterized queries and escape characters where necessary.
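
For example, a bind variable keeps user input out of the query string entirely; this is a minimal sketch rather than code from the article:

public with sharing class AccountSearch {
    public static List<Account> findByName(String userInput) {
        // The bind variable (:userInput) is treated as data, never as query syntax,
        // which prevents SOQL injection.
        return [
            SELECT Id, Name
            FROM Account
            WHERE Name = :userInput
            WITH SECURITY_ENFORCED
        ];
    }
}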

Use stripInaccessible in DML Operations

To prevent processing inaccessible fields, always use the stripInaccessible method when handling records containing fields restricted by user permissions.

Review Sharing Contexts to Ensure Data Security

Ensure you use the correct sharing context for each class or trigger. Avoid granting unnecessary access by using with sharing for most of your classes.

Write Test Methods to Simulate User Permissions

Writing tests that simulate various user roles using System.runAs() is crucial to ensure your code respects sharing rules, field-level permissions, and other security settings.
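
A minimal sketch of such a test, assuming a 'Standard User' profile exists in the org:

@isTest
private class SharingEnforcementTest {
    @isTest
    static void runsCodeAsRestrictedUser() {
        // 'Standard User' is an assumption; use a profile that exists in your org.
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
        User u = new User(
            Alias = 'tuser',
            Email = 'tuser@example.com',
            EmailEncodingKey = 'UTF-8',
            LastName = 'Test',
            LanguageLocaleKey = 'en_US',
            LocaleSidKey = 'en_US',
            ProfileId = p.Id,
            TimeZoneSidKey = 'America/New_York',
            Username = 'tuser' + DateTime.now().getTime() + '@example.com'
        );

        System.runAs(u) {
            // Object- and field-level checks now reflect this user's permissions.
            System.assert(Schema.sObjectType.Account.isAccessible(),
                'Expected the test user to have read access to Account');
            // Invoke the code under test here, e.g. new AccountHandlerWithSharing().fetchAccounts();
        }
    }
}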

Conclusion: Enhancing Salesforce Security with Apex

Implementing Apex security best practices is essential to protect your Salesforce data. Whether you are enforcing sharing rules, respecting field-level permissions, or programmatically managing record sharing, these practices help ensure that only authorized users can access sensitive data.

When building your Salesforce applications, always prioritize security by:

  • Using with sharing where possible.
  • Implementing security-enforced queries.
  • Using tools like stripInaccessible to filter out inaccessible fields.

By adhering to these practices, you can build secure Salesforce applications that meet business requirements and ensure data integrity and compliance.

Further Reading on Salesforce Security

]]>
https://blogs.perficient.com/2025/02/02/apex-security-practices-building-secure-salesforce-applications/feed/ 0 373874
Optimizing Mobile Experiences with Experience Cloud: Reaching Customers on the Go https://blogs.perficient.com/2025/02/02/experience-cloud/ https://blogs.perficient.com/2025/02/02/experience-cloud/#respond Mon, 03 Feb 2025 03:19:05 +0000 https://blogs.perficient.com/?p=375935

Today, customers expect to interact with businesses anytime, anywhere, and on any device. With mobile usage on the rise, businesses must ensure their digital experiences are optimized for mobile. Salesforce Experience Cloud helps companies create personalized and branded digital experiences, but these experiences must be mobile-friendly to meet customer expectations.

Why Mobile Optimization Matters

Over half of all web traffic now comes from mobile devices. Whether shopping, seeking customer support, or browsing forums, more customers use their phones to interact with businesses. If your Experience Cloud sites aren’t mobile-optimized, users could face slow load times and difficult navigation. On the other hand, a smooth, mobile-friendly experience can boost customer satisfaction and increase engagement.

6 Strategies for Optimizing Mobile Experiences

Here are some effective strategies for making your Salesforce Experience Cloud site mobile-friendly:

1. Responsive Design

Responsive design ensures your site automatically adjusts to different screen sizes on a phone or desktop. Make sure images, pages, and buttons resize correctly for smaller screens. For a better experience, keep navigation simple and touch-friendly.

2. Mobile-First Design

Start with designing for mobile before scaling up to larger screens. Prioritize essential features, such as live chat or case submissions, and simplify your content so it’s easier to read and navigate on smaller screens.

3. Optimize Load Speed

Mobile users expect fast loading times. Compress images, reduce heavy scripts, and implement caching to speed up your site. Faster sites not only improve user experience but also boost your SEO rankings.

4. Touch-Friendly UI/UX

Ensure buttons are large enough to be easily tapped and that forms are simple to fill out. Consider sticky navigation bars, which allow users to access important features without scrolling back to the top.

5. Push Notifications and Alerts

Use push notifications to keep users informed and engaged. While personalized updates are valuable, avoid bombarding users with too many messages. Let users control their notification preferences to maintain a balance.

6. Mobile-Optimized Forms and Payments

Make sure forms are easy to fill out on mobile devices. For e-commerce businesses, ensure payment systems are compatible with mobile-friendly options like Apple Pay or Google Pay, giving customers a smoother checkout experience.

Visit the articles below to learn more about Experience Cloud:

]]>
https://blogs.perficient.com/2025/02/02/experience-cloud/feed/ 0 375935