Power Fx in Power Automate Desktop

Power Fx Features

Power Fx is a low-code language for expressing logic across the Microsoft Power Platform. It’s a general-purpose, strongly typed, declarative, and functional programming language expressed in human-friendly text. Makers can use Power Fx directly in an Excel-like formula bar or a Visual Studio Code text window. The “low” in low-code reflects the concise and straightforward nature of the language, which makes everyday programming tasks easy for both makers and developers.

Power Fx enables the full spectrum of development, from no-code makers without any programming knowledge to pro-code for professional developers. It enables diverse teams to collaborate and save time and effort.

Using Power Fx in Desktop Flow

To use Power Fx as the expression language in a desktop flow, you must enable the respective toggle when creating the flow through the Power Automate for desktop console.

[Screenshot: the Power Fx toggle in the desktop flow creation dialog]

Differences in Power Fx-Enabled Flows

Each Power Fx expression must start with an “=” (equals sign).

If you’re transitioning from flows where Power Fx is disabled, you might notice some differences. To streamline your experience while creating new desktop flows, here are some key concepts to keep in mind:

  • In the same fashion as Excel formulas, desktop flows that use Power Fx as their expression language use 1-based array indexing instead of 0-based indexing. For example, the expression =Index(numbersArray, 1) returns the first element of the numbersArray array.
  • Variable names are case-sensitive in desktop flows with Power Fx. For example, NewVar is different from newVar.
  • When Power Fx is enabled in a desktop flow, variable initialization is required before use. Attempting to use an uninitialized variable in Power Fx expressions results in an error.
  • The If action accepts a single conditional expression. Previously, it accepted multiple operands.
  • While flows without Power Fx enabled have the term “General value” to denote an unknown object type, Power Fx revolves around a strict type system. In Power Fx enabled flows, there’s a distinction between dynamic variables (variables whose type or value can be changed during runtime) and dynamic values (values whose type or schema is determined at runtime). To better understand this distinction, consider the following example. The dynamicVariable changes its type during runtime from a Numeric to a Boolean value, while dynamicValue is determined during runtime to be an untyped object, with its actual type being a Custom object:

With Power Fx Enabled

[Screenshot: variables in a Power Fx-enabled flow, showing dynamicVariable and dynamicValue]

With Power Fx Disabled

[Screenshot: the same variables in a flow with Power Fx disabled]

  • Values that are treated as dynamic values are:
    • Data tables
    • Custom objects with unknown schema
    • Dynamic action outputs (for example, the “Run .NET Script” action)
    • Outputs from the “Run desktop flow” action
    • Any action output without a predefined schema (for example, “Read from Excel worksheet” or “Create New List”)
  • Dynamic values are treated similarly to the Power Fx Untyped Object and usually require explicit functions to be converted into the required type (for example, Bool() and Text()). To streamline your experience, there’s an implicit conversion when using a dynamic value as an action input or as a part of a Power Fx expression. There’s no validation during authoring, but depending on the actual value during runtime, a runtime error occurs if the conversion fails.
  • A warning message stating “Deferred type provided” is presented whenever a dynamic variable is used. These warnings arise from Power Fx’s strict requirement for strong-typed schemas (strictly defined types). Dynamic variables aren’t permitted in lists, tables, or as a property for Record values.
  • By combining the Run Power Fx expression action with expressions using the Collect, Clear, ClearCollect, and Patch functions, you can emulate behavior found in the actions Add item to list and Insert row into data table, which were previously unavailable for Power Fx-enabled desktop flows. While both actions are still available, use the Collect function when working with strongly typed lists (for example, a list of files). This function ensures the list remains typed, as the Add Item to List action converts the list into an untyped object.
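
As a minimal sketch of that emulation (the variable names are illustrative), a Run Power Fx expression action could contain expressions such as:

=Collect(invoiceFiles, newInvoiceFile)   // appends an item while keeping the list strongly typed
=Patch(salesTable, Index(salesTable, 1), {Status: "Processed"})   // updates row 1 in place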

Examples

  • The expression =1 in an input field equals the numeric value 1.
  • The expression =variableName is equal to the variableName variable’s value.
  • The expression ={'prop': "value"} returns a record value equivalent to a custom object.
  • The expression =Table({'prop': "value"}) returns a Power Fx table that is equivalent to a list of custom objects.
  • The expression =[1, 2, 3, 4] creates a list of numeric values.
  • To access a value from a List, use the function Index(var, number), where var is the list’s name and number is the position of the value to be retrieved.
  • To access a data table cell using a column index, use the Index() function. =Index(Index(DataTableVar, 1), 2) retrieves the value from the cell in row 1 within column 2. =Index(DataRowVar, 1) retrieves the value from the cell in row 1.
  • Define the Collection Variable:

Give your collection a name (e.g., myCollection) in the Variable Name field.

In the Value field, define the collection. Collections in Power Automate for desktop (PAD) are essentially arrays, which you can define by enclosing the values in square brackets [ ].

1. Create a Collection of Numbers

Action: Set Variable

Variable Name: myNumberCollection

Value: [1, 2, 3, 4, 5]

2. Create a Collection of Text (Strings)

Action: Set Variable

Variable Name: myTextCollection

Value: ["Alice", "Bob", "Charlie"]

3. Create a Collection with Mixed Data Types

You can also create collections with mixed data types. For example, a collection with both numbers and strings:

Action: Set Variable

Variable Name: mixedCollection

Value: [1, "John", 42, "Doe"]
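
Once defined, these collections can be read with the same 1-based Index function described earlier. Two illustrative expressions, assuming the variable names above:

=Index(myTextCollection, 2)   // returns "Bob" (1-based indexing)
=Index(mixedCollection, 1) + Index(myNumberCollection, 5)   // returns 6 (1 + 5)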

  • To include an interpolated value in an input or a UI/web element selector, use the following syntax: Text before ${variable/expression} text after
    • Example: The total number is ${Sum(10, 20)}

If you want to use the dollar sign ($) followed by an opening curly brace ({) within a Power Fx expression or in the syntax of a UI/web element selector, and you don’t want Power Automate for desktop to treat it as string interpolation syntax, use the syntax $${ (the first dollar sign acts as an escape character). For example, $${Sum(1, 2)} produces the literal text ${Sum(1, 2)}.

Available Power Fx Functions

For the complete list of all available functions in Power Automate for desktop flows, go to Formula reference – desktop flows.

Known Issues and Limitations

  • The following actions from the standard library of automation actions aren’t currently supported:
    • Switch
    • Case
    • Default case
  • Some Power Fx functions presented through IntelliSense aren’t currently supported in desktop flows. When used, they display the following design-time error: “Parameter ‘Value’: PowerFx type ‘OptionSetValueType’ isn’t supported.”


When and When Not to Use Power Fx on Desktop

When to Use Power Fx in Power Automate Desktop

  1. Complex Logic: If you need to implement more complicated conditions, calculations, or data transformations in your flows, Power Fx can simplify the process.
  2. Integration with Power Apps: If your automations are closely tied to Power Apps and you need consistent logic between them, Power Fx can offer a seamless experience as it’s used across the Power Platform.
  3. Data Manipulation: Power Fx excels at handling data operations like string manipulation, date formatting, mathematical operations, and more. It may be helpful if your flow requires manipulating data in these ways.
  4. Reusability: Power Fx functions can be reused in different parts of your flow or other flows, providing consistency and reducing the need for redundant logic.
  5. Low-Code Approach: If you’re building solutions that require a lot of custom logic but don’t want to dive into full-fledged programming, Power Fx can be a good middle ground.

When Not to Use Power Fx in Power Automate Desktop

  1. Simple Flows: For straightforward automation tasks that don’t require complex expressions (like basic UI automation or file manipulations), using Power Fx could add unnecessary complexity. It’s better to stick with the built-in actions.
  2. Limited Support in Desktop: While Power Fx is more prevalent in Power Apps, Power Automate Desktop doesn’t fully support all Power Fx features available in other parts of the Power Platform. If your flow depends on more advanced Power Fx capabilities, it might be limited in Power Automate Desktop.
  3. Learning Curve: Power Fx has its own syntax and can take time to get used to, especially if you’re accustomed to more traditional automation methods. If you’re new to it, weigh the time it takes to learn Power Fx against simply using the built-in features in Power Automate Desktop.

Conclusion

Yes, use Power Fx if your flow needs custom logic, data transformation, or integration with Power Apps and you’re comfortable with the learning curve.

No, avoid it if your flows are relatively simple or if you’re primarily focused on automation tasks like file manipulation, web scraping, or UI automation, where Power Automate Desktop’s native features will be sufficient.

Kotlin Multiplatform vs. React Native vs. Flutter: Building Your First App

Choosing the right framework for your first cross-platform app can be challenging, especially with so many great options available. To help you decide, let’s compare Kotlin Multiplatform (KMP), React Native, and Flutter by building a simple “Hello World” app with each framework. We’ll also evaluate them across key aspects like setup, UI development, code sharing, performance, community, and developer experience. By the end, you’ll have a clear understanding of which framework is best suited for your first app.

Building a “Hello World” App

1. Kotlin Multiplatform (KMP)

Kotlin Multiplatform allows you to share business logic across platforms while using native UI components. Here’s how to build a “Hello World” app:

Steps:

  1. Set Up the Project:
    • Install Android Studio and the Kotlin Multiplatform Mobile plugin.
    • Create a new KMP project using the “Mobile Library” template.
  2. Shared Code: In the shared module, create a Greeting class with a function to return “Hello World”.
    // shared/src/commonMain/kotlin/Greeting.kt
    class Greeting {
        fun greet(): String {
            return "Hello, World!"
        }
    }
  3. Platform-Specific UIs: For Android, use Jetpack Compose or XML layouts in the androidApp module. For iOS, use SwiftUI or UIKit in the iosApp module. Android (Jetpack Compose):
    // androidApp/src/main/java/com/example/androidApp/MainActivity.kt
    class MainActivity : ComponentActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContent {
                Text(text = Greeting().greet())
            }
        }
    }

    iOS (SwiftUI):

    // iosApp/iosApp/ContentView.swift
    struct ContentView: View {
        var body: some View {
            Text(Greeting().greet())
        }
    }
  4. Run the App: Build and run the app on Android and iOS simulators/emulators.

Pros and Cons:

Pros:

  • Native performance and look.
  • Shared business logic reduces code duplication.

Cons:

  • Requires knowledge of platform-specific UIs (Jetpack Compose for Android, SwiftUI/UIKit for iOS).
  • Initial setup can be complex.

2. React Native

React Native allows you to build cross-platform apps using JavaScript and React. Here’s how to build a “Hello World” app:

Steps:

  1. Set Up the Project:
    • Install Node.js and the React Native CLI.
    • Create a new project:
      npx react-native init HelloWorldApp
  2. Write the Code: Open App.js and replace the content with the following:
    import React from 'react';
    import { Text, View } from 'react-native';
    
    const App = () => {
        return (
            <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
                <Text>Hello, World!</Text>
            </View>
        );
    };
    
    export default App;
  3. Run the App: Start the Metro bundler:
    npx react-native start

    Run the app on Android or iOS:

    npx react-native run-android
    npx react-native run-ios

Pros and Cons:

Pros:

  • Easy setup and quick development.
  • Hot reload for instant updates.

Cons:

  • Performance may suffer for complex apps due to the JavaScript bridge.
  • Limited native look and feel.

3. Flutter

Flutter is a UI toolkit for building natively compiled apps for mobile, web, and desktop using Dart. Here’s how to build a “Hello World” app:

Steps:

  1. Set Up the Project:
    • Install Flutter SDK and Android Studio/VS Code.
    • Create a new project:
      flutter create hello_world_app
  2. Write the Code: Open lib/main.dart and replace the content with the following:
    import 'package:flutter/material.dart';
    
    void main() {
        runApp(MyApp());
    }
    
    class MyApp extends StatelessWidget {
        @override
        Widget build(BuildContext context) {
            return MaterialApp(
                home: Scaffold(
                    appBar: AppBar(title: Text('Hello World App')),
                    body: Center(child: Text('Hello, World!')),
                ),
            );
        }
    }
  3. Run the App: Run the app on Android or iOS:
    flutter run

Pros and Cons:

Pros:

  • Single codebase for UI and business logic.
  • Excellent performance and rich UI components.

Cons:

  • Larger app size compared to native apps.
  • Requires learning Dart.

Comparing the Frameworks

1. Initial Setup

  • KMP: Moderate setup complexity, especially for iOS. Requires configuring Gradle files and platform-specific dependencies.
  • React Native: Easy setup with tools like Expo and React Native CLI.
  • Flutter: Smoothest setup with the Flutter CLI and flutter doctor command.

Best option: Flutter (for ease of initial setup).

2. UI Development

  • KMP: Platform-specific UIs (Jetpack Compose for Android, SwiftUI/UIKit for iOS). Offers native flexibility but requires separate UI code.
  • React Native: Declarative UI with JSX. Powerful but can feel like a middle ground between native and custom rendering.
  • Flutter: Widget-based system for consistent cross-platform UIs. Highly customizable but requires learning Dart.

Best option: A tie between KMP (for native UI flexibility) and Flutter (for cross-platform consistency).

3. Code Sharing

  • KMP: Excels at sharing business logic while allowing native UIs.
  • React Native: High code sharing but may require platform-specific code for advanced features.
  • Flutter: High code sharing for both UI and business logic but requires Dart.

Best option: Kotlin Multiplatform (for its focus on sharing business logic).
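
For context, KMP’s code sharing is built on Kotlin’s expect/actual mechanism: common code declares an API, and each platform supplies its own implementation. Here is a minimal sketch (the function and file paths are illustrative, following the project layout from the example above):

// shared/src/commonMain/kotlin/Platform.kt: common code declares the API
expect fun platformName(): String

// shared/src/androidMain/kotlin/Platform.kt: Android implementation
actual fun platformName(): String = "Android ${android.os.Build.VERSION.SDK_INT}"

// shared/src/iosMain/kotlin/Platform.kt: iOS implementation
import platform.UIKit.UIDevice
actual fun platformName(): String = UIDevice.currentDevice.systemName()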

4. Performance

  • KMP: Native performance due to native UIs and compiled shared code.
  • React Native: Good performance but can struggle with complex UIs due to the JavaScript bridge.
  • Flutter: Excellent performance, often close to native, but may not match native performance in all scenarios.

Best option: Kotlin Multiplatform (for native performance).

5. Community and Ecosystem

  • KMP: Growing community backed by JetBrains. Kotlin ecosystem is mature.
  • React Native: Large and active community with a rich ecosystem.
  • Flutter: Thriving community with strong Google support.

Best option: React Native (for its large and mature community), but Flutter is a close contender.

6. Developer Experience

  • KMP: Gentle learning curve for Kotlin developers but requires platform-specific UI knowledge.
  • React Native: Familiar for JavaScript/React developers but may require native mobile knowledge.
  • Flutter: Excellent developer experience with hot reload and comprehensive documentation.

Best option: Flutter (for its excellent developer experience and tooling).

7. AI-Assisted Development Speed

With the rise of AI tools like GitHub Copilot, ChatGPT, Gemini, Claude, and others, developers can significantly speed up app development. Let’s evaluate how each framework benefits from AI assistance:

  • KMP: AI tools can help generate Kotlin code for shared logic and even platform-specific UIs. However, the need for platform-specific knowledge may limit the speed gains.
  • React Native: JavaScript is widely supported by AI tools, making it easy to generate boilerplate code, components, and even entire screens. The large ecosystem also means AI can suggest relevant libraries and solutions.
  • Flutter: Dart is less commonly supported by AI tools compared to JavaScript, but Flutter’s widget-based system is highly structured, making it easier for AI to generate consistent and functional code.

Best option: React Native (due to JavaScript’s widespread support in AI tools).

The Resolution

There’s no one-size-fits-all answer. The best choice depends on your priorities:

    • Prioritize Performance and Native UI: Choose Kotlin Multiplatform.
    • Prioritize Speed of Development and a Large Community: Choose React Native.
    • Prioritize Ease of Use, Cross-Platform Consistency, and Fast Development: Choose Flutter.

For Your First App:

  • Simple App, Fast Development: Flutter is an excellent choice. Its ease of setup, hot reload, and comprehensive widget system will get you up and running quickly.
  • Existing Kotlin/Android Skills, Focus on Shared Logic: Kotlin Multiplatform allows you to leverage your existing knowledge while sharing a significant portion of your codebase.
  • Web Developer, Familiar with React: React Native is a natural fit, allowing you to utilize your web development skills for mobile development.

Conclusion

Each framework has its strengths and weaknesses, and the best choice depends on your team’s expertise, project requirements, and long-term goals. For your first app, consider starting with Flutter for its ease of use and fast development, React Native if you’re a web developer, or Kotlin Multiplatform if you’re focused on performance and native UIs.

Try building a simple app with each framework to see which one aligns best with your preferences and project requirements.

References

  1. Kotlin Multiplatform Documentation: https://kotlinlang.org/docs/multiplatform.html
  2. React Native Documentation: https://reactnative.dev/docs/getting-started
  3. Flutter Documentation: https://flutter.dev/docs
  4. JetBrains Blog on KMP: https://blog.jetbrains.com/kotlin/
  5. React Native Community: https://github.com/react-native-community
  6. Flutter Community: https://flutter.dev/community


Ramp Up On React/React Native In Less Than a Month

I’ve had plenty of opportunities to guide developers new to the React and React Native frameworks. While everyone is different, I wanted to provide a structured guide to help bring a fresh developer into the React fold.

Prerequisites

This introduction to React is intended for a developer who has at least some experience with JavaScript, HTML, and basic coding practices.

Ideally, this person has coded at least one project using JavaScript and HTML. This experience will aid in understanding the syntax of components, but any aspiring developer can learn from it as well.


Tiers

There are several tiers of beginner-level programmers who would like to learn React and are looking for someone like you to help them get up to speed.

Beginner with little knowledge of JavaScript and/or HTML

For a developer like this, I would recommend building introductory JavaScript and HTML knowledge, perhaps through a simple programming exercise or online instruction, before introducing them to React. You can compare JavaScript to a language they are familiar with and cover core concepts. A basic online guide should be sufficient to get them up and running with HTML.

Junior/Intermediate with some knowledge of JavaScript and/or HTML

I would go over some basics of JavaScript and HTML to make sure they have enough to grasp the syntax and terminology used in React. A supplementary course or online guide might be good for a refresher before introducing them to modern concepts.

Seasoned developer that hasn’t used React

Even if they haven’t used JavaScript or HTML much, they should be able to ramp up quickly. Reading through React documentation should be enough to jumpstart the learning process.

 

Tips and Guidelines

You can begin their React and React Native journey with the following guidelines:

React Documentation

The React developer documentation is a great place to start if the developer has absolutely no experience or is just starting out. It provides meaningful context on the differences between standard JavaScript and HTML and how React handles them. It also provides a valuable reference on available features and what you can do within the framework.

Pro tip: I recommend starting them right off with functional components. They are more widely used and often have better performance, especially with hooks. I personally find them easier to work with as well.

Class component:

class MyButton extends React.Component {
    render() {
        return <button>I'm a button</button>;
    }
}


Functional component:

const MyButton = () => {
    return (
        <button>I'm a button</button>
    )
}


The difference with such a small example isn’t very obvious, but it becomes much clearer once you introduce hooks. Hooks allow you to extract functionality into a reusable container, which lets you keep logic separate or import it into other components. There are also several built-in hooks that make life easier. Hooks always start with “use” (useState, useRef, etc.). You are also able to create custom hooks for your own logic.
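
As a minimal sketch of both ideas (the hook and component names are illustrative), here is the built-in useState hook wrapped in a custom hook:

import { useState } from 'react';

// Custom hook: extracts toggle logic into a reusable container
const useToggle = (initial = false) => {
    const [on, setOn] = useState(initial);
    const toggle = () => setOn(prev => !prev);
    return [on, toggle];
};

const MyButton = () => {
    const [on, toggle] = useToggle();
    return <button onClick={toggle}>{on ? 'On' : 'Off'}</button>;
};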

Concepts

Once they understand the basic concepts, it’s time to focus on advanced React concepts. State management is an important factor in React, covering both component-level and app-wide state. Learning widely used packages might come in handy. I recommend Redux Toolkit as it’s easy to learn yet extremely extensible. It is great for both big and small projects and offers simple to complex state management features.

Now might be a great time to point out the key differences between React and React Native. They are very similar with a few minor adjustments:

|                       | React                          | React Native                                       |
|-----------------------|--------------------------------|----------------------------------------------------|
| Layout                | HTML tags                      | Core components (View instead of div, for example) |
| Styling               | CSS                            | Style objects                                      |
| X/Y coordinate planes | Flex direction defaults to row | Flex direction defaults to column                  |
| Navigation            | URLs                           | Routes (react-navigation)                          |
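
To make the table concrete, here is a hedged sketch of the earlier button written for React Native, using core components and a style object in place of HTML and CSS:

import { View, Pressable, Text, StyleSheet } from 'react-native';

const MyButton = () => (
    <View style={styles.container}>
        <Pressable onPress={() => console.log('pressed')}>
            <Text>I'm a button</Text>
        </Pressable>
    </View>
);

const styles = StyleSheet.create({
    // Flexbox defaults to column on React Native, unlike the web
    container: { flex: 1, justifyContent: 'center', alignItems: 'center' },
});

export default MyButton;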

Tic-Tac-Toe

I would follow the React concepts with an example project. This allows the developer to see how a project is structured and how to code within the framework. Tic-Tac-Toe is a great project for a new React developer to try, to see if they understand the basic concepts.

Debugging

Debugging in Chrome is extremely useful for inspecting console logs and other output when tracking down defects. The Style Inspector is another mandatory tool for React that lets you see how styles are applied to different elements. For React Native, the documentation contains useful links to helpful tools.

Project Work

Assign the new React developer low-level bugs or feature enhancements to tackle. Closely monitoring their progress via pair programming has been extremely beneficial in my experience. It gives the new developer the opportunity to ask real-time questions and the experienced developer a chance to offer guidance, and to correct any mistakes or bad practices before they become ingrained. Merge requests should be reviewed together before approval to ensure code quality.

In Closing

These tips and tools will give a new React or React Native developer the foundation they need to contribute to projects. Obviously, the transition to React Native will be a lot smoother for a developer familiar with React, but any developer who is familiar with JavaScript/HTML should be able to pick up both quickly.

Thanks for your time and I wish you the best of luck with onboarding your new developer onto your project!

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Extending the Capabilities of Your Development Team with Visual Studio Code Extensions

Introduction

Visual Studio Code (VS Code) has become a ubiquitous tool in the software development world, prized for its speed, versatility, and extensive customization options. At its heart, VS Code is a lightweight, open-source code editor that supports a vast ecosystem of extensions. These extensions are the key to unlocking the true potential of VS Code, transforming it from a simple editor into a powerful, tailored IDE (Integrated Development Environment).

This blog post will explore the world of VS Code extensions, focusing on how they can enhance your development team’s productivity, code quality, and overall efficiency. We’ll cover everything from selecting the right extensions to managing them effectively and even creating your own custom extensions to meet specific needs.

What are Visual Studio Code Extensions?

Extensions are essentially plugins that add new features and capabilities to VS Code. They can range from simple syntax highlighting and code completion tools to more complex features like debuggers, linters, and integration with external services. The Visual Studio Code Marketplace hosts thousands of extensions, catering to virtually every programming language, framework, and development workflow imaginable.

Popular examples include Prettier for automatic code formatting, ESLint for identifying and fixing code errors, and Live Share for real-time collaborative coding.

Why Use Visual Studio Code Extensions?

The benefits of using VS Code extensions are numerous and can significantly impact your development team’s performance.

  1. Improve Code Quality: Extensions like ESLint and JSHint help enforce coding standards and identify potential errors early in the development process. This leads to more robust, maintainable, and bug-free code.
  2. Boost Productivity: Extensions like Auto Close Tag and IntelliCode automate repetitive tasks, provide intelligent code completion, and streamline your workflow. This allows developers to focus on solving complex problems rather than getting bogged down in tedious tasks.
  3. Enhance Collaboration: Extensions like Live Share enable real-time collaboration, making it easier for team members to review code, pair program, and troubleshoot issues together, regardless of their physical location.
  4. Customize Your Workflow: VS Code’s flexibility allows you to tailor your development environment to your specific needs and preferences. Extensions like Bracket Pair Colorizer and custom themes can enhance readability and create a more comfortable and efficient working environment.
  5. Stay Current: Extensions provide support for the latest technologies and frameworks, ensuring that your team can quickly adapt to new developments in the industry and leverage the best tools for the job.
  6. Save Time: By automating common tasks and providing intelligent assistance, extensions like Path Intellisense can significantly reduce the amount of time spent on mundane tasks, freeing up more time for creative problem-solving and innovation.
  7. Ensure Consistency: Extensions like EditorConfig help enforce coding standards and best practices across your team, ensuring that everyone is following the same guidelines and producing consistent, maintainable code.
  8. Enhance Debugging: Powerful debugging extensions like Debugger for Java provide advanced debugging capabilities, making it easier to identify and resolve issues quickly and efficiently.

Managing IDE Tools for Mature Software Development Teams

As software development teams grow and projects become more complex, managing IDE tools effectively becomes crucial. A well-managed IDE environment can significantly impact a team’s ability to deliver high-quality software on time and within budget.

  1. Standardization: Ensuring that all team members use the same tools and configurations reduces discrepancies, improves collaboration, and simplifies onboarding for new team members. Standardized extensions help maintain code quality and consistency, especially in larger teams where diverse setups can lead to confusion and inefficiencies.
  2. Efficiency: Streamlining the setup process for new team members allows them to get up to speed quickly. Automated setup scripts can install all necessary extensions and configurations in one go, saving time and reducing the risk of errors.
  3. Quality Control: Enforcing coding standards and best practices across the team is essential for maintaining code quality. Extensions like SonarLint can continuously analyze code quality, catching issues early and preventing bugs from making their way into production.
  4. Scalability: As your team evolves and adopts new technologies, managing IDE tools effectively facilitates the integration of new languages, frameworks, and tools. This ensures that your team can quickly adapt to new developments and leverage the best tools for the job.
  5. Security: Keeping all tools and extensions up-to-date and secure is paramount, especially for teams working on sensitive or high-stakes projects. Regularly updating extensions prevents security issues and ensures access to the latest features and security patches.

Best Practices for Managing VS Code Extensions in a Team

Effectively managing VS Code extensions within a team requires a strategic approach. Here are some best practices to consider:

  1. Establish an Approved Extension List: Create and maintain a list of extensions that are approved for use by the team. This ensures that everyone is using the same core tools and configurations, reducing inconsistencies and improving collaboration. Consider using a shared document or a dedicated tool to manage this list (see the sketch after this list).
  2. Automate Installation and Configuration: Use tools like Visual Studio Code Settings Sync or custom scripts to automate the installation and configuration of extensions and settings for all team members. This ensures that everyone has the same setup without manual intervention, saving time and reducing the risk of errors.
  3. Implement Regular Audits and Updates: Regularly review and update the list of approved extensions to add new tools, remove outdated ones, and ensure that all extensions are up-to-date with the latest security patches. This helps keep your team current with the latest developments and minimizes security risks.
  4. Provide Training and Documentation: Offer training and documentation on the approved extensions and best practices for using them. This helps ensure that all team members are proficient in using the tools and can leverage them effectively.
  5. Encourage Feedback and Collaboration: Encourage team members to provide feedback on the approved extensions and suggest new tools that could benefit the team. This fosters a culture of continuous improvement and ensures that the team is always using the best tools for the job.
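
As a concrete illustration of the approved-list idea from item 1, VS Code natively supports a workspace recommendations file that can be checked into the repository; VS Code then prompts team members to install the listed extensions. A minimal sketch (the extension IDs are examples):

// .vscode/extensions.json
{
    "recommendations": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode",
        "editorconfig.editorconfig"
    ],
    "unwantedRecommendations": []
}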

Security Considerations for VS Code Extensions

While VS Code extensions offer numerous benefits, they can also introduce security risks if not managed properly. It’s crucial to be aware of these risks and take steps to mitigate them.

  1. Verify the Source: Only install extensions from trusted sources, such as the Visual Studio Code Marketplace. Avoid downloading extensions from unknown or unverified sources, as they may contain malware or other malicious code.
  2. Review Permissions: Carefully review the permissions requested by extensions before installing them. Be cautious of extensions that request excessive permissions or access to sensitive data, as they may be attempting to compromise your security.
  3. Keep Extensions Updated: Regularly update your extensions to ensure that you have the latest security patches and bug fixes. Outdated extensions can be vulnerable to security exploits, so it’s important to keep them up-to-date.
  4. Use Security Scanning Tools: Consider using security scanning tools to automatically identify and assess potential security vulnerabilities in your VS Code extensions. These tools can help you proactively identify and address security risks before they can be exploited.

Creating Custom Visual Studio Code Extensions

In some cases, existing extensions may not fully meet your team’s specific needs. Creating custom VS Code extensions can be a powerful way to add proprietary capabilities to your IDE and tailor it to your unique workflow. One exciting area is integrating AI Chatbots directly into VS Code for code generation, documentation, and more.

  1. Identify the Need: Start by identifying the specific functionality that your team requires. This could be anything from custom code snippets and templates to integrations with internal tools and services. For this example, we’ll create an extension that allows you to highlight code, right-click, and generate documentation using a custom prompt sent to an AI Chatbot.

  2. Learn the Basics: Familiarize yourself with the Visual Studio Code Extension API and the tools required to develop extensions. The API documentation provides comprehensive guides and examples to help you get started.

  3. Set Up Your Development Environment: Install the necessary tools, such as Node.js and Yeoman, to create and test your extensions. The Yeoman generator for Visual Studio Code extensions can help you quickly scaffold a new project.

  4. Develop Your Extension: Write the code for your extension, leveraging the Visual Studio Code Extension API to add the desired functionality. Be sure to follow best practices for coding and testing to ensure that your extension is reliable, maintainable, and secure.

  5. Test Thoroughly: Test your extension in various scenarios to ensure that it works as expected and doesn’t introduce any new issues. This includes testing with different configurations, environments, and user roles.

  6. Distribute Your Extension: Once your extension is ready, you can distribute it to your team. You can either publish it to the Visual Studio Code Marketplace or share it privately within your organization. Consider using a private extension registry to manage and distribute your custom extensions securely.

Best Practices for Extension Development

Developing robust and efficient VS Code extensions requires careful attention to best practices. Here are some key considerations:

  • Resource Management:

    • Dispose of Resources: Properly dispose of any resources your extension creates, such as disposables, subscriptions, and timers. Use the context.subscriptions.push() method to register disposables, which will be automatically disposed of when the extension is deactivated.
    • Avoid Memory Leaks: Be mindful of memory usage, especially when dealing with large files or data sets. Use techniques like streaming and pagination to process data in smaller chunks.
    • Clean Up on Deactivation: Implement the deactivate() function to clean up any resources that need to be explicitly released when the extension is deactivated.
  • Asynchronous Operations:

    • Use Async/Await: Use async/await to handle asynchronous operations in a clean and readable way. This makes your code easier to understand and maintain.
    • Handle Errors: Properly handle errors in asynchronous operations using try/catch blocks. Log errors and provide informative messages to the user.
    • Avoid Blocking the UI: Ensure that long-running operations are performed in the background to avoid blocking the VS Code UI. Use vscode.window.withProgress to provide feedback to the user during long operations.
  • Security:

    • Validate User Input: Sanitize and validate any user input to prevent security vulnerabilities like code injection and cross-site scripting (XSS).
    • Secure API Keys: Store API keys and other sensitive information securely. Use VS Code’s secret storage API to encrypt and protect sensitive data.
    • Limit Permissions: Request only the necessary permissions for your extension. Avoid requesting excessive permissions that could compromise user security.
  • Performance:

    • Optimize Code: Optimize your code for performance. Use efficient algorithms and data structures to minimize execution time.
    • Lazy Load Resources: Load resources only when they are needed. This can improve the startup time of your extension.
    • Cache Data: Cache frequently accessed data to reduce the number of API calls and improve performance.
  • Code Quality:

    • Follow Coding Standards: Adhere to established coding standards and best practices. This makes your code more readable, maintainable, and less prone to errors.
    • Write Unit Tests: Write unit tests to ensure that your code is working correctly. This helps you catch bugs early and prevent regressions.
    • Use a Linter: Use a linter to automatically identify and fix code style issues. This helps you maintain a consistent code style across your project.
  • User Experience:

    • Provide Clear Feedback: Provide clear and informative feedback to the user. Use status bar messages, progress bars, and error messages to keep the user informed about what’s happening.
    • Respect User Settings: Respect user settings and preferences. Allow users to customize the behavior of your extension to suit their needs.
    • Keep it Simple: Keep your extension simple and easy to use. Avoid adding unnecessary features that could clutter the UI and confuse the user.

By following these best practices, you can develop robust, efficient, and user-friendly VS Code extensions that enhance the development experience for yourself and others.
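
As one concrete illustration of the security guidance above, VS Code provides a SecretStorage API on the extension context that encrypts values using the operating system’s credential store. A minimal sketch (the key and value are illustrative):

import * as vscode from 'vscode';

export async function activate(context: vscode.ExtensionContext) {
    // Store a token encrypted by the platform credential store
    await context.secrets.store('myExtension.apiToken', 'token-value');

    // Retrieve it later; returns undefined if it was never stored
    const token = await context.secrets.get('myExtension.apiToken');
    if (!token) {
        vscode.window.showWarningMessage('No API token configured.');
    }
}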

Example: Creating an AI Chatbot Integration for Documentation Generation

Let’s walk through creating a custom VS Code extension that integrates with an AI Chatbot to generate documentation for selected code. This example assumes you have access to an AI Chatbot API (like OpenAI’s GPT models). You’ll need an API key. Remember to handle your API key securely and do not commit it to your repository.

1. Scaffold the Extension:

First, use the Yeoman generator to create a new extension project:

yo code

2. Modify the Extension Code:

Open the generated src/extension.ts file and add the following code to create a command that sends selected code to the AI Chatbot and displays the generated documentation (this example uses the axios HTTP client, which you’ll need to add to the project with npm install axios):

import * as vscode from 'vscode';
import axios from 'axios';

export function activate(context: vscode.ExtensionContext) {
 let disposable = vscode.commands.registerCommand('extension.generateDocs', async () => {
  const editor = vscode.window.activeTextEditor;
  if (editor) {
   const selection = editor.selection;
   const selectedText = editor.document.getText(selection);

   const apiKey = 'YOUR_API_KEY'; // Replace with your actual API key
   const apiUrl = 'https://api.openai.com/v1/engines/davinci-codex/completions';
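   // Note: the engines-based endpoint above is illustrative and may be deprecated;
   // adjust the URL and request body to your AI provider's current API.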

   try {
    const response = await axios.post(
     apiUrl,
     {
      prompt: `Generate documentation for the following code:\n\n${selectedText}`,
      max_tokens: 150,
      n: 1,
      stop: null,
      temperature: 0.5,
     },
     {
      headers: {
       'Content-Type': 'application/json',
       Authorization: `Bearer ${apiKey}`,
      },
     }
    );

    const generatedDocs = response.data.choices[0].text;
    vscode.window.showInformationMessage('Generated Documentation:\n' + generatedDocs);
   } catch (error) {
    vscode.window.showErrorMessage('Error generating documentation: ' + error.message);
   }
  }
 });

 context.subscriptions.push(disposable);
}

export function deactivate() {}

3. Update package.json:

Add the following command configuration to the contributes section of your package.json file:

"contributes": {
    "commands": [
        {
            "command": "extension.generateDocs",
            "title": "Generate Documentation"
        }
    ]
}

4. Run and Test the Extension:

Press F5 to open a new VS Code window with your extension loaded. Highlight some code, right-click, and select “Generate Documentation” to see the AI-generated documentation.

Packaging and Distributing Your Custom Extension

Once you’ve developed and tested your custom VS Code extension, you’ll likely want to share it with your team or the wider community. Here’s how to package and distribute your extension, including options for local and private distribution:

1. Package the Extension:

VS Code uses the vsce (Visual Studio Code Extensions) tool to package extensions. If you don’t have it installed globally, install it using npm:

npm install -g vsce

Navigate to your extension’s root directory and run the following command to package your extension:

vsce package

This will create a .vsix file, which is the packaged extension.

2. Publish to the Visual Studio Code Marketplace:

To publish your extension to the Visual Studio Code Marketplace, you’ll need to create a publisher account and obtain a Personal Access Token (PAT). Follow the instructions on the Visual Studio Code Marketplace to set up your publisher account and generate a PAT.

Once you have your PAT, run the following command to publish your extension:

vsce publish

You’ll be prompted to enter your publisher name and PAT. After successful authentication, your extension will be published to the marketplace.

3. Share Privately:

If you prefer to share your extension privately within your organization, you can distribute the .vsix file directly to your team members. They can install the extension by running the following command in VS Code:

code --install-extension your-extension.vsix

Alternatively, you can set up a private extension registry using tools like Azure DevOps Artifacts or npm Enterprise to manage and distribute your custom extensions securely.

Conclusion

Visual Studio Code extensions are a powerful tool for enhancing the capabilities of your development environment and improving your team’s productivity, code quality, and overall efficiency. By carefully selecting, managing, and securing your extensions, you can create a tailored IDE that meets your specific needs and helps your team deliver high-quality software on time and within budget. Whether you’re using existing extensions from the marketplace or creating your own custom solutions, the possibilities are endless. Embrace the power of VS Code extensions and unlock the full potential of your development team.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

AWS Secrets Manager – A Secure Solution for Protecting Your Data

Objective

If you are looking for a solution to securely store your secrets, such as DB credentials, API keys, tokens, and passwords, AWS Secrets Manager is the service that comes to your rescue. Keeping secrets as plain text in your code is highly risky, so storing them in AWS Secrets Manager helps in the ways described below.

AWS Secrets Manager is a fully managed service that can store and manage sensitive information. It simplifies secret handling by enabling the auto-rotation of secrets to reduce the risk of compromise, monitoring the secrets for compliance, and reducing the manual effort of updating the credentials in the application after rotation.

Essential Features of AWS Secrets Manager


  • Security: Secrets are encrypted using encryption keys we can manage through AWS KMS.
  • Rotation schedule: Enable scheduled rotation of credentials to replace long-term credentials with short-term ones.
  • Authentication and access control: Using AWS IAM, we can control access to the secrets, to the Lambda rotation functions, and to the permissions for replicating secrets.
  • Monitor secrets for compliance: AWS Config rules can be used to check whether secrets align with internal security and compliance standards, such as HIPAA, PCI, ISO, AICPA SOC, FedRAMP, DoD, IRAP, and OSPAR.
  • Audit and monitoring: We can use other AWS services, such as CloudTrail for auditing and CloudWatch for monitoring.
  • Rollback through versioning: If needed, the secret can be reverted to the previous version by moving the labels attached to that secret.
  • Pay as you go: Charged based on the number of secrets managed through the Secret manager.
  • Integration with other AWS services: Integrating with other AWS services, such as EC2, Lambda, RDS, etc., eliminates the need to hard code secrets.

AWS Secrets Manager Pricing

At the time of publishing, AWS Secrets Manager pricing is as below; it may be revised in the future.

| Component | Cost | Details |
|---|---|---|
| Secret storage | $0.40 per secret per month | Charged monthly; prorated if the secret is stored for less than a month. |
| API calls | $0.05 per 10,000 API calls | Charged on API interactions such as managing or retrieving secrets. |

Creating a Secret

Let us get deeper into the process of creating secrets.

  1. Log in to the AWS Secrets Manager console at https://console.aws.amazon.com/secretsmanager/ and select the “Store a new secret” option.
  2. On the Choose secret type page,
    1. For Secret type, select the type of database secret that you want to store:
    2. For Credentials, input the database credentials that were previously hardcoded.
    3. For the Encryption key, choose AWS/Secrets Manager. This encryption key service is free to use.
    4. For the Database field, choose your database.
    5. Then click Next.
  3. On the Configure secret page,
    1. Provide a descriptive secret name and description.
    2. In the Resource permissions field, choose Edit permissions. Provide the policy that allows RoleToRetrieveSecretAtRuntime and Save.
    3. Then, click Next.
  4. On the Configure rotation page,
    1. select the schedule for which you want this to be rotated.
    2. Click Next.
  5. On the Review page, review the details, and then Store.

Output

The secret is created and appears in the Secrets Manager console.

We can now update the code to fetch the secret from Secrets Manager. For this, we remove the hardcoded credentials from the code and, depending on the code language, add a call to a function or method that retrieves the secret stored here. Depending on our requirements, we can then modify the rotation strategy, versioning, monitoring, and so on.
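
As a minimal sketch using the AWS SDK for JavaScript v3 (the secret name, region, and field names are illustrative):

import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({ region: "us-east-1" });

// Fetch the secret at runtime instead of hardcoding credentials
const response = await client.send(
    new GetSecretValueCommand({ SecretId: "prod/myapp/db-credentials" })
);
const credentials = JSON.parse(response.SecretString ?? "{}");
console.log(credentials.username); // e.g., the stored DB user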

Secret Rotation Strategy


  • Single user – This updates credentials for one user in one secret. During secret rotation, open connections are not dropped. While rotation is in progress, there is a low risk that database calls using the newly rotated credentials are denied; this can be mitigated through retry strategies. Once the rotation is completed, all new calls use the rotated credentials.
    • Use case – This strategy can be used for one-time or interactive users.
  • Alternating users – This method updates secret values for two users in one secret. We create the first user; during the first rotation, the rotation function clones it into a second user. On each subsequent rotation, the rotation function alternates which user’s password it updates. Even during rotation, the application always has a valid set of credentials.
    • Use case – This is good for systems that require high availability.

Versioning of Secrets

A secret consists of the secret value and its metadata. To store multiple values in one secret, we can use a JSON document with key-value pairs (see the example after this list). A secret has versions that hold copies of the encrypted secret value. AWS uses three staging labels:

  • AWSCURRENT – to store current secret value.
  • AWSPREVIOUS – to hold the previous version.
  • AWSPENDING – to hold pending value during rotation.

Custom labeling of the versions is also possible. AWS never removes labeled versions of secrets, but unlabeled versions are considered deprecated and can be removed at any time.
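
For instance, a multi-value database secret like the one created above is typically stored as a JSON document (the field values here are placeholders):

{
    "username": "admin",
    "password": "EXAMPLE-PASSWORD",
    "host": "mydb.cluster-example.us-east-1.rds.amazonaws.com",
    "port": 3306
}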

Monitoring Secrets in AWS Secrets Manager

Secrets stored in AWS Secrets Manager can be monitored with other AWS services, as below.

  • Using CloudTrail – CloudTrail records all API calls to Secrets Manager as events, including secret rotation and version deletion.
  • Monitoring using CloudWatch – CloudWatch can track the number of secrets in our account, secrets that are marked for deletion, and other metrics. We can also set alarms on metric changes.

Conclusion

AWS Secrets Manager offers a secure, automated, scalable solution for managing sensitive data and credentials. It reduces the risk of secret exposure and helps improve application security with minimal manual intervention. Adopting best practices around secret management can ensure compliance and minimize vulnerabilities in your applications.


Setting Up Virtual WAN (VWAN) in Azure Cloud: A Comprehensive Guide – I

As businesses expand their global footprint, the need for a flexible, scalable, and secure networking solution becomes paramount. Enter Azure Virtual WAN (VWAN), a cloud-based offering designed to simplify and centralize network management while ensuring top-notch performance. Let’s dive into what Azure VWAN offers and how to set it up effectively.

What is Azure Virtual WAN (VWAN)?

Azure Virtual WAN, or VWAN, is a cloud-based network solution that provides secure, seamless, and optimized connectivity across hybrid and multi-cloud environments.

It provides:

I. Flexibility for Dynamic Network Requirements

  • Adaptable Connectivity: Azure VWAN supports various connectivity options, including ExpressRoute, Site-to-Site VPN, and Point-to-Site VPN, ensuring compatibility with diverse environments like on-premises data centers, branch offices, and remote workers.
  • Scale On-Demand: As network requirements grow or change, Azure VWAN allows you to dynamically add or remove connections, integrate new virtual networks (VNets), or scale bandwidth based on traffic needs.
  • Global Reach: Azure VWAN enables connectivity across regions and countries using Microsoft’s extensive global network, ensuring that organizations with distributed operations stay connected.
  • Hybrid and Multi-Cloud Integration: Azure VWAN supports hybrid setups (on-premises + cloud) and integration with other public cloud providers, providing the flexibility to align with business strategies.

II. Improved Management with Centralized Controls

  • Unified Control Plane: Azure VWAN provides a centralized dashboard within the Azure Portal to manage all networking components, such as VNets, branches, VPNs, and ExpressRoute circuits.
  • Simplified Configuration: Automated setup and policy management make deploying new network segments, traffic routing, and security configurations easy.
  • Network Insights: Built-in monitoring and diagnostic tools offer deep visibility into network performance, allowing administrators to quickly identify and resolve issues.
  • Policy Enforcement: Azure VWAN enables consistent policy enforcement across regions and resources, improving governance and compliance with organizational security standards.

III. High Performance Leveraging Microsoft’s Global Backbone Infrastructure

  • Low Latency and High Throughput: Azure VWAN utilizes Microsoft’s global backbone network, known for its reliability and speed, to provide high-performance connectivity across regions and to Azure services.
  • Optimized Traffic Routing: Intelligent routing ensures that traffic takes the most efficient path across the network, reducing latency for applications and end users.
  • Built-in Resilience: Microsoft’s backbone infrastructure includes redundant pathways and fault-tolerant systems, ensuring high availability and minimizing the risk of network downtime.
  • Proximity to End Users: With a global footprint of Azure regions and points of presence (PoPs), Azure VWAN ensures proximity to end users, improving application responsiveness and user experience.

High-level architecture of VWAN

This diagram depicts a high-level architecture of Azure Virtual WAN and its connectivity components.

[Diagram: high-level Azure Virtual WAN architecture showing HQ/DC, branches, the Virtual WAN hub, and its connectivity components]

  • HQ/DC (Headquarters/Data Center): Represents the organization’s primary data center or headquarters hosting critical IT infrastructure and services. Acts as a centralized hub for the organization’s on-premises infrastructure. Typically includes servers, storage systems, and applications that need to communicate with resources in Azure.
  • Branches: Represents the organization’s regional or local office locations. Serves as local hubs for smaller, decentralized operations. Each branch connects to Azure to access cloud-hosted resources, applications, and services and communicates with other branches or HQ/DC. The HQ/DC and branches communicate with each other and Azure resources through the Azure Virtual WAN.
  • Virtual WAN Hub: At the heart of Azure VWAN is the Virtual WAN Hub, a central node that simplifies traffic management between connected networks. This hub acts as the control point for routing and ensures efficient data flow.
  • ExpressRoute: Establishes a private connection between the on-premises network and Azure, bypassing the public internet. It uses BGP for route exchange, ensuring secure and efficient connectivity.
  • VNet Peering: Links Azure Virtual Networks directly, enabling low-latency, high-bandwidth communication.
    • Intra-Region Peering: Connects VNets within the same region.
    • Global Peering: Bridges VNets across different regions.
  • Point-to-Site (P2S) VPN: Ideal for individual users or small teams, this allows devices to securely connect to Azure resources over the internet.
  • Site-to-Site (S2S) VPN: Connects the on-premises network to Azure, enabling secure data exchange between systems.

Benefits of VWAN

  • Scalability: Expand the network effortlessly as the business grows.
  • Cost-Efficiency: Reduce hardware expenses by leveraging cloud-based solutions.
  • Global Reach: Easily connect offices and resources worldwide.
  • Enhanced Performance: Optimize data transfer paths for better reliability and speed.

Setting Up VWAN in Azure

Follow these steps to configure Azure VWAN:

Step 1: Create a Virtual WAN Resource

  • Log in to the Azure Portal and create a Virtual WAN resource. This serves as the foundation of the network architecture.

Step 2: Configure a Virtual WAN Hub

  • Create the Virtual WAN Hub to act as the central traffic manager, and configure it to meet the company’s needs.

Step 3: Establish Connections

  • Configure VPN Gateways for secure, encrypted connections.
  • Use ExpressRoute for private, high-performance connectivity.

Step 4: Link VNets

  • Create Azure Virtual Networks and link them to the WAN Hub. This integration ensures seamless interaction between resources.
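For teams that prefer scripting these steps, here is a minimal Azure CLI sketch of the same flow. The resource names, region, and address prefix are placeholder assumptions, and the az network vwan/vhub commands may require the Azure CLI virtual-wan extension:

# Step 1: create the Virtual WAN resource (placeholder names throughout)
az network vwan create --name contoso-vwan --resource-group contoso-rg --location eastus

# Step 2: create the Virtual WAN hub that acts as the central traffic manager
az network vhub create --name contoso-hub --resource-group contoso-rg --location eastus --vwan contoso-vwan --address-prefix 10.100.0.0/24

# Step 4: link an existing VNet to the hub
az network vhub connection create --name contoso-vnet-connection --resource-group contoso-rg --vhub-name contoso-hub --remote-vnet contoso-vnet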

Monitoring and Troubleshooting VWAN

Azure Monitor

Azure Monitor tracks performance, availability, and network health in real time and provides insights into traffic patterns, latency, and resource usage.

Network Watcher

Diagnose network issues with tools like packet capture and connection troubleshooting. Quickly identify and resolve any bottlenecks or disruptions.

Alerts and Logs

Set up alerts for critical issues such as connectivity drops or security breaches. Use detailed logs to analyze network events and maintain robust auditing.
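As a hedged illustration, such an alert can also be scripted with the Azure CLI. The resource names, metric, and threshold below are placeholder assumptions; pick the metric that matters for your gateways:

# Alert when site-to-site VPN bandwidth (bytes/sec) drops below roughly 1 MB/s
az monitor metrics alert create --name s2s-bandwidth-alert --resource-group contoso-rg --scopes <vpn-gateway-resource-id> --condition "avg AverageBandwidth < 1000000" --description "S2S VPN bandwidth dropped unexpectedly"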

Final Thoughts

Azure VWAN is a powerful tool for businesses looking to unify and optimize their global networking strategy. Organizations can ensure secure, scalable, and efficient connectivity by leveraging features like ExpressRoute, VNet Peering, and VPN Gateways. With the correct setup and monitoring tools, managing complex networks becomes a seamless experience.

Sales Cloud to Data Cloud with No Code! https://blogs.perficient.com/2025/01/31/sales-cloud-to-data-cloud-with-no-code/ https://blogs.perficient.com/2025/01/31/sales-cloud-to-data-cloud-with-no-code/#respond Fri, 31 Jan 2025 18:15:25 +0000 https://blogs.perficient.com/?p=376326

Salesforce has been giving us a ‘No Code’ way to have Data Cloud notify Sales Cloud of changes through Data Actions and Flows.   But did you know you can go the other direction too?

The Data Cloud Ingestion API allows us to set up a ‘No Code’ way of sending changes in Sales Cloud to Data Cloud.

Why would you want to do this with the Ingestion API?

  1. You are right that we could surely set up a ‘normal’ Salesforce CRM Data Stream to pull data from Sales Cloud into Data Cloud.  This is also a ‘No Code’ way to integrate the two.  But maybe you want to do some complex filtering or logic before sending the data on to Data Cloud, where a Flow could really help.
  2. CRM Data Streams only run on a schedule, as frequently as every 10 minutes.  With the Ingestion API we can send data to Data Cloud immediately; we just need to wait until the Ingestion API job runs for that specific request.  The current wait time for the Ingestion API to run is about 3 minutes, but I have seen it run faster at times.  It is not ‘real-time’, so do not use this for ‘real-time’ use cases.  But it is faster than CRM Data Streams for incremental and smaller syncs that need better control.
  3. You could also ingest data into Data Cloud easily through an Amazon S3 bucket.  But again, here we have data in Sales Cloud that we want to get to Data Cloud with no code.
  4. We can do very cool integrations by leveraging the Ingestion API outside of Salesforce like in this video, but we want a way to use Flows (No Code!) to send data to Data Cloud.

Use Case:

You have Sales Cloud, Data Cloud and Marketing Cloud Engagement.  As a Marketing Campaign Manager you want to send an email through Marketing Cloud Engagement when a Lead fills out a certain form.

You only want to send the email if the Lead is from a certain state like ‘Minnesota’ and that Email address has ordered a certain product in the past.  The historical product data lives in Data Cloud only.  This email could come out a few minutes later and does not need to be real-time.

Solution A:

If you need to do this in near real-time, I would suggest not using the Ingestion API.  We can query the Data Cloud product data in a Flow and then update your Lead or other record in a way that triggers a ‘Journey Builder Salesforce Data Event’ in Marketing Cloud Engagement.

Solution B:

But our above requirements do not require real-time, so let’s solve this with the Ingestion API.  Since we are sending data to Data Cloud, we gain more power with the Salesforce Data Action to reference additional Data Cloud data, rather than relying on the Flow ‘Get Records’ element for all data needs.

We can build an Ingestion API Data Stream that we can use in a Salesforce Flow.  The flow can check to make sure that the Lead is from a certain state like ‘Minnesota’.  The Ingestion API can be triggered from within the flow.  Once the data lands in the DMO object in Data Cloud we can then use a ‘Data Action’ to listen for that data change, check if that Lead has purchased a certain product before and then use a ‘Data Action Target’ to push to a Journey in Marketing Cloud Engagement.  All that should occur within a couple of minutes.

Sales Cloud to Data Cloud with No Code!  Let’s do this!

Here is the base Salesforce post sharing that this is possible through Flows, but let’s go deeper for you!

The following are those deeper steps of getting the data to Data Cloud from Sales Cloud.  In my screenshots you will see data moving from a VIN (Vehicle Identification Number) custom object to a VIN DLO/DMO in Data Cloud, but the same process could be used for our ‘Lead’ use case above.

  1. Create a YAML file that we will use to define the fields in the Data Lake Object (DLO).  I put an example YAML structure at the bottom of this post.
  2. Go to Setup, Data Cloud, External Integrations, Ingestion API.   Click on ‘New’

    1. Give your new Ingestion API Source a Name.  Click on Save.
    2. In the Schema section click on the ‘Upload Files’ link to upload your YAML file.
    3. You will see a screen to preview your Schema.  Click on Save.
    4. After that is complete you will see your new Schema Object
    5. Note that at this point there is no Data Lake Object created yet.
  3. Create a new ‘Ingestion API’ Data Stream.  Go to the ‘Data Streams’ tab and click on ‘New’.   Click on the ‘Ingestion API’ box and click on ‘Next’.

    1. Select the Ingestion API that was created in Step 2 above.  Select the Schema object that is associated with it.  Click Next.
    2. Configure your new Data Lake Object by setting the Category, Primary Key and Record Modified Fields
    3. Set any Filters you want with the ‘Set Filters’ link and click on ‘Deploy’ to create your new Data Stream and the associated Data Lake Object.
    4. If you want to also create a Data Model Object (DMO) you can do that and then use the ‘Review’ button in the ‘Data Mapping’ section on the Data Stream detail page to do that mapping.  You do need a DMO to use the ‘Data Action’ feature in Data Cloud.
  4. Now we are ready to use this new Ingestion API Source in our Flow!  Yeah!
  5. Create a new ‘Start from Scratch’, ‘Record-Triggered Flow’ on the Standard or Custom object you want to use to send data to Data Cloud.
  6. Configure an Asynchronous path.  We cannot connect to this ‘Ingestion API’ from the ‘Run Immediately’ part of the Flow because this Action will be making an API call to Data Cloud.  This is similar to how we have to use a ‘Future’ call with an Apex Trigger.
  7. Once you have configured your base Flow, add the ‘Action’ to the ‘Run Asynchronously’ part of the Flow.    Select the ‘Send to Data Cloud’ Action and then map your fields to the Ingestion API inputs that are available for that ‘Ingestion API’ Data Stream you created.
  8. Save and Activate your Flow.
  9. To test, update your record in a way that will trigger your Flow to run.
  10. Go into Data Cloud and see your data has made it there by using the ‘Data Explorer’ tab.
  11. The standard Salesforce Debug Logs will show the details of your Flow steps if you need to troubleshoot something.

Congrats!

You have sent data from Sales Cloud to Data Cloud with ‘No Code’ using the Ingestion API!

Setting up the Data Action and connecting to Marketing Cloud Journey Builder is documented here to round out the use case.

Here is the base Ingestion API Documentation.

At Perficient we have experts in Sales Cloud, Data Cloud and Marketing Cloud Engagement.  Please reach out and let’s work together to reach your business goals on these platforms and others.

Example YAML Structure:


openapi: 3.0.3
components:
  schemas:
    VIN_DC:
      type: object
      properties:
        VIN_Number:
          type: string
        Description:
          type: string
        Make:
          type: string
        Model:
          type: string
        Year:
          type: number
        created:
          type: string
          format: date-time
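For context, the Flow’s ‘Send to Data Cloud’ Action is doing the HTTP work for you.  A hypothetical raw call to the Ingestion API streaming endpoint for this VIN_DC schema might look like the sketch below; the host, source API name (VIN_API), token, and field values are all illustrative assumptions, so check the Ingestion API documentation linked above for the exact endpoint shape:

curl -X POST "https://<your-tenant>.c360a.salesforce.com/api/v1/ingest/sources/VIN_API/VIN_DC" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{"data": [{"VIN_Number": "1HGCM82633A004352", "Description": "Sample vehicle", "Make": "Honda", "Model": "Accord", "Year": 2003, "created": "2025-01-31T12:00:00Z"}]}'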

Drupal CMS is here, what it means for you and your organization. https://blogs.perficient.com/2025/01/16/drupal-cms-is-here-what-it-means-for-you-and-your-organization/ https://blogs.perficient.com/2025/01/16/drupal-cms-is-here-what-it-means-for-you-and-your-organization/#respond Thu, 16 Jan 2025 14:19:32 +0000 https://blogs.perficient.com/?p=375772

In a previous blog post I discussed various content authoring approaches within Drupal and the importance of selecting the right one for your specific situation. Toward the end I mentioned a new iteration of Drupal (Starshot). It is now here: Starshot, i.e. Drupal CMS, was released on Jan 15th. As it becomes part of the Drupal ecosystem, here are 5 key areas to consider when tackling a new project or build.

 

1. What is Drupal CMS?

Drupal CMS is tooling built on top of Drupal 11 Core. It takes some of the most commonly used configurations, recipes, modules, and more, puts them into an installable package, and offers a great starting point for small to moderately complex websites and portals.
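For reference, Drupal CMS ships as a Composer project template. At the time of writing, a new project can typically be created with the single command below; verify the template name against the official drupal.org instructions, and note that the target directory name is a placeholder:

composer create-project drupal/cms my-drupal-cms-site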

 

2. What are the advantages of Drupal CMS?

As mentioned above, Drupal CMS is a pre-bundled installation of Drupal 11 Core, Contributed modules, Recipes and configuration that provides a rapid starting point for marketing teams.

The advantages include quicker time to market and easier configuration of tooling for cookie compliance, content workflows, permissions, multilingual support, and more. Drupal CMS as a product will enable marketing teams to build and maintain a web presence with limited technical staff requirements. You may be able to take advantage of an implementation partner like Perficient and have a much smaller learning curve for web editors and managers, as opposed to a completely custom build on top of Drupal Core.

The ability for a CMS to be spun up with limited customization and overhead is a big departure from traditional Drupal development, which required extensive experience and technical support. This will be a huge time and budget saver for certain situations and organizations.

Another advantage of Drupal CMS is that it is built upon the standard Drupal 11 core. This allows a site to evolve, grow, and take advantage of the more complex technical underpinnings as needed. If you start with Drupal CMS, you are not handcuffed to it, and you have the entire Drupal open source ecosystem available to you as you scale.

 

3. What are the disadvantages of Drupal CMS?

Of course, no situation is a win-win-win, so what are the tradeoffs of Drupal CMS?

The major disadvantages of Drupal CMS come to light in heavily customized or complex systems. All of the preconfigured tooling that makes a simple to moderately complex site easier on Drupal CMS can cause MORE complexity on larger or completely custom builds, as a technical team may find themselves spending unnecessary time undoing the aspects of Drupal CMS they do not need.

Another (for the time being) disadvantage of Drupal CMS is that it is built on top of Drupal 11 core. While Drupal 11 is a secure and final release, community support historically lags behind new core versions. It is worth evaluating Drupal 11 support for any contributed modules you rely on before making the decision on Drupal CMS.

 

4. Drupal 10, Drupal 11, Drupal CMS, which is the right choice?

With all of the advantages and disadvantages of the various Drupal Core and CMS versions, choosing a direction can be a big decision. When making that decision for your organization, you should evaluate three major areas. First, look at the scale of your technical team and implementation budget. A smaller team or budget would suggest evaluating Drupal CMS as a solution.

Secondly, evaluate your technical requirements. Are you building a simple website with standard content needs and workflows? Drupal CMS might be perfect. Are you building a complex B2B commerce site with extensive content, workflow and technical customizations? Drupal Core might be the right choice.

Finally, evaluate your technical requirements for any needs that may not be fully supported by Drupal 11 just yet. If you find an area that isn’t supported, it is time to evaluate the timeline for support, the timeline for your project, and the criticality of the functional gaps. This is where a well-versed and community-connected implementation partner such as Perficient can provide crucial insights to ensure the proper selection of your underlying tooling.

 

5. I am already on Drupal 7/8/9/10/11, do I need to move to Drupal CMS?

In my opinion this is highly dependent on where you currently are. If you are on Drupal 7/8, you are many versions behind, lacking support, and any upgrade is essentially a rebuild. In this case, Drupal CMS should be considered just like a new build, weighing the points above. On Drupal 9/10/11, an upgrade to Drupal 10/11 respectively might be your best bet. Drupal CMS can be layered on top of this upgrade if you feel the features fit the direction of your website, but it is important to consider all the above pros and cons when making this decision. Again, a trusted implementation partner such as Perficient can help guide and inform you and your team as you tackle these considerations!

Newman Tool and Performance Testing in Postman https://blogs.perficient.com/2025/01/16/newman-tool-and-performance-testing-in-postman/ https://blogs.perficient.com/2025/01/16/newman-tool-and-performance-testing-in-postman/#respond Thu, 16 Jan 2025 12:13:41 +0000 https://blogs.perficient.com/?p=375112

Postman is an application programming interface (API) testing tool for designing, testing, and changing existing APIs. Almost every capability a developer may need to test an API is included in Postman.

Postman simplifies the testing process for both REST APIs and SOAP web services with its robust features and intuitive interface. Whether you’re developing a new API or testing an existing one, Postman provides the tools you need to ensure your services are functioning as intended.

  • Using Postman to test the APIs offers a wide range of benefits that eventually help in the overall testing of the application. Postman’s interface is very user-friendly, which allows users to easily create and manage requests without extensive coding knowledge, making it accessible to both developers and testers.
  • Postman supports multiple protocols such as HTTP, SOAP, GraphQL, and WebSocket APIs, which ensures a versatile testing set-up for a wide range of services.
  • To automate validation of API responses under various scenarios, users can write tests in JavaScript to ensure that the API behaves as expected (see the sketch after this list).
  • Postman offers an environment management feature that enables the user to set up different environments with environment-specific variables, which makes switching between development, staging, and production settings possible without changing requests manually.
  • Postman provides options for creating collection and organization, which makes it easier to manage requests, group tests, and maintain documentation.
  • Postman supports team collaboration, which allows multiple users to work on the same collections, share requests, and provide feedback in real-time.
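As a minimal sketch of such a test script, the JavaScript below can be pasted into a request’s Tests tab; the 500 ms threshold is an arbitrary example value:

// Runs after the response arrives for the request
pm.test("Status code is 200", function () {
    // Assert on the HTTP status of the response
    pm.response.to.have.status(200);
});

pm.test("Response time is acceptable", function () {
    // Assert the response arrived within 500 ms (example threshold)
    pm.expect(pm.response.responseTime).to.be.below(500);
});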

Newman In Postman

Newman is a command-line collection runner for Postman that executes requests and checks their responses. In addition to Postman’s built-in Collection Runner, Newman can be used to initiate the requests in a Postman Collection.

Newman works well with GitHub and the npm registry. Additionally, Jenkins and other continuous integration tools can be linked to it. If every request completes successfully, Newman exits with code 0; in the case of errors, it exits with code 1. Newman is installed through npm package management, which is built on the Node.js platform.

How to install Newman

Step 1: Ensure that your system has Node.js downloaded and installed. If not, then download and install Node.js.

Step 2: Run the following command in your CLI: npm install -g newman

How to use Newman: 

Step 1: Export the Postman collection and save it to your local device.

Step 2: Click on the eye icon in the top right corner of the Postman application.

Step 3: The “MANAGE ENVIRONMENTS” window will open. Enter a variable name in the VARIABLE field and its value in INITIAL VALUE. Click on the Download as JSON button. Then, choose a location and save.

Step 4: Export the Environment to the same path where the Collection is available.

Step 5: In the command line, move from the current directory to the directory where the Collection and Environment have been saved.

Step 6: Run the command − newman run <“name of file”>. Please note that the name of the file should be in quotes.

Helpful CLI Commands to Use Newman

-h, --help: Gives information about the options available
-v, --version: To check the version
-e, --environment [file URL]: Specify the file path or URL of environment variables.
-g, --globals [file URL]: Specify the file path or URL of global variables.
-d, --iteration-data [file]: Specify the file path or URL of a data file (JSON or CSV) to use for iteration data.
-n, --iteration-count [number]: Specify the number of times for the collection to run. Use with the iteration data file.
--folder [folder name]: Specify a folder to run requests from. You can specify more than one folder by using this option multiple times, specifying one folder for each time the option is used.
--working-dir [path]: Set the path of the working directory to use while reading files with relative paths. Defaults to the current directory.
--no-insecure-file-read: Prevents reading of files located outside of the working directory.
--export-environment [path]: The path to the file where Newman will output the final environment variables file before completing a run.
--export-globals [path]: The path to the file where Newman will output the final global variables file before completing a run.
--export-collection [path]: The path to the file where Newman will output the final collection file before completing a run.
--postman-api-key [api-key]: The Postman API Key used to load resources using the Postman API.
--delay-request [number]: Specify a delay (in milliseconds) between requests.
--timeout [number]: Specify the time (in milliseconds) to wait for the entire collection run to complete execution.
--timeout-request [number]: Specify the time (in milliseconds) to wait for requests to return a response.
--timeout-script [number]: Specify the time (in milliseconds) to wait for scripts to complete execution.
--ssl-client-cert [path]: The path to the public client certificate file. Use this option to make authenticated requests.
-k, --insecure: Turn off SSL verification checks and allow self-signed SSL certificates.
--ssl-extra-ca-certs [path]: Specify additionally trusted CA certificates (PEM).
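Putting a few of these options together, a typical invocation might look like the following (the file names are placeholders for your own exported Collection and Environment):

newman run "Sample Collection.json" -e "Sample Environment.json" -n 3 --delay-request 200 --timeout-request 5000

This runs the collection three times against the specified environment, pauses 200 milliseconds between requests, and fails any request that takes longer than 5 seconds to respond.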


Performance Testing in Postman

API performance testing involves mimicking actual traffic and watching how your API behaves. It is a procedure that evaluates how well the API performs regarding availability, throughput, and response time under the simulated load.

Testing the performance of APIs can help us in:

  • Test that the API can manage the anticipated load and observe how it reacts to load variations.
  • To ensure a better user experience, optimize and enhance the API’s performance.
  • Performance testing also aids in identifying the system’s scalability and fixing bottlenecks, delays, and failures.

How to Use Postman for API Performance Testing

Step 1: Select the Postman Collection for Performance testing.

Step 2: Click on the 3 dots beside the Collection.

Step 3:  Click on the “Run Collection” option.

Step 4:  Click on the “Performance” option

Step 5: Set up the Performance test (Load Profile, Virtual User, Test Duration).

Step 6: Click on the Run button.

After completion of the run, we can also download a report in .pdf format, which summarizes how our collection ran.

A strong and adaptable method for ensuring your APIs fulfill functionality and performance requirements is to use Newman with Postman alongside performance testing. You may automate your tests and provide comprehensive reports that offer insightful information about the functionality of your API by utilizing Newman’s command-line features.

This combination facilitates faster detection and resolution of performance issues by streamlining the testing process and improving team collaboration. Using Newman with Postman will enhance your testing procedures and raise the general quality of your applications as you continue improving your API testing techniques.

Use these resources to develop dependable, strong APIs that can handle the demands of practical use, ensuring a flawless user experience.

CCaaS Migration Best Practices: Tips for moving your customer care platform to the cloud https://blogs.perficient.com/2024/12/06/ccaas-migration-best-practices-tips-for-moving-your-customer-care-platform-to-the-cloud/ https://blogs.perficient.com/2024/12/06/ccaas-migration-best-practices-tips-for-moving-your-customer-care-platform-to-the-cloud/#respond Fri, 06 Dec 2024 16:28:56 +0000 https://blogs.perficient.com/?p=373159

Migrating to a cloud-delivered Contact Center as a Service (CCaaS) solution can revolutionize how your organization delivers customer service. However, this transition requires careful planning and execution to avoid disruptions. Assuming you have selected a CCaaS platform that aligns with your organizational needs, the following best practices outline key considerations for a seamless migration.

A successful migration to CCaaS not only enhances operational efficiency and scalability but also ensures a significant improvement in service delivery, directly impacting customer satisfaction and retention. Organizations should consider the risks of not embracing modern cloud-based customer care solutions, which can include diminished customer service capabilities and potential costs due to outdated or inflexible systems. Moreover, organizations that delay this shift risk falling behind competitors who can adapt more quickly to market demands and customer needs. Thus, embarking on a well-planned migration journey is imperative for companies aiming to optimize their customer care operations and secure a competitive advantage in their respective markets.

 

  1. Physical Infrastructure Migration

Understanding your current environment is critical for a successful transition. Start with a thorough site review to document the infrastructure and identify unique user requirements. Engage with call center managers, team leaders, and power users to uncover specific needs and configured features such as whisper settings, omnichannel components, call management, etc.

Factors such as bandwidth and latency are paramount for seamless operations. Evaluate your facility’s connectivity for both on-site and remote users, ensuring it aligns with the CCaaS product requirements. Fortunately, modern CCaaS solutions such as Amazon Connect, Twilio Flex and Five9 supply agent connectivity tools to verify that workers have sufficient resources to provide good customer service over various channels.

Additionally, document call treatments and station-specific configurations like call coverage paths. Legacy components requiring continued functionality should be cataloged to prepare for integration.

 

  2. Change Management Planning

Change management is essential to mitigate risks and maximize adoption. A staged cutover strategy is recommended over a single-event migration, allowing for gradual testing and adjustments.

Develop a robust testing strategy to validate the platform’s performance under real-world conditions. Complement this with an organizational enablement strategy to train users and ensure they are comfortable with the new system. Adoption by your business units and users is one of the most critical factors which will determine the success of your CCaaS migration.

 

  3. Operational Considerations

Operational continuity is vital during migration. Start by understanding the reporting requirements for business managers to ensure no loss of visibility into critical metrics. Additionally, review monitoring processes to maintain visibility into system performance post-migration.

 

  4. Integration Planning

Integrating legacy infrastructure with the new CCaaS platform can present significant challenges. Document existing components, including FXO/FXS interfaces, Workforce Management solutions, FAX systems, wallboards, and specialty dialers. Verify that integrations comply with any regulatory requirements, such as HIPAA or FINRA.

Interactive Voice Response (IVR) systems often require specific integrations with local data sources or enterprise middleware. Assess these integrations to ensure call flows function as intended. For specialized applications, verify that they meet operational needs within the new environment.

 

  5. Fault Tolerance and Disaster Recovery

Testing fault tolerance and disaster recovery capabilities is a critical step in any CCaaS migration. Develop and execute a failsafe testing plan to ensure resilience against both premise-level and carrier-level failures. It is important to align with your IT organization’s standards for recovery time objective (RTO) and business up-time expectations. Disaster recovery plans must reflect these measures and be tested to protect against potential downtime.

 

  6. Scalability and Compliance

CCaaS solutions must scale with your business. Validate scalability by conducting load tests and documenting performance metrics. Compliance is equally important—ensure your migration adheres to industry standards like HIPAA, FedRAMP, or FINRA through thorough compliance testing and documentation.

 

Conclusion

A successful CCaaS migration hinges on meticulous planning, comprehensive testing, and strong change management. By following these best practices, you can minimize risks, ensure operational continuity, and set your organization up for long-term success with its new contact center platform. The result? An enhanced customer experience and a contact center infrastructure that grows with your business.

 

 

Don’t try to fit a Layout Builder peg in a Site Studio hole. https://blogs.perficient.com/2024/11/14/dont-try-to-fit-a-layout-builder-peg-in-a-site-studio-hole/ https://blogs.perficient.com/2024/11/14/dont-try-to-fit-a-layout-builder-peg-in-a-site-studio-hole/#respond Thu, 14 Nov 2024 19:39:04 +0000 https://blogs.perficient.com/?p=372075

How to ensure your toolset matches your vision, team and long term goals.

Seems like common sense, right? Use the right tool for the right purpose. However, in the DXP and Drupal space, we often see folks trying to fit their project to the tool and not the tool to the project.

There are many modules, profiles, and approaches to building Drupal out there, and almost all of them have their time and place. The key is knowing when to implement which and why. I am going to take a little time here and dive into one of those key decisions that we at Perficient find ourselves facing frequently, and how we work with our clients to ensure the proper approach is selected for their Drupal application.

Site Studio vs. Standard Drupal (blocks, views, content, etc.) vs. Layout Builder

I would say this is the most common area where we see confusion related to the best tooling and how to pick. To start, let’s summarize the various options (there are many more approaches available, but these are the common ones we encounter), as well as their pros and cons.

First, we have Acquia Site Studio, a low-code site management tool built on top of Drupal. And it is SLICK. It provides web-editable templates, components, helpers, and more that allow a well-trained content admin to control almost every aspect of the look and feel of the website. There are drag-and-drop editors for all templates that would traditionally be TWIG, as well as UI editors for styles, fonts, and more. This is the cadillac of low-code solutions for Drupal, but that comes with some trade-offs in terms of developer customizability and config management strategies. We have also noticed that not every content team actually utilizes the full scope of Site Studio features, which can lead to additional complexity without any benefit; but when the team is right, Site Studio is a very powerful tool.

The next option we frequently see is a standard Drupal build utilizing Content Types and Blocks to control page layouts, with WYSIWYG editors for rich content and a standard Drupal theme with SASS, TWIG templates, and so on. This is the option with the most developer familiarity, the most flexibility for custom work, and clean configuration management. The trade-off here is that most customizations will require a developer to build them out, and content editors are limited to ‘coloring between the lines’ of what was initially built. We have experienced content teams that were very satisfied with the defined controls, but also teams that felt handcuffed by the limitations and desired more UI/UX customization without deployments or developer involvement.

The third and final option we will discuss here is the standard Drupal option described above, with the addition of Layout Builder. Layout Builder is a Drupal core module that enables users to attach layouts, such as one-column, two-column, and more, to various Drupal entity types (Content, Users, etc.). These layouts then support the placement of blocks into their various regions to give users drag-and-drop flexibility over laying out their content. Layout Builder does not support full site templates or custom theme work such as site-wide CSS changes. Layout Builder can be a good middle ground for content teams not looking for the full customization and accompanying complexity of Site Studio, but desiring some level of content layout control. Layout Builder does come with some permissions and configuration management considerations. It is important to decide what is treated as content and what as configuration, as well as to define roles and permissions to ensure the right editors have access to the right level of customization.

Now that we have covered the options as well as the basic pros and cons of each, how do you know which tool is right for your team and your project? This is where we at Perficient start with a holistic review of your needs, short- and long-term goals, and the technical ability of your internal team. It is important to evaluate this honestly. Just because something has all the bells and whistles does not mean you have the team and time to utilize them; it may be a sunk cost with limited ROI. On the flip side, if you have a very technically robust team, you don’t want to handcuff them and leave them frustrated with limitations that could impact marketing opportunities and higher ROI.

Additional considerations that can help guide your choice in toolset would be future goals and initiatives. Is a rebrand coming soon? Is your team going to quickly expand with more technical staff? These might point towards Site Studio as the right choice. Is your top priority consistency and limiting unnecessary customizations? Then standard structured content might be the best approach. Do you want to be able to customize your site, but just don’t have the time or budget to undertake Site Studio? Layout Builder might be something you should look at closely.

Perficient starts these considerations in the first discussions with our potential clients and continues to guide them through the sales and estimation process to ensure the right basic Drupal tooling is selected. This guidance then continues through implementation as we inform stakeholders about the best toolsets beyond the core systems. In future articles we will discuss the advantages and disadvantages of various SSO, DAM, analytics, and Drupal module solutions, as well as the new Starshot Drupal initiative and how it will impact the planning of your next Drupal build!

Agentforce Success Starts with Salesforce Data Cloud https://blogs.perficient.com/2024/09/18/agentforce-success-starts-with-salesforce-data-cloud/ https://blogs.perficient.com/2024/09/18/agentforce-success-starts-with-salesforce-data-cloud/#comments Wed, 18 Sep 2024 13:45:48 +0000 https://blogs.perficient.com/?p=369366

In today’s hyper-connected world, organizations are racing to provide their customers with personalized, seamless experiences across every channel. For companies rolling out Agentforce—a cutting-edge Salesforce-based solution for agents, brokers, or any field sales team—having a robust data foundation is crucial. This is where Salesforce Data Cloud shines. By integrating Salesforce Data Cloud into your Agentforce strategy, you can empower your agents with the right insights to better serve customers, close more deals, and enhance operational efficiency.

Here are seven reasons why Salesforce Data Cloud is the key to a successful Agentforce rollout:

1. Unified Customer Data

Salesforce Data Cloud is designed to be the central hub for customer data across all systems. It brings together data from various sources—CRM, social media, marketing platforms, transactional data, and more—into a single, unified profile. For Agentforce, this means agents will have a 360-degree view of each customer, allowing them to engage in more personalized conversations.

Agents can see customer preferences, past interactions, purchase history, and predictive insights in one dashboard. Whether your team is prospecting or assisting existing clients, having this level of insight is invaluable for delivering timely and relevant service.

2. Real-Time Insights for Informed Decision-Making

Data is only valuable if it’s actionable. With Salesforce Data Cloud, Agentforce users gain real-time insights powered by AI and predictive analytics. These insights help agents make data-driven decisions in the moment—whether it’s offering an upsell, adjusting strategies for closing a deal, or tailoring responses to specific client needs.

For example, if an agent notices that a high-value customer is interacting less with your services, the system could flag this and provide recommendations for proactive outreach. This ability to respond in real-time can significantly enhance client retention and satisfaction.

3. Seamless Integration with Existing Systems

Salesforce Data Cloud integrates seamlessly with your existing tools and platforms, whether they are part of the Salesforce ecosystem or external. As Agentforce often involves using multiple apps—like financial systems, call center tools, and communication platforms—Salesforce Data Cloud serves as the glue that binds them together.

This integration helps ensure that agents have accurate, up-to-date information at their fingertips, regardless of where the data originates. The result is a smoother workflow, faster responses, and improved customer experiences.

Mulesoft can be used to bring in data from API-based external systems.  Also, no-ETL sharing can allow access to data lakes like Snowflake and Databricks.

4. Enhanced Personalization Through AI

The power of AI-driven personalization is one of Salesforce Data Cloud’s most compelling features. By leveraging Einstein AI, agents can use predictive analytics to forecast customer needs and behaviors. For Agentforce, this means providing agents with the capability to engage in highly targeted, context-rich interactions that feel tailored to each individual client.

Imagine an insurance agent who, based on data trends, receives a suggestion to recommend a particular product to a customer just before they need it. This level of personalization doesn’t just boost sales—it strengthens customer loyalty and builds trust in your brand.

5. Improved Collaboration Across Teams

In many organizations, the challenge isn’t just managing customer data but ensuring that different departments can effectively collaborate around it. Salesforce Data Cloud’s unified platform allows for better cross-team collaboration. Marketing, sales, service, and IT teams can all access the same customer data, fostering improved communication and aligned strategies.

In the Agentforce environment, this translates to faster handoffs between teams, consistent messaging, and the ability to serve customers holistically. Agents no longer operate in silos but as part of a unified effort to deliver exceptional customer service.

6. Scalable and Future-Proof

As your Agentforce team grows and your business scales, Salesforce Data Cloud ensures that your data infrastructure can keep up. The platform is built to handle vast amounts of data while maintaining fast processing speeds and real-time insights. It’s also highly customizable, meaning you can tailor it to meet the evolving needs of your team and business processes.

Whether you’re adding new agents, expanding into new markets, or launching new products, Salesforce Data Cloud provides the scalability and flexibility needed to support your growth.

7. Enhanced Security and Compliance

For organizations dealing with sensitive customer data—like in insurance, real estate, or financial services—security is paramount. Salesforce Data Cloud is designed with enterprise-grade security features, ensuring that your data is protected at all times. Additionally, the platform is compliant with major global privacy regulations such as GDPR and CCPA, which is critical for industries where data privacy is a top priority.

For Agentforce, this means you can focus on rolling out your strategy with confidence, knowing that your customer data is secure and your organization remains compliant with the latest regulations.

Don’t DIY (do it yourself).  Focus on running your business and let Agentforce and Data Cloud wow your customers.

Unlock the Full Potential of Agentforce

Salesforce Data Cloud is the key to unlocking the full potential of your Agentforce rollout. By centralizing customer data, providing real-time insights, enabling AI-driven personalization, and fostering cross-team collaboration, it empowers your agents to deliver exceptional service and drive business success. As your organization grows and your customer base expands, Salesforce Data Cloud offers the scalability and security needed to future-proof your operations.

If you’re looking to ensure your Agentforce rollout is a success, implementing and integrating Salesforce Data Cloud should be at the top of your strategy. With the right data infrastructure in place, your agents will be equipped to meet customer needs with precision, agility, and a personalized touch.

Stay Informed About Agentforce and More! 

Learn more about Salesforce’s new Agentic AI Platform and other topics by browsing our Salesforce blog site.

Perficient + Salesforce  

We are a Salesforce Summit Partner with more than two decades of experience delivering digital solutions in the manufacturing, automotive, healthcare, financial services, and high-tech industries. Our team has deep expertise in all Salesforce Clouds and products, artificial intelligence, DevOps, and specialized domains to help you reap the benefits of implementing Salesforce solutions.   
