Power Fx is a low-code language for expressing logic across the Microsoft Power Platform. It's a general-purpose, strongly typed, declarative, and functional programming language expressed in human-friendly text. Makers can use Power Fx directly in an Excel-like formula bar or a Visual Studio Code text window. The "low" in low-code comes from the language's concise and straightforward nature, which makes everyday programming tasks easy for both makers and developers.
Power Fx enables the full spectrum of development, from no-code makers without any programming knowledge to pro-code professional developers, allowing diverse teams to collaborate and save time and effort.
To use Power Fx as the expression language in a desktop flow, you must enable the corresponding toggle when creating the flow through the Power Automate for desktop console.
Each Power Fx expression must start with an "=" (equals sign).
If you’re transitioning from flows where Power Fx is disabled, you might notice some differences. To streamline your experience while creating new desktop flows, here are some key concepts to keep in mind:
With Power Fx Disabled
Give your collection a name (e.g., myCollection) in the Variable Name field.
In the Value field, define the collection. Collections in Power Automate for desktop (PAD) are essentially arrays, which you can define by enclosing the values in square brackets [ ].
Action: Set Variable
Variable Name: myNumberCollection
Value: [1, 2, 3, 4, 5]
Action: Set Variable
Variable Name: myTextCollection
Value: ["Alice", "Bob", "Charlie"]
You can also create collections with mixed data types. For example, a collection with both numbers and strings:
Action: Set Variable
Variable Name: mixedCollection
Value: [1, "John", 42, "Doe"]
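With Power Fx enabled, the same kind of collection is built with an expression that starts with "=". The following is a hedged sketch rather than official syntax; the exact table form you need depends on how later actions consume the values:
Action: Set Variable
Variable Name: myNumberCollection
Value: =[1, 2, 3, 4, 5]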
If you want to use a dollar sign ($) followed by an opening curly brace ({) within a Power Fx expression or in the syntax of a UI/web element selector, and you don't want Power Automate for desktop to treat it as string interpolation syntax, use this syntax: $${ (the first dollar sign acts as an escape character).
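For example (an illustrative selector fragment, not taken from the product documentation), a selector that must contain the literal text ${item} would be written as $${item} so that Power Automate for desktop leaves it as-is instead of trying to interpolate it.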
For the complete list of all available functions in Power Automate for desktop flows, go to Formula reference – desktop flows.
Should you use Power Fx in your desktop flows? Yes, use it if your flow needs custom logic, data transformation, or integration with Power Apps and you're comfortable with the learning curve.
No, avoid it if your flows are relatively simple or if you’re primarily focused on automation tasks like file manipulation, web scraping, or UI automation, where Power Automate Desktop’s native features will be sufficient.
Choosing the right framework for your first cross-platform app can be challenging, especially with so many great options available. To help you decide, let’s compare Kotlin Multiplatform (KMP), React Native, and Flutter by building a simple “Hello World” app with each framework. We’ll also evaluate them across key aspects like setup, UI development, code sharing, performance, community, and developer experience. By the end, you’ll have a clear understanding of which framework is best suited for your first app.
Kotlin Multiplatform allows you to share business logic across platforms while using native UI components. Here’s how to build a “Hello World” app:
In the shared module, create a Greeting class with a function that returns “Hello World”:
// shared/src/commonMain/kotlin/Greeting.kt
class Greeting {
    fun greet(): String {
        return "Hello, World!"
    }
}
For Android, consume the shared code in the androidApp module. For iOS, use SwiftUI or UIKit in the iosApp module.
Android (Jetpack Compose):
// androidApp/src/main/java/com/example/androidApp/MainActivity.kt
// Imports below assume a Jetpack Compose (Material 3) setup; adjust to your project.
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.material3.Text

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            Text(text = Greeting().greet())
        }
    }
}
iOS (SwiftUI):
// iosApp/iosApp/ContentView.swift
// The import name of the shared KMP module depends on your framework configuration.
import SwiftUI
import shared

struct ContentView: View {
    var body: some View {
        Text(Greeting().greet())
    }
}
Pros:
Cons:
React Native allows you to build cross-platform apps using JavaScript and React. Here’s how to build a “Hello World” app:
npx react-native init HelloWorldApp
Open App.js and replace the content with the following:
import React from 'react';
import { Text, View } from 'react-native';

const App = () => {
  return (
    <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
      <Text>Hello, World!</Text>
    </View>
  );
};

export default App;
npx react-native start
Run the app on Android or iOS:
npx react-native run-android
npx react-native run-ios
Pros:
Cons:
Flutter is a UI toolkit for building natively compiled apps for mobile, web, and desktop using Dart. Here’s how to build a “Hello World” app:
flutter create hello_world_app
Open lib/main.dart and replace the content with the following:
import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(title: Text('Hello World App')),
        body: Center(child: Text('Hello, World!')),
      ),
    );
  }
}
flutter run
Pros:
Cons:
You can verify your environment setup with the flutter doctor command.
Best option for setup: Flutter (for ease of initial setup).
Best option for UI development: a tie between KMP (for native UI flexibility) and Flutter (for cross-platform consistency).
Best option for code sharing: Kotlin Multiplatform (for its focus on sharing business logic).
Best option for performance: Kotlin Multiplatform (for native performance).
Best option for community: React Native (for its large and mature community), but Flutter is a close contender.
Best option for developer experience: Flutter (for its excellent developer experience and tooling).
With the rise of AI tools like GitHub Copilot, ChatGPT, Gemini, and Claude, developers can significantly speed up app development. Let’s evaluate how each framework benefits from AI assistance:
Best option for AI assistance: React Native (due to JavaScript’s widespread support in AI tools).
There’s no one-size-fits-all answer. The best choice depends on your priorities:
Each framework has its strengths and weaknesses, and the best choice depends on your team’s expertise, project requirements, and long-term goals. For your first app, consider starting with Flutter for its ease of use and fast development, React Native if you’re a web developer, or Kotlin Multiplatform if you’re focused on performance and native UIs.
Try building a simple app with each framework to see which one aligns best with your preferences and project requirements.
I’ve had plenty of opportunities to guide developers new to the React and React Native frameworks. While everyone is different, I wanted to provide a structured guide to help bring a fresh developer into the React fold.
This introduction to React is intended for a developer who has at least some experience with JavaScript, HTML, and basic coding practices.
Ideally, this person has coded at least one project using JavaScript and HTML. This experience will aid in understanding the syntax of components, but any aspiring developer can learn from it as well.
There are several tiers of beginner-level programmers who would like to learn React and are looking for someone like you to help them get up to speed.
For a developer like this, I would recommend building introductory JavaScript and HTML knowledge, perhaps through a simple programming exercise or online instruction, before introducing them to React. You can compare JavaScript to a language they are familiar with and cover core concepts. A basic online guide should be sufficient to get them up and running with HTML.
I would go over some basics of JavaScript and HTML to make sure they have enough to grasp the syntax and terminology used in React. A supplementary course or online guide might be good as a refresher before introducing them to modern concepts.
Even if they haven’t used JavaScript or HTML much, they should be able to ramp up quickly. Reading through React documentation should be enough to jumpstart the learning process.
You can begin their React and React Native journey with the following guidelines:
The React developer documentation is a great place to start if the developer has absolutely no experience or is just starting out. It provides meaningful context in the differences between standard JavaScript and HTML and how React handles them. It also provides a valuable reference on available features and what you can do within the framework.
Pro tip: I recommend starting them right off with functional components. They are more widely used and often have better performance, especially with hooks. I personally find them easier to work with as well.
Class component:
class MyButton extends React.Component {
  render() {
    return <button>I'm a button</button>;
  }
}
Functional component:
const MyButton = () => {
  return (
    <button>I'm a button</button>
  );
};
The difference with such a small example isn’t very obvious, but it becomes much more pronounced once you introduce hooks. Hooks allow you to extract functionality into a reusable container, which keeps logic separate and lets you import it into other components. There are also several built-in hooks that make life easier. Hooks always start with “use” (useState, useRef, etc.), and you can also create custom hooks for your own logic.
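As a small illustration (a sketch, not from the original guide), here is a custom hook that wraps useState to provide toggle behavior, plus a functional component that uses it:

import { useState } from 'react';

// Custom hook: encapsulates simple on/off state so any component can reuse it
const useToggle = (initialValue = false) => {
  const [value, setValue] = useState(initialValue);
  const toggle = () => setValue((current) => !current);
  return [value, toggle];
};

// Functional component consuming the hook
const ToggleButton = () => {
  const [isOn, toggle] = useToggle();
  return <button onClick={toggle}>{isOn ? 'On' : 'Off'}</button>;
};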
Once they understand the basic concepts, it’s time to focus on advanced React topics. State management is an important part of React, covering both component-level and app-wide state. Learning widely used packages can come in handy; I recommend Redux Toolkit because it’s easy to learn yet extremely extensible. It is great for both big and small projects and offers simple to complex state management features.
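Here is a minimal Redux Toolkit sketch (the slice and store names are illustrative) showing app-wide state:

import { createSlice, configureStore } from '@reduxjs/toolkit';

// A slice bundles the state shape, reducers, and generated actions for one feature
const counterSlice = createSlice({
  name: 'counter',
  initialState: { value: 0 },
  reducers: {
    increment: (state) => { state.value += 1; },
  },
});

export const { increment } = counterSlice.actions;

// App-wide store; components read it via react-redux hooks such as useSelector/useDispatch
export const store = configureStore({
  reducer: { counter: counterSlice.reducer },
});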
Now might be a great time to point out the key differences between React and React Native. They are very similar with a few minor adjustments:
Concept | React | React Native |
---|---|---|
Layout | Uses HTML tags | Uses “core components” (View instead of div, for example) |
Styling | CSS | Style objects |
X/Y Coordinate Planes | Flex direction: row | Flex direction: column |
Navigation | URLs | Routes (react-navigation) |
I would follow the React concepts with an example project. This allows the developer to see how a project is structured and how to code within the framework. Tic-Tac-Toe is a great first project for a new React developer to try, to see if they understand the basic concepts.
Debugging in Chrome is extremely useful for console logs and other logging that helps with defects. The style inspector is another essential tool for React that lets you see how styles are applied to different elements. For React Native, the documentation contains useful links to helpful tools.
Assign the new React developer low-level bugs or feature enhancements to tackle. Closely monitoring their progress via pair programming has been extremely beneficial in my experience. It provides the opportunity to ask real-time questions to which the experienced developer can offer guidance, and it is a chance to correct any mistakes or bad practices before they become ingrained. Merge requests should be reviewed together before approval to ensure code quality.
These tips and tools will give a new React or React Native developer the skills they need to contribute to projects. Obviously, the transition to React Native will be a lot smoother for a developer already familiar with React, but any developer who knows JavaScript and HTML should be able to pick up both quickly.
Thanks for your time and I wish you the best of luck with onboarding your new developer onto your project!
For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!
Visual Studio Code (VS Code) has become a ubiquitous tool in the software development world, prized for its speed, versatility, and extensive customization options. At its heart, VS Code is a lightweight, open-source code editor that supports a vast ecosystem of extensions. These extensions are the key to unlocking the true potential of VS Code, transforming it from a simple editor into a powerful, tailored IDE (Integrated Development Environment).
This blog post will explore the world of VS Code extensions, focusing on how they can enhance your development team’s productivity, code quality, and overall efficiency. We’ll cover everything from selecting the right extensions to managing them effectively and even creating your own custom extensions to meet specific needs.
Extensions are essentially plugins that add new features and capabilities to VS Code. They can range from simple syntax highlighting and code completion tools to more complex features like debuggers, linters, and integration with external services. The Visual Studio Code Marketplace hosts thousands of extensions, catering to virtually every programming language, framework, and development workflow imaginable.
Popular examples include Prettier for automatic code formatting, ESLint for identifying and fixing code errors, and Live Share for real-time collaborative coding.
The benefits of using VS Code extensions are numerous and can significantly impact your development team’s performance.
As software development teams grow and projects become more complex, managing IDE tools effectively becomes crucial. A well-managed IDE environment can significantly impact a team’s ability to deliver high-quality software on time and within budget.
Effectively managing VS Code extensions within a team requires a strategic approach. Here are some best practices to consider:
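One practical mechanism is a workspace .vscode/extensions.json file, which prompts VS Code to recommend a vetted set of extensions to everyone who opens the repository (the extension IDs below are common examples, not a required list):

{
  "recommendations": [
    "esbenp.prettier-vscode",
    "dbaeumer.vscode-eslint",
    "ms-vsliveshare.vsliveshare"
  ]
}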
While VS Code extensions offer numerous benefits, they can also introduce security risks if not managed properly. It’s crucial to be aware of these risks and take steps to mitigate them.
In some cases, existing extensions may not fully meet your team’s specific needs. Creating custom VS Code extensions can be a powerful way to add proprietary capabilities to your IDE and tailor it to your unique workflow. One exciting area is integrating AI Chatbots directly into VS Code for code generation, documentation, and more.
Identify the Need: Start by identifying the specific functionality that your team requires. This could be anything from custom code snippets and templates to integrations with internal tools and services. For this example, we’ll create an extension that allows you to highlight code, right-click, and generate documentation using a custom prompt sent to an AI Chatbot.
Learn the Basics: Familiarize yourself with the Visual Studio Code Extension API and the tools required to develop extensions. The API documentation provides comprehensive guides and examples to help you get started.
Set Up Your Development Environment: Install the necessary tools, such as Node.js and Yeoman, to create and test your extensions. The Yeoman generator for Visual Studio Code extensions can help you quickly scaffold a new project.
Develop Your Extension: Write the code for your extension, leveraging the Visual Studio Code Extension API to add the desired functionality. Be sure to follow best practices for coding and testing to ensure that your extension is reliable, maintainable, and secure.
Test Thoroughly: Test your extension in various scenarios to ensure that it works as expected and doesn’t introduce any new issues. This includes testing with different configurations, environments, and user roles.
Distribute Your Extension: Once your extension is ready, you can distribute it to your team. You can either publish it to the Visual Studio Code Marketplace or share it privately within your organization. Consider using a private extension registry to manage and distribute your custom extensions securely.
Developing robust and efficient VS Code extensions requires careful attention to best practices. Here are some key considerations:
Resource Management:
Use the context.subscriptions.push() method to register disposables, which will be automatically disposed of when the extension is deactivated.
Implement the deactivate() function to clean up any resources that need to be explicitly released when the extension is deactivated.
Asynchronous Operations:
Use async/await to handle asynchronous operations in a clean and readable way. This makes your code easier to understand and maintain.
Use try/catch blocks to handle errors. Log errors and provide informative messages to the user.
Use vscode.window.withProgress to provide feedback to the user during long operations.
Security:
Performance:
Code Quality:
User Experience:
By following these best practices, you can develop robust, efficient, and user-friendly VS Code extensions that enhance the development experience for yourself and others.
Let’s walk through creating a custom VS Code extension that integrates with an AI Chatbot to generate documentation for selected code. This example assumes you have access to an AI Chatbot API (like OpenAI’s GPT models). You’ll need an API key. Remember to handle your API key securely and do not commit it to your repository.
1. Scaffold the Extension:
First, use the Yeoman generator to create a new extension project:
yo code
2. Modify the Extension Code:
Open the generated src/extension.ts file and add the following code to create a command that sends selected code to the AI Chatbot and displays the generated documentation:
import * as vscode from 'vscode';
import axios from 'axios';

export function activate(context: vscode.ExtensionContext) {
  let disposable = vscode.commands.registerCommand('extension.generateDocs', async () => {
    const editor = vscode.window.activeTextEditor;
    if (editor) {
      const selection = editor.selection;
      const selectedText = editor.document.getText(selection);
      const apiKey = 'YOUR_API_KEY'; // Replace with your actual API key
      const apiUrl = 'https://api.openai.com/v1/engines/davinci-codex/completions';
      try {
        const response = await axios.post(
          apiUrl,
          {
            prompt: `Generate documentation for the following code:\n\n${selectedText}`,
            max_tokens: 150,
            n: 1,
            stop: null,
            temperature: 0.5,
          },
          {
            headers: {
              'Content-Type': 'application/json',
              Authorization: `Bearer ${apiKey}`,
            },
          }
        );
        const generatedDocs = response.data.choices[0].text;
        vscode.window.showInformationMessage('Generated Documentation:\n' + generatedDocs);
      } catch (error) {
        vscode.window.showErrorMessage('Error generating documentation: ' + error.message);
      }
    }
  });
  context.subscriptions.push(disposable);
}

export function deactivate() {}
3. Update package.json:
Add the following command configuration to the contributes section of your package.json file:
"contributes": {
  "commands": [
    {
      "command": "extension.generateDocs",
      "title": "Generate Documentation"
    }
  ]
}
4. Run and Test the Extension:
Press F5 to open a new VS Code window with your extension loaded. Highlight some code, right-click, and select “Generate Documentation” to see the AI-generated documentation.
Once you’ve developed and tested your custom VS Code extension, you’ll likely want to share it with your team or the wider community. Here’s how to package and distribute your extension, including options for local and private distribution:
1. Package the Extension:
VS Code uses the vsce (Visual Studio Code Extensions) tool to package extensions. If you don’t have it installed globally, install it using npm:
npm install -g vsce
Navigate to your extension’s root directory and run the following command to package your extension:
vsce package
This will create a .vsix file, which is the packaged extension.
2. Publish to the Visual Studio Code Marketplace:
To publish your extension to the Visual Studio Code Marketplace, you’ll need to create a publisher account and obtain a Personal Access Token (PAT). Follow the instructions on the Visual Studio Code Marketplace to set up your publisher account and generate a PAT.
Once you have your PAT, run the following command to publish your extension:
vsce publish
You’ll be prompted to enter your publisher name and PAT. After successful authentication, your extension will be published to the marketplace.
3. Share Privately:
If you prefer to share your extension privately within your organization, you can distribute the .vsix file directly to your team members. They can install the extension by running the following command in VS Code:
code --install-extension your-extension.vsix
Alternatively, you can set up a private extension registry using tools like Azure DevOps Artifacts or npm Enterprise to manage and distribute your custom extensions securely.
Visual Studio Code extensions are a powerful tool for enhancing the capabilities of your development environment and improving your team’s productivity, code quality, and overall efficiency. By carefully selecting, managing, and securing your extensions, you can create a tailored IDE that meets your specific needs and helps your team deliver high-quality software on time and within budget. Whether you’re using existing extensions from the marketplace or creating your own custom solutions, the possibilities are endless. Embrace the power of VS Code extensions and unlock the full potential of your development team.
If you are looking for a solution to securely store secrets like DB credentials, API keys, tokens, and passwords, AWS Secrets Manager is the service that comes to your rescue. Keeping secrets as plain text in your code is highly risky. Storing them in AWS Secrets Manager helps you with the following.
AWS Secrets Manager is a fully managed service that can store and manage sensitive information. It simplifies secret handling by enabling automatic rotation of secrets to reduce the risk of compromise, monitoring secrets for compliance, and reducing the manual effort of updating credentials in the application after rotation.
At the time of publishing this document, AWS Secrets Manager pricing is as follows; it may be revised in the future.
Component | Cost | Details |
---|---|---|
Secret storage | $0.40 per secret per month | Billed monthly; secrets stored for less than a month are prorated. |
API calls | $0.05 per 10,000 API calls | Applies to API interactions such as managing and retrieving secrets. |
Let us get deeper into the process of creating secrets.
The secret is created as below.
We can update the code to fetch the secret from Secrets Manager. For this, we need to remove the hardcoded credentials from the code and, depending on the programming language, add a call in the code to retrieve the secret from Secrets Manager. Depending on our requirements, we can modify the rotation strategy, versioning, monitoring, etc.
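For example, here is a minimal TypeScript sketch using the AWS SDK for JavaScript v3; the region and secret name are placeholders for illustration:

import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";

// Placeholder region; use the region where the secret is stored
const client = new SecretsManagerClient({ region: "us-east-1" });

export async function getDbCredentials() {
  const response = await client.send(
    new GetSecretValueCommand({ SecretId: "prod/myApp/dbCredentials" }) // placeholder secret name
  );
  // Secrets stored as JSON key-value pairs are returned in SecretString
  return JSON.parse(response.SecretString ?? "{}");
}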
A secret consists of the secret value and metadata. To store multiple values in one secret, we can use JSON with key-value pairs. A secret has versions that hold copies of the encrypted secret values. AWS uses three staging labels: AWSCURRENT, AWSPENDING, and AWSPREVIOUS.
Custom labeling of versions is also possible. AWS never removes labeled versions of secrets, but unlabeled versions are considered deprecated and may be removed at any time.
Secrets stored in AWS Secrets Manager can be monitored with other AWS services, such as AWS CloudTrail for auditing API calls and Amazon CloudWatch for metrics and alerts.
AWS Secrets Manager offers a secure, automated, scalable solution for managing sensitive data and credentials. It reduces the risk of secret exposure and helps improve application security with minimal manual intervention. Adopting best practices around secret management can ensure compliance and minimize vulnerabilities in your applications.
As businesses expand their global footprint, the need for a flexible, scalable, and secure networking solution becomes paramount. Enter Azure Virtual WAN (VWAN), a cloud-based offering designed to simplify and centralize network management while ensuring top-notch performance. Let’s dive into what Azure VWAN offers and how to set it up effectively.
Azure Virtual WAN (VWAN) is a cloud-based networking service that provides secure, seamless, and optimized connectivity across hybrid and multi-cloud environments.
It provides:
This diagram depicts a high-level architecture of Azure Virtual WAN and its connectivity components.
Follow these steps to configure Azure VWAN:
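As a rough illustration of the provisioning flow, the core Virtual WAN and hub resources can be created with the Azure CLI; the resource group, names, region, and address prefix below are placeholders:

az network vwan create --resource-group MyRG --name MyVirtualWAN --location eastus
az network vhub create --resource-group MyRG --name MyHub --vwan MyVirtualWAN --address-prefix 10.0.0.0/24 --location eastus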
Azure Monitor tracks performance, availability, and network health in real time and provides insights into traffic patterns, latency, and resource usage.
Diagnose network issues with tools like packet capture and connection troubleshooting. Quickly identify and resolve any bottlenecks or disruptions.
Set up alerts for critical issues such as connectivity drops or security breaches. Use detailed logs to analyze network events and maintain robust auditing.
Azure VWAN is a powerful tool for businesses looking to unify and optimize their global networking strategy. Organizations can ensure secure, scalable, and efficient connectivity by leveraging features like ExpressRoute, VNet Peering, and VPN Gateways. With the correct setup and monitoring tools, managing complex networks becomes a seamless experience.
Salesforce has been giving us a ‘No Code’ way to have Data Cloud notify Sales Cloud of changes through Data Actions and Flows. But did you know you can go the other direction too?
The Data Cloud Ingestion API allows us to set up a ‘No Code’ way of sending changes in Sales Cloud to Data Cloud.
You have Sales Cloud, Data Cloud, and Marketing Cloud Engagement. As a marketing campaign manager, you want to send an email through Marketing Cloud Engagement when a Lead fills out a certain form.
You only want to send the email if the Lead is from a certain state, like ‘Minnesota’, and that email address has ordered a certain product in the past. The historical product data lives in Data Cloud only. This email could go out a few minutes later and does not need to be real-time.
If you need to do this in near real-time, I would suggest not using the Ingestion API. Instead, we can query the Data Cloud product data in a Flow and then update your Lead or other record in a way that triggers a ‘Journey Builder Salesforce Data Event‘ in Marketing Cloud Engagement.
But our above requirements do not require real-time so let’s solve this with the Ingestion API. Since we are sending data to Data Cloud we will have some more power with the Salesforce Data Action to reference more Data Cloud data and not use the Flow ‘Get Records’ for all data needs.
We can build an Ingestion API Data Stream that we can use in a Salesforce Flow. The flow can check to make sure that the Lead is from a certain state like ‘Minnesota’. The Ingestion API can be triggered from within the flow. Once the data lands in the DMO object in Data Cloud we can then use a ‘Data Action’ to listen for that data change, check if that Lead has purchased a certain product before and then use a ‘Data Action Target’ to push to a Journey in Marketing Cloud Engagement. All that should occur within a couple of minutes.
Here is the base Salesforce post sharing that this is possible through Flows, but let’s go deeper for you!
The following are those deeper steps for getting the data to Data Cloud from Sales Cloud. In my screenshots you will see data moving between a VIN (Vehicle Identification Number) custom object and a VIN DLO/DMO in Data Cloud, but the same process could be used for our ‘Lead’ use case above.
Congrats!
You have sent data from Sales Cloud to Data Cloud with ‘No Code’ using the Ingestion API!
Setting up the Data Action and connecting to Marketing Cloud Journey Builder is documented here to round out the use case.
Here is the base Ingestion API Documentation.
At Perficient we have experts in Sales Cloud, Data Cloud and Marketing Cloud Engagement. Please reach out and let’s work together to reach your business goals on these platforms and others.
Example YAML Structure:
openapi: 3.0.3
components:
  schemas:
    VIN_DC:
      type: object
      properties:
        VIN_Number:
          type: string
        Description:
          type: string
        Make:
          type: string
        Model:
          type: string
        Year:
          type: number
        created:
          type: string
          format: date-time
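For reference, a payload sent to the Ingestion API would instantiate these fields; the record below is made up, and the exact request envelope depends on whether you use the streaming or bulk ingestion pattern:

{
  "data": [
    {
      "VIN_Number": "1HGCM82633A004352",
      "Description": "Example vehicle record",
      "Make": "Honda",
      "Model": "Accord",
      "Year": 2021,
      "created": "2025-01-15T10:30:00Z"
    }
  ]
}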
In a previous blog post I discussed various content authoring approaches within Drupal and the importance of selecting the right one for your specific situation. Towards the end I mentioned a new iteration of Drupal (Starshot). It is now here: Starshot, now known as Drupal CMS, was released on January 15th. As it becomes part of the Drupal ecosystem, here are 5 key areas to consider when tackling a new project or build.
1. What is Drupal CMS?
Drupal CMS is tooling built on top of Drupal 11 core. It takes some of the most commonly used configurations, recipes, modules, and more, puts them into an installable package, and offers a great starting point for small to moderately complex websites and portals.
2. What are the advantages of Drupal CMS?
As mentioned above, Drupal CMS is a pre-bundled installation of Drupal 11 Core, Contributed modules, Recipes and configuration that provides a rapid starting point for marketing teams.
The advantages include quicker time to market and easier configuration of tooling for cookie compliance, content workflows, permissions, multilingual support, and more. Drupal CMS as a product will enable marketing teams to build and maintain a web presence with limited technical staffing requirements. You may be able to take advantage of an implementation partner like Perficient and have a much smaller learning curve for web editors and managers as opposed to a completely custom build on top of Drupal core.
The ability to spin up a CMS with limited customization and overhead is a big departure from traditional Drupal development, which required extensive experience and technical support. This will be a huge time and budget saver for certain situations and organizations.
Another advantage of Drupal CMS is that it is built upon the standard Drupal 11 core. This allows a site to evolve, grow, and take advantage of the more complex technical underpinnings as needed. If you start with Drupal CMS, you are not handcuffed to it, and you have the entire Drupal open source ecosystem available to you as you scale.
3. What are the disadvantages of Drupal CMS?
Of course, no situation is a win-win-win, so what are the tradeoffs of Drupal CMS?
The major disadvantages of Drupal CMS come to light in heavily customized or complex systems. All of the preconfigured tooling that makes a simple to moderately complex site easier on Drupal CMS can cause MORE complexity in larger or completely custom builds, as a technical team may find themselves spending unnecessary time undoing the unneeded aspects of Drupal CMS.
Another (for the time being) disadvantage of Drupal CMS is that it is built on top of Drupal 11 core. While Drupal 11 is a stable and fully supported release, community module support historically lags behind new core versions. It is worth evaluating Drupal 11 support for any contributed modules you rely on before deciding on Drupal CMS.
4. Drupal 10, Drupal 11, Drupal CMS, which is the right choice?
With all of the advantages and disadvantages of the various Drupal core and CMS versions, choosing a direction can be a big decision. When making that decision for your organization, evaluate three major areas. First, look at the scale of your technical team and implementation budget. A smaller team or budget would suggest evaluating Drupal CMS as a solution.
Secondly, evaluate your technical requirements. Are you building a simple website with standard content needs and workflows? Drupal CMS might be perfect. Are you building a complex B2B commerce site with extensive content, workflow and technical customizations? Drupal Core might be the right choice.
Finally, evaluate your technical requirements for any needs that may not be fully supported by Drupal 11 just yet. If you find an area that isn’t supported, it would be time to evaluate the timeline for support, timeline for your project as well as criticality of the functional gaps. This is where a well versed and community connected implementation partner such as Perficient can provide crucial insights to ensure the proper selection of your underlying tooling.
5. I am already on Drupal 7/8/9/10/11, do I need to move to Drupal CMS?
In my opinion this is highly dependent on where you currently are. If you are on Drupal 7/8, you are many versions behind, lacking support, and any upgrade is essentially a rebuild. In this case, Drupal CMS should be considered just like a new build, using the points above. If you are on Drupal 9/10/11, an upgrade to Drupal 10/11 respectively might be your best bet. Drupal CMS can be layered on top of this upgrade if you feel the features fit the direction of your website, but it is important to consider all of the above pros and cons when making this decision. Again, a trusted implementation partner such as Perficient can help guide and inform you and your team as you tackle these considerations!
Postman is an application programming interface (API) testing tool for designing, testing, and changing existing APIs. It includes almost every capability a developer may need to test any API.
Postman simplifies the testing process for both REST APIs and SOAP web services with its robust features and intuitive interface. Whether you’re developing a new API or testing an existing one, Postman provides the tools you need to ensure your services are functioning as intended.
Newman is a command-line collection runner for Postman. In addition to Postman’s built-in Collection Runner, Newman can be used to run the requests in a Postman Collection from the command line and check the responses.
Newman works well with GitHub and the npm registry, and it can be linked to Jenkins and other continuous integration tools. If every request completes successfully, Newman exits with code 0; in the case of errors, it exits with code 1. Newman is distributed through the npm package manager and is built on the Node.js platform.
Step 1: Ensure that your system has Node.js downloaded and installed. If not, then download and install Node.js.
Step 2: Run the following command in your CLI: npm install -g newman
Step 1: Export the Postman collection and save it to your local device.
Step 2: Click on the eye icon in the top right corner of the Postman application.
Step 3: The “MANAGE ENVIRONMENTS” window will open. Provide a variable (for example, url) in the VARIABLE field and its value in INITIAL VALUE. Click on the Download as JSON button, then choose a location and save.
Step 4: Export the Environment to the same path where the Collection is available.
Step 5: In the command line, move from the current directory to the directory where the Collection and Environment have been saved.
Step 6: Run the command: newman run "<name of file>". Please note that the name of the file should be in quotes.
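For example, to run a collection with an environment file and three iterations (file names here are placeholders):

newman run "SampleCollection.postman_collection.json" -e "SampleEnvironment.postman_environment.json" -n 3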
Option | Description |
---|---|
-h, --help | Gives information about the options available |
-v, --version | To check the version |
-e, --environment [file URL] | Specify the file path or URL of environment variables. |
-g, --globals [file URL] | Specify the file path or URL of global variables. |
-d, --iteration-data [file] | Specify the file path or URL of a data file (JSON or CSV) to use for iteration data. |
-n, --iteration-count [number] | Specify the number of times for the collection to run. Use with the iteration data file. |
--folder [folder Name] | Specify a folder to run requests from. You can specify more than one folder by using this option multiple times, specifying one folder for each time the option is used. |
--working-dir [path] | Set the path of the working directory to use while reading files with relative paths. Defaults to the current directory. |
--no-insecure-file-read | Prevents reading of files located outside of the working directory. |
--export-environment [path] | The path to the file where Newman will output the final environment variables file before completing a run |
--export-globals [path] | The path to the file where Newman will output the final global variables file before completing a run. |
--export-collection [path] | The path to the file where Newman will output the final collection file before completing a run. |
--postman-api-key [api-key] | The Postman API Key used to load resources using the Postman API. |
--delay-request [number] | Specify a delay (in milliseconds) between requests. |
--timeout [number] | Specify the time (in milliseconds) to wait for the entire collection run to complete execution. |
--timeout-request [number] | Specify the time (in milliseconds) to wait for requests to return a response. |
--timeout-script [number] | Specify the time (in milliseconds) to wait for scripts to complete execution. |
--ssl-client-cert [path] | The path to the public client certificate file. Use this option to make authenticated requests. |
-k, --insecure | Turn off SSL verification checks and allow self-signed SSL certificates. |
--ssl-extra-ca-certs | Specify additionally trusted CA certificates (PEM) |
API performance testing involves mimicking actual traffic and watching how your API behaves. It is a procedure that evaluates how well the API performs regarding availability, throughput, and response time under the simulated load.
Testing the performance of APIs can help us in:
Step 1: Select the Postman Collection for Performance testing.
Step 2: Click on the 3 dots beside the Collection.
Step 3: Click on the “Run Collection” option.
Step 4: Click on the “Performance” option
Step 5: Set up the Performance test (Load Profile, Virtual User, Test Duration).
Step 6: Click on the Run button.
After completion of the run, we can also download a report in .pdf format, which shows how our collection ran.
Using Newman with Postman, alongside performance testing, is a strong and adaptable way to ensure your APIs meet functionality and performance requirements. By utilizing Newman’s command-line features, you can automate your tests and produce comprehensive reports that offer insight into how your API behaves.
This combination facilitates faster detection and resolution of performance issues by streamlining the testing process and improving team collaboration. Using Newman with Postman will enhance your testing procedures and raise the general quality of your applications as you continue improving your API testing techniques.
Use these resources to develop dependable, strong APIs that can handle the demands of practical use, ensuring a flawless user experience.
Migrating to a cloud-delivered Contact Center as a Service (CCaaS) solution can revolutionize how your organization delivers customer service. However, this transition requires careful planning and execution to avoid disruptions. Assuming you have selected a CCaaS platform that aligns with your organizational needs, the following best practices outline key considerations for a seamless migration.
A successful migration to CCaaS not only enhances operational efficiency and scalability but also ensures a significant improvement in service delivery, directly impacting customer satisfaction and retention. Organizations should consider the risks of not embracing modern cloud-based customer care solutions, which can include diminished customer service capabilities and potential costs due to outdated or inflexible systems. Moreover, organizations that delay this shift risk falling behind competitors who can adapt more quickly to market demands and customer needs. Thus, embarking on a well-planned migration journey is imperative for companies aiming to optimize their customer care operations and secure a competitive advantage in their respective markets.
Understanding your current environment is critical for a successful transition. Start with a thorough site review to document the infrastructure and identify unique user requirements. Engage with call center managers, team leaders, and power users to uncover specific needs and configured features such as whisper settings, omnichannel components, call management, etc.
Factors such as bandwidth and latency are paramount for seamless operations. Evaluate your facility’s connectivity for both on-site and remote users, ensuring it aligns with the CCaaS product requirements. Fortunately, modern CCaaS solutions such as Amazon Connect, Twilio Flex and Five9 supply agent connectivity tools to verify that workers have sufficient resources to provide good customer service over various channels.
Additionally, document call treatments and station-specific configurations like call coverage paths. Legacy components requiring continued functionality should be cataloged to prepare for integration.
Change management is essential to mitigate risks and maximize adoption. A staged cutover strategy is recommended over a single-event migration, allowing for gradual testing and adjustments.
Develop a robust testing strategy to validate the platform’s performance under real-world conditions. Complement this with an organizational enablement strategy to train users and ensure they are comfortable with the new system. Adoption by your business units and users is one of the most critical factors which will determine the success of your CCaaS migration.
Operational continuity is vital during migration. Start by understanding the reporting requirements for business managers to ensure no loss of visibility into critical metrics. Additionally, review monitoring processes to maintain visibility into system performance post-migration.
Integrating legacy infrastructure with the new CCaaS platform can present significant challenges. Document existing components, including FXO/FXS interfaces, Workforce Management solutions, FAX systems, wallboards, and specialty dialers. Verify that integrations comply with any regulatory requirements, such as HIPAA or FINRA.
Interactive Voice Response (IVR) systems often require specific integrations with local data sources or enterprise middleware. Assess these integrations to ensure call flows function as intended. For specialized applications, verify that they meet operational needs within the new environment.
Testing fault tolerance and disaster recovery capabilities is a critical step in any CCaaS migration. Develop and execute a failsafe testing plan to ensure resilience against both premise-level and carrier-level failures. It is important to align with your IT organization’s standards for recovery time objective (RTO) and business up-time expectations. Disaster recovery plans must reflect these measures and be tested to protect against potential downtime.
CCaaS solutions must scale with your business. Validate scalability by conducting load tests and documenting performance metrics. Compliance is equally important—ensure your migration adheres to industry standards like HIPAA, FedRAMP, or FINRA through thorough compliance testing and documentation.
Conclusion
A successful CCaaS migration hinges on meticulous planning, comprehensive testing, and strong change management. By following these best practices, you can minimize risks, ensure operational continuity, and set your organization up for long-term success with its new contact center platform. The result? An enhanced customer experience and a contact center infrastructure that grows with your business.
How to ensure your toolset matches your vision, team and long term goals.
Seems like common sense, right? Use the right tool for the right purpose. However, in the DXP and Drupal space, we often see folks trying to fit their project to the tool rather than the tool to the project.
There are many modules, profiles, and approaches to building Drupal out there, and almost all of them have their time and place. The key is knowing when to implement which, and why. I am going to take a little time here and dive into one of those key decisions that we at Perficient find ourselves facing frequently, and how we work with our clients to ensure the proper approach is selected for their Drupal application.
Site Studio vs. Standard Drupal (blocks, views, content, etc.) vs. Layout Builder
I would say this is the most common area where we see confusion about the best tooling and how to pick it. To start, let’s summarize the various options (there are many more approaches available, but these are the common ones we encounter), as well as their pros and cons.
First, we have Acquia Site Studio, a low-code site management tool built on top of Drupal. And it is SLICK. It provides web-editable templates, components, helpers, and more that allow a well-trained content admin to control almost every aspect of the look and feel of the website. There are drag-and-drop editors for all templates that would traditionally be TWIG, as well as UI editors for styles, fonts, and more. This is the Cadillac of low-code solutions for Drupal, but that comes with some trade-offs in terms of developer customizability and configuration management strategies. We have also noticed that not every content team actually utilizes the full scope of Site Studio features, which can lead to additional complexity without any benefit; but when the team is right, Site Studio is a very powerful tool.
The next option we frequently see is a standard Drupal build utilizing content types and blocks to control page layouts, with WYSIWYG editors for rich content and a standard Drupal theme with SASS, TWIG templates, and so on. This is the option with the most developer familiarity, the most flexibility for custom work, and clean configuration management. The trade-off is that most customizations require a developer to build them out, and content editors are limited to “coloring between the lines” of what was initially built. We have worked with content teams that were very satisfied with the defined controls, but also teams that felt handcuffed by the limitations and wanted more UI/UX customization without deployments or developer involvement.
The third and final option we will discuss here is the standard Drupal build described above, with the addition of Layout Builder. Layout Builder is a Drupal core module that enables users to attach layouts, such as one-column, two-column, and more, to various Drupal entity types (content, users, etc.). These layouts support the placement of blocks into their various regions, giving users drag-and-drop flexibility over laying out their content. Layout Builder does not support full site templates or custom theme work such as site-wide CSS changes. It can be a good middle ground for content teams not looking for the full customization and accompanying complexity of Site Studio, but desiring some level of content layout control. Layout Builder does come with some permissions and configuration management considerations: it is important to decide what is treated as content and what as configuration, as well as to define roles and permissions so the right editors have access to the right level of customization.
Now that we have covered the options as well as the basic pros and cons of each, how do you know which tool is right for your team and your project? This is where we at Perficient start with a holistic review of your needs, short- and long-term goals, and the technical ability of your internal team. It is important to evaluate this honestly. Just because something has all the bells and whistles, do you have the team and time to utilize them, or is it a sunk cost with limited ROI? On the flip side, if you have a very technically robust team, you don’t want to handcuff them and leave them frustrated with limitations that could hold back marketing opportunities and higher ROI.
Additional considerations that can help guide your choice of toolset are future goals and initiatives. Is a rebrand coming soon? Is your team going to quickly expand with more technical staff? These might point toward Site Studio as the right choice. Is your top priority consistency and limiting unnecessary customization? Then standard structured content might be the best approach. Do you want to be able to customize your site, but don’t have the time or budget to undertake Site Studio? Layout Builder might be something you should look at closely.
Perficient starts these considerations in the first discussions with potential clients and continues to guide them through the sales and estimation process to ensure the right basic Drupal tooling is selected. This continues through implementation as we keep stakeholders informed about the best toolsets beyond the core systems. In future articles we will discuss the advantages and disadvantages of various SSO, DAM, analytics, and Drupal module solutions, as well as the new Starshot Drupal initiative and how it will impact the planning of your next Drupal build!
In today’s hyper-connected world, organizations are racing to provide their customers with personalized, seamless experiences across every channel. For companies rolling out Agentforce—a cutting-edge Salesforce-based solution for agents, brokers, or any field sales team—having a robust data foundation is crucial. This is where Salesforce Data Cloud shines. By integrating Salesforce Data Cloud into your Agentforce strategy, you can empower your agents with the right insights to better serve customers, close more deals, and enhance operational efficiency.
Here are seven reasons why Salesforce Data Cloud is the key to a successful Agentforce rollout:
Salesforce Data Cloud is designed to be the central hub for customer data across all systems. It brings together data from various sources—CRM, social media, marketing platforms, transactional data, and more—into a single, unified profile. For Agentforce, this means agents will have a 360-degree view of each customer, allowing them to engage in more personalized conversations.
Agents can see customer preferences, past interactions, purchase history, and predictive insights in one dashboard. Whether your team is prospecting or assisting existing clients, having this level of insight is invaluable for delivering timely and relevant service.
Data is only valuable if it’s actionable. With Salesforce Data Cloud, Agentforce users gain real-time insights powered by AI and predictive analytics. These insights help agents make data-driven decisions in the moment—whether it’s offering an upsell, adjusting strategies for closing a deal, or tailoring responses to specific client needs.
For example, if an agent notices that a high-value customer is interacting less with your services, the system could flag this and provide recommendations for proactive outreach. This ability to respond in real-time can significantly enhance client retention and satisfaction.
Salesforce Data Cloud integrates seamlessly with your existing tools and platforms, whether they are part of the Salesforce ecosystem or external. As Agentforce often involves using multiple apps—like financial systems, call center tools, and communication platforms—Salesforce Data Cloud serves as the glue that binds them together.
This integration helps ensure that agents have accurate, up-to-date information at their fingertips, regardless of where the data originates. The result is a smoother workflow, faster responses, and improved customer experiences.
MuleSoft can be used to bring in data from API-based external systems. Also, zero-ETL data sharing allows access to data lakes like Snowflake and Databricks.
The power of AI-driven personalization is one of Salesforce Data Cloud’s most compelling features. By leveraging Einstein AI, agents can use predictive analytics to forecast customer needs and behaviors. For Agentforce, this means providing agents with the capability to engage in highly targeted, context-rich interactions that feel tailored to each individual client.
Imagine an insurance agent who, based on data trends, receives a suggestion to recommend a particular product to a customer just before they need it. This level of personalization doesn’t just boost sales—it strengthens customer loyalty and builds trust in your brand.
In many organizations, the challenge isn’t just managing customer data but ensuring that different departments can effectively collaborate around it. Salesforce Data Cloud’s unified platform allows for better cross-team collaboration. Marketing, sales, service, and IT teams can all access the same customer data, fostering improved communication and aligned strategies.
In the Agentforce environment, this translates to faster handoffs between teams, consistent messaging, and the ability to serve customers holistically. Agents no longer operate in silos but as part of a unified effort to deliver exceptional customer service.
As your Agentforce team grows and your business scales, Salesforce Data Cloud ensures that your data infrastructure can keep up. The platform is built to handle vast amounts of data while maintaining fast processing speeds and real-time insights. It’s also highly customizable, meaning you can tailor it to meet the evolving needs of your team and business processes.
Whether you’re adding new agents, expanding into new markets, or launching new products, Salesforce Data Cloud provides the scalability and flexibility needed to support your growth.
For organizations dealing with sensitive customer data—like in insurance, real estate, or financial services—security is paramount. Salesforce Data Cloud is designed with enterprise-grade security features, ensuring that your data is protected at all times. Additionally, the platform is compliant with major global privacy regulations such as GDPR and CCPA, which is critical for industries where data privacy is a top priority.
For Agentforce, this means you can focus on rolling out your strategy with confidence, knowing that your customer data is secure and your organization remains compliant with the latest regulations.
Don’t DIY (do it yourself). Focus on running your business and let Agentforce and Data Cloud wow your customers.
Salesforce Data Cloud is the key to unlocking the full potential of your Agentforce rollout. By centralizing customer data, providing real-time insights, enabling AI-driven personalization, and fostering cross-team collaboration, it empowers your agents to deliver exceptional service and drive business success. As your organization grows and your customer base expands, Salesforce Data Cloud offers the scalability and security needed to future-proof your operations.
If you’re looking to ensure your Agentforce rollout is a success, implementing and integrating Salesforce Data Cloud should be at the top of your strategy. With the right data infrastructure in place, your agents will be equipped to meet customer needs with precision, agility, and a personalized touch.
Learn more about Salesforce’s new Agentic AI Platform and more by browsing our Salesforce blog site.
We are a Salesforce Summit Partner with more than two decades of experience delivering digital solutions in the manufacturing, automotive, healthcare, financial services, and high-tech industries. Our team has deep expertise in all Salesforce Clouds and products, artificial intelligence, DevOps, and specialized domains to help you reap the benefits of implementing Salesforce solutions.