Architecture Articles / Blogs / Perficient
https://blogs.perficient.com/category/technical/architecture/

Sales Cloud to Data Cloud with No Code!
https://blogs.perficient.com/2025/01/31/sales-cloud-to-data-cloud-with-no-code/
Fri, 31 Jan 2025

Salesforce has been giving us a ‘No Code’ way to have Data Cloud notify Sales Cloud of changes through Data Actions and Flows.   But did you know you can go the other direction too?

The Data Cloud Ingestion API allows us to set up a ‘No Code’ way of sending changes in Sales Cloud to Data Cloud.

Why would you want to do this with the Ingestion API?

  1. You are right that we could surely set up a ‘normal’ Salesforce CRM Data Stream to pull data from Sales Cloud into Data Cloud.  This is also a ‘No Code’ way to integrate the two.  But maybe you want to do some complex filtering or logic before sending the data on to Data Cloud, and that is where a Flow could really help.
  2. CRM Data Streams only run on a schedule of every 10 minutes.  With the Ingestion API we can send to Data Cloud immediately; we just need to wait until the Ingestion API can run for that specific request.  The current wait time for the Ingestion API to run is 3 minutes, but I have seen it run faster at times.  It is not ‘real-time’, so do not use this for ‘real-time’ use cases.  But this is faster than CRM Data Streams for incremental and smaller syncs that need better control.
  3. You could also ingest data into Data Cloud easily through an Amazon S3 bucket.  But again, here we have data in Sales Cloud that we want to get to Data Cloud with no code.
  4. We can do very cool integrations by leveraging the Ingestion API outside of Salesforce like in this video, but we want a way to use Flows (No Code!) to send data to Data Cloud.

Use Case:

You have Sales Cloud, Data Cloud and Marketing Cloud Engagement.  As a Marketing Campaign Manager you want to send an email through Marketing Cloud Engagement when a Lead fills out a certain form.

You only want to send the email if the Lead is from a certain state like ‘Minnesota’ and that Email address has ordered a certain product in the past.  The historical product data lives in Data Cloud only.  This email could come out a few minutes later and does not need to be real-time.

Solution A:

If you need to do this in near real-time, I would suggest not using the Ingestion API.  Instead, we can query the Data Cloud product data in a Flow and then update your Lead or other record in a way that triggers a ‘Journey Builder Salesforce Data Event’ in Marketing Cloud Engagement.

Solution B:

But our above requirements do not require real-time, so let’s solve this with the Ingestion API.  Since we are sending data to Data Cloud, we will have more power with the Salesforce Data Action to reference additional Data Cloud data, rather than relying on the Flow ‘Get Records’ element for all data needs.

We can build an Ingestion API Data Stream that we can use in a Salesforce Flow.  The flow can check to make sure that the Lead is from a certain state like ‘Minnesota’.  The Ingestion API can be triggered from within the flow.  Once the data lands in the DMO object in Data Cloud we can then use a ‘Data Action’ to listen for that data change, check if that Lead has purchased a certain product before and then use a ‘Data Action Target’ to push to a Journey in Marketing Cloud Engagement.  All that should occur within a couple of minutes.

Sales Cloud to Data Cloud with No Code!  Let’s do this!

Here is the base Salesforce post sharing that this is possible through Flows, but let’s go deeper for you!

The following are those deeper steps for getting the data from Sales Cloud to Data Cloud.  In my screenshots you will see data moving from a VIN (Vehicle Identification Number) custom object to a VIN DLO/DMO in Data Cloud, but the same process could be used for our ‘Lead’ use case above.

  1. Create a YAML file that we will use to define the fields in the Data Lake Object (DLO).  I put an example YAML structure at the bottom of this post.
  2. Go to Setup, Data Cloud, External Integrations, Ingestion API.   Click on ‘New’

    1. Give your new Ingestion API Source a Name.  Click on Save.
    2. In the Schema section click on the ‘Upload Files’ link to upload your YAML file.
    3. You will see a screen to preview your Schema.  Click on Save.
    4. After that is complete you will see your new Schema Object
    5. Note that at this point there is no Data Lake Object created yet.
  3. Create a new ‘Ingestion API’ Data Stream.  Go to the ‘Data Streams’ tab and click on ‘New’.   Click on the ‘Ingestion API’ box and click on ‘Next’.

    1. Select the Ingestion API that was created in Step 2 above.  Select the Schema object that is associated to it.  Click Next.
    2. Configure your new Data Lake Object by setting the Category, Primary Key and Record Modified Fields
    3. Set any Filters you want with the ‘Set Filters’ link and click on ‘Deploy’ to create your new Data Stream and the associated Data Lake Object.
    4. If you want to also create a Data Model Object (DMO) you can do that and then use the ‘Review’ button in the ‘Data Mapping’ section on the Data Stream detail page to do that mapping.  You do need a DMO to use the ‘Data Action’ feature in Data Cloud.
  4. Now we are ready to use this new Ingestion API Source in our Flow!  Yeah!
  5. Create a new ‘Start from Scratch’, ‘Record-Triggered Flow’ on the Standard or Custom object you want to use to send data to Data Cloud.
  6. Configure an Asynchronous path.  We cannot connect to this ‘Ingestion API’ from the ‘Run Immediately’ part of the Flow because this Action will be making an API call to Data Cloud.  This is similar to how we have to use a ‘Future’ call with an Apex Trigger.
  7. Once you have configured your base Flow, add the ‘Action’ to the ‘Run Asynchronously’ part of the Flow.    Select the ‘Send to Data Cloud’ Action and then map your fields to the Ingestion API inputs that are available for that ‘Ingestion API’ Data Stream you created.
  8. Save and Activate your Flow.
  9. To test, update your record in a way that will trigger your Flow to run.
  10. Go into Data Cloud and see your data has made it there by using the ‘Data Explorer’ tab.
  11. The standard Salesforce Debug Logs will show the details of your Flow steps if you need to troubleshoot something.

Congrats!

You have sent data from Sales Cloud to Data Cloud with ‘No Code’ using the Ingestion API!

Setting up the Data Action and connecting to Marketing Cloud Journey Builder is documented here to round out the use case.

Here is the base Ingestion API Documentation.

At Perficient we have experts in Sales Cloud, Data Cloud and Marketing Cloud Engagement.  Please reach out and let’s work together to reach your business goals on these platforms and others.

Example YAML Structure:


openapi: 3.0.3
components:
  schemas:
    VIN_DC:
      type: object
      properties:
        VIN_Number:
          type: string
        Description:
          type: string
        Make:
          type: string
        Model:
          type: string
        Year:
          type: number
        created:
          type: string
          format: date-time

Drupal CMS is here, what it means for you and your organization.
https://blogs.perficient.com/2025/01/16/drupal-cms-is-here-what-it-means-for-you-and-your-organization/
Thu, 16 Jan 2025

In a previous blog post I discussed various content authoring approaches within Drupal and the importance of selecting the right one for your specific situation. Towards the end I mentioned a new iteration of Drupal (Starshot). It is now here: Starshot, i.e. Drupal CMS, was released on Jan 15th. As it becomes part of the Drupal ecosystem, here are 5 key areas to consider when tackling a new project or build.

 

1. What is Drupal CMS?

Drupal CMS is tooling built on top of Drupal 11 Core. It takes some of the most commonly used configurations, recipes, modules and more, puts them into an installable package, and offers it as a great starting point for small to moderately complex websites and portals.

 

2. What are the advantages of Drupal CMS?

As mentioned above, Drupal CMS is a pre-bundled installation of Drupal 11 Core, Contributed modules, Recipes and configuration that provides a rapid starting point for marketing teams.

The advantages include quicker time to market and easier configuration of tooling for cookie compliance, content workflows, permissions, multilingual support and more. Drupal CMS as a product will enable marketing teams to build and maintain a web presence with limited technical staff requirements. You may be able to take advantage of an implementation partner like Perficient and have a much smaller learning curve for web editors and managers as opposed to a completely custom build on top of Drupal Core.

The ability for a CMS to be spun up with limited customization and overhead is a big departure from traditional Drupal development, which required extensive experience and technical support. This will be a huge time and budget saver for certain situations and organizations.

Another advantage of Drupal CMS is that it is built upon the standard Drupal 11 core. This allows a site to evolve, grow and take advantage of the more complex technical underpinnings as needed. If you start with Drupal CMS, you are not handcuffed to it, and you have the entire Drupal open source ecosystem available to you as you scale.

 

3. What are the disadvantages of Drupal CMS?

Of course, no situation is a win-win-win, so what are the tradeoffs of Drupal CMS?

The major disadvantages of Drupal CMS come to light in heavily customized or complex systems. All of the preconfigured tooling that makes a simple to moderately complex site easier on Drupal CMS can cause MORE complexity on larger or completely custom builds, as a technical team may find itself spending time undoing the aspects of Drupal CMS it does not need.

Another (for the time being) disadvantage of Drupal CMS is that it is built on top of Drupal 11 core. While Drupal 11 is a secure and final release, community support for the newest core version historically lags. It is worth evaluating support for any contributed modules for Drupal 11 before making the decision on Drupal CMS.

 

4. Drupal 10, Drupal 11, Drupal CMS, which is the right choice?

With all of the advantages and disadvantages of the various Drupal Core and CMS versions, choosing a direction can be a big decision. When making that decision for your organization, you should evaluate three major areas. First, look at the scale of your technical team and implementation budget. A smaller team or budget would suggest evaluating Drupal CMS as a solution.

Secondly, evaluate your technical requirements. Are you building a simple website with standard content needs and workflows? Drupal CMS might be perfect. Are you building a complex B2B commerce site with extensive content, workflow and technical customizations? Drupal Core might be the right choice.

Finally, evaluate your technical requirements for any needs that may not be fully supported by Drupal 11 just yet. If you find an area that isn’t supported, it would be time to evaluate the timeline for support, the timeline for your project, as well as the criticality of the functional gaps. This is where a well-versed and community-connected implementation partner such as Perficient can provide crucial insights to ensure the proper selection of your underlying tooling.

 

5. I am already on Drupal 7/8/9/10/11, do I need to move to Drupal CMS?

In my opinion this is highly dependent on where you currently are. If you are on Drupal 7/8, you are many versions behind, lacking support, and any upgrade is essentially a rebuild. In this case, Drupal CMS should be considered just like a new build, weighing the points above. If you are on Drupal 9/10/11, an upgrade to Drupal 10/11 respectively might be your best bet. Drupal CMS can be layered on top of this upgrade if you feel the features fit the direction of your website, but it is important to consider all the above pros and cons when making this decision. Again, a trusted implementation partner such as Perficient can help guide and inform you and your team as you tackle these considerations!

Newman Tool and Performance Testing in Postman
https://blogs.perficient.com/2025/01/16/newman-tool-and-performance-testing-in-postman/
Thu, 16 Jan 2025

Postman is an application programming interface (API) testing tool for designing, testing, and changing existing APIs. It includes almost every capability a developer may need to test any API.

Postman simplifies the testing process for both REST APIs and SOAP web services with its robust features and intuitive interface. Whether you’re developing a new API or testing an existing one, Postman provides the tools you need to ensure your services are functioning as intended.

  • Using Postman to test the APIs offers a wide range of benefits that eventually help in the overall testing of the application. Postman’s interface is very user-friendly, which allows users to easily create and manage requests without extensive coding knowledge, making it accessible to both developers and testers.
  • Postman supports multiple protocols such as HTTP, SOAP, GraphQL, and WebSocket APIs, which ensures a versatile testing set-up for a wide range of services.
  • To automate the process of validating the API Responses under various scenarios, users can write tests in JavaScript to ensure that the API behavior is as expected.
  • Postman offers an environment management feature that enables the user to set up different environments with environment-specific variables, which makes switching between development, staging, and production settings possible without changing requests manually.
  • Postman provides options for creating collections and organizing them, which makes it easier to manage requests, group tests, and maintain documentation.
  • Postman supports team collaboration, which allows multiple users to work on the same collections, share requests, and provide feedback in real-time.

Newman In Postman

Newman is a command-line collection runner for Postman that is used to run requests and check their responses. In addition to the Collection Runner in the Postman app, Newman can be used to initiate requests in a Postman Collection from the command line.

Newman works well with GitHub and the npm registry. Additionally, Jenkins and other continuous integration technologies can be linked to it. If every request is fulfilled correctly, Newman exits with code 0; in the case of errors, exit code 1 is generated. Newman is distributed through the npm package manager, which is built on the Node.js platform.

How to install Newman

Step 1: Ensure that your system has Node.js downloaded and installed. If not, then download and install Node.js.

Step 2: Run the following command in your cli: npm install -g newman

How to use Newman: 

Step 1: Export the Postman collection and save it to your local device.

Step 2: Click on the eye icon in the top right corner of the Postman application.

Step 3: The “MANAGE ENVIRONMENTS” window will open. Provide a variable URL for the VARIABLE field and for INITIAL VALUE. Click on the Download as JSON button. Then, choose a location and save.

Step 4: Export the Environment to the same path where the Collection is available.

Step 5: In the command line, move from the current directory to the directory where the Collection and Environment have been saved.

Step 6: Run the command: newman run <“name of file”>. Please note that the name of the file should be in quotes.

Helpful CLI Commands to Use Newman

  • -h, --help: Gives information about the options available.
  • -v, --version: To check the version.
  • -e, --environment [file URL]: Specify the file path or URL of environment variables.
  • -g, --globals [file URL]: Specify the file path or URL of global variables.
  • -d, --iteration-data [file]: Specify the file path or URL of a data file (JSON or CSV) to use for iteration data.
  • -n, --iteration-count [number]: Specify the number of times for the collection to run. Use with the iteration data file.
  • --folder [folder name]: Specify a folder to run requests from. You can specify more than one folder by using this option multiple times, specifying one folder for each time the option is used.
  • --working-dir [path]: Set the path of the working directory to use while reading files with relative paths. Defaults to the current directory.
  • --no-insecure-file-read: Prevents reading of files located outside of the working directory.
  • --export-environment [path]: The path to the file where Newman will output the final environment variables file before completing a run.
  • --export-globals [path]: The path to the file where Newman will output the final global variables file before completing a run.
  • --export-collection [path]: The path to the file where Newman will output the final collection file before completing a run.
  • --postman-api-key [api-key]: The Postman API Key used to load resources using the Postman API.
  • --delay-request [number]: Specify a delay (in milliseconds) between requests.
  • --timeout [number]: Specify the time (in milliseconds) to wait for the entire collection run to complete execution.
  • --timeout-request [number]: Specify the time (in milliseconds) to wait for requests to return a response.
  • --timeout-script [number]: Specify the time (in milliseconds) to wait for scripts to complete execution.
  • --ssl-client-cert [path]: The path to the public client certificate file. Use this option to make authenticated requests.
  • -k, --insecure: Turn off SSL verification checks and allow self-signed SSL certificates.
  • --ssl-extra-ca-certs: Specify additionally trusted CA certificates (PEM).
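For example, a collection can be run with an environment file, three iterations, and a 200 ms delay between requests (the file names here are illustrative):

newman run "MyCollection.postman_collection.json" -e "MyEnvironment.postman_environment.json" -n 3 --delay-request 200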


Performance Testing in Postman

API performance testing involves mimicking actual traffic and watching how your API behaves. It is a procedure that evaluates how well the API performs regarding availability, throughput, and response time under the simulated load.

Testing the performance of APIs can help us in:

  • Test that the API can manage the anticipated load and observe how it reacts to load variations.
  • To ensure a better user experience, optimize and enhance the API’s performance.
  • Performance testing also aids in identifying the system’s scalability and fixing bottlenecks, delays, and failures.

How to Use Postman for API Performance Testing

Step 1: Select the Postman Collection for Performance testing.

Step 2: Click on the 3 dots beside the Collection.

Step 3:  Click on the “Run Collection” option.

Step 4:  Click on the “Performance” option

Step 5: Set up the Performance test (Load Profile, Virtual User, Test Duration).

Step 6: Click on the Run button.

After completion of the run, we can also download a report in .pdf format, which states how our collection ran.

A strong and adaptable method for ensuring your APIs fulfill functionality and performance requirements is to use Newman with Postman alongside performance testing. You may automate your tests and provide comprehensive reports that offer insightful information about the functionality of your API by utilizing Newman’s command-line features.

This combination facilitates faster detection and resolution of performance issues by streamlining the testing process and improving team collaboration. Using Newman with Postman will enhance your testing procedures and raise the general quality of your applications as you continue improving your API testing techniques.

Use these resources to develop dependable, strong APIs that can handle the demands of practical use, ensuring a flawless user experience.

CCaaS Migration Best Practices: Tips for moving your customer care platform to the cloud
https://blogs.perficient.com/2024/12/06/ccaas-migration-best-practices-tips-for-moving-your-customer-care-platform-to-the-cloud/
Fri, 06 Dec 2024

Migrating to a cloud-delivered Contact Center as a Service (CCaaS) solution can revolutionize how your organization delivers customer service. However, this transition requires careful planning and execution to avoid disruptions. Assuming you have selected a CCaaS platform that aligns with your organizational needs, the following best practices outline key considerations for a seamless migration.

A successful migration to CCaaS not only enhances operational efficiency and scalability but also ensures a significant improvement in service delivery, directly impacting customer satisfaction and retention. Organizations should consider the risks of not embracing modern cloud-based customer care solutions, which can include diminished customer service capabilities and potential costs due to outdated or inflexible systems. Moreover, organizations that delay this shift risk falling behind competitors who can adapt more quickly to market demands and customer needs. Thus, embarking on a well-planned migration journey is imperative for companies aiming to optimize their customer care operations and secure a competitive advantage in their respective markets.

 

  1. Physical Infrastructure Migration

Understanding your current environment is critical for a successful transition. Start with a thorough site review to document the infrastructure and identify unique user requirements. Engage with call center managers, team leaders, and power users to uncover specific needs and configured features such as whisper settings, omnichannel components, call management, etc.

Factors such as bandwidth and latency are paramount for seamless operations. Evaluate your facility’s connectivity for both on-site and remote users, ensuring it aligns with the CCaaS product requirements. Fortunately, modern CCaaS solutions such as Amazon Connect, Twilio Flex and Five9 supply agent connectivity tools to verify that workers have sufficient resources to provide good customer service over various channels.

Additionally, document call treatments and station-specific configurations like call coverage paths. Legacy components requiring continued functionality should be cataloged to prepare for integration.

 

  2. Change Management Planning

Change management is essential to mitigate risks and maximize adoption. A staged cutover strategy is recommended over a single-event migration, allowing for gradual testing and adjustments.

Develop a robust testing strategy to validate the platform’s performance under real-world conditions. Complement this with an organizational enablement strategy to train users and ensure they are comfortable with the new system. Adoption by your business units and users is one of the most critical factors which will determine the success of your CCaaS migration.

 

  3. Operational Considerations

Operational continuity is vital during migration. Start by understanding the reporting requirements for business managers to ensure no loss of visibility into critical metrics. Additionally, review monitoring processes to maintain visibility into system performance post-migration.

 

  4. Integration Planning

Integrating legacy infrastructure with the new CCaaS platform can present significant challenges. Document existing components, including FXO/FXS interfaces, Workforce Management solutions, FAX systems, wallboards, and specialty dialers. Verify that integrations comply with any regulatory requirements, such as HIPAA or FINRA.

Interactive Voice Response (IVR) systems often require specific integrations with local data sources or enterprise middleware. Assess these integrations to ensure call flows function as intended. For specialized applications, verify that they meet operational needs within the new environment.

 

  5. Fault Tolerance and Disaster Recovery

Testing fault tolerance and disaster recovery capabilities is a critical step in any CCaaS migration. Develop and execute a failsafe testing plan to ensure resilience against both premise-level and carrier-level failures. It is important to align with your IT organization’s standards for recovery time objective (RTO) and business uptime expectations. Disaster recovery plans must reflect these measures and be tested to protect against potential downtime.

 

  6. Scalability and Compliance

CCaaS solutions must scale with your business. Validate scalability by conducting load tests and documenting performance metrics. Compliance is equally important—ensure your migration adheres to industry standards like HIPAA, FedRAMP, or FINRA through thorough compliance testing and documentation.

 

Conclusion

A successful CCaaS migration hinges on meticulous planning, comprehensive testing, and strong change management. By following these best practices, you can minimize risks, ensure operational continuity, and set your organization up for long-term success with its new contact center platform. The result? An enhanced customer experience and a contact center infrastructure that grows with your business.

 

 

Don’t try to fit a Layout Builder peg in a Site Studio hole.
https://blogs.perficient.com/2024/11/14/dont-try-to-fit-a-layout-builder-peg-in-a-site-studio-hole/
Thu, 14 Nov 2024

How to ensure your toolset matches your vision, team and long term goals.

Seems like common sense, right? Use the right tool for the right purpose. However, in the DXP and Drupal space, we often see folks trying to fit their project to the tool and not the tool to the project.

There are many modules, profiles, and approaches to building Drupal out there, and almost all of them have their time and place. The key is knowing when to implement which and why. I am going to take a little time here and dive into one of those key decisions that we at Perficient find ourselves facing frequently, and how we work with our clients to ensure the proper approach is selected for their Drupal application.

Site Studio vs Standard Drupal (blocks, views, content, etc.) vs Layout Builder

I would say this is the most common area where we see confusion related to the best tooling and how to pick. To start, let’s do a summary of the various options (there are many more approaches available, but these are the common ones we encounter), as well as their pros and cons.

First, we have Acquia Site Studio, a low-code site management tool built on top of Drupal. And it is SLICK. It provides templates, components, helpers, and more that are editable by web users, allowing a well-trained content admin to control almost every aspect of the look and feel of the website. There are drag-and-drop editors for all templates that would traditionally be TWIG, as well as UI editors for styles, fonts and more. This is the Cadillac of low-code solutions for Drupal, but that comes with some trade-offs in terms of developer customizability and configuration management strategies. We have also noticed that not every content team actually utilizes the full scope of Site Studio features, which can lead to additional complexity without any benefit; but when the team is right, Site Studio is a very powerful tool.

The next option we frequently see is a standard Drupal build utilizing Content Types and Blocks to control page layouts, with WYSIWYG editors for rich content and a standard Drupal theme with SASS, TWIG templates, and so on. This is the option developers are most familiar with, and it offers the most flexibility for custom work as well as clean configuration management. The trade-off here is that most customizations will require a developer to build them out, and content editors are limited to coloring between the lines of what was initially built. We have experienced content teams that were very satisfied with the defined controls, but also teams that felt handcuffed by the limitations and desired more UI/UX customization without deployments or developer involvement.

The third and final option we will discuss here is the standard Drupal option described above, with the addition of Layout Builder. Layout Builder is a Drupal core module that enables users to attach layouts, such as 1 column, 2 column and more, to various Drupal entity types (content, users, etc.). These layouts then support the placement of blocks into their various regions to give users drag-and-drop flexibility over laying out their content. Layout Builder does not support full site templates or custom theme work such as site-wide CSS changes. Layout Builder can be a good middle ground for content teams not looking for the full customization and accompanying complexity of Site Studio, but desiring some level of content layout control. Layout Builder does come with some permissions and configuration management considerations. It is important to decide what is treated as content and what as configuration, as well as to define roles and permissions to ensure the right editors have access to the right level of customization.

Now that we have covered the options as well as the basic pros and cons of each, how do you know which tool is right for your team and your project? This is where we at Perficient start with a holistic review of your needs, short- and long-term goals, as well as the technical ability of your internal team. It is important to evaluate this honestly. Just because something has all the bells and whistles, do you have the team and time to utilize them, or is it a sunk cost with limited ROI? On the flip side, if you have a very technically robust team, you don’t want to handcuff them and leave them frustrated with limitations that could block marketing opportunities and the higher ROI they could deliver.

Additional considerations that can help guide your choice in toolset would be future goals and initiatives. Is a rebrand coming soon? Is your team going to quickly expand with more technical staff? These might point towards Site Studio as the right choice. Is your top priority consistency and limiting unnecessary customizations? Then standard structured content might be the best approach. Do you want to be able to customize your site, but just don’t have the time or budget to undertake Site Studio? Layout Builder might be something you should look at closely.

Perficient starts these considerations in the first discussions with our potential clients, and continues to guide them through the sales and estimation process to ensure the right basic Drupal tooling is selected. This then continues through implementation as we keep stakeholders informed about the best toolsets beyond the core systems. In future articles we will discuss the advantages and disadvantages of various SSO, DAM, analytics, and Drupal module solutions, as well as the new Starshot Drupal initiative and how it will impact the planning of your next Drupal build!

Agentforce Success Starts with Salesforce Data Cloud
https://blogs.perficient.com/2024/09/18/agentforce-success-starts-with-salesforce-data-cloud/
Wed, 18 Sep 2024

In today’s hyper-connected world, organizations are racing to provide their customers with personalized, seamless experiences across every channel. For companies rolling out Agentforce—a cutting-edge Salesforce-based solution for agents, brokers, or any field sales team—having a robust data foundation is crucial. This is where Salesforce Data Cloud shines. By integrating Salesforce Data Cloud into your Agentforce strategy, you can empower your agents with the right insights to better serve customers, close more deals, and enhance operational efficiency.

Here are seven reasons why Salesforce Data Cloud is the key to a successful Agentforce rollout:

1. Unified Customer Data

Salesforce Data Cloud is designed to be the central hub for customer data across all systems. It brings together data from various sources—CRM, social media, marketing platforms, transactional data, and more—into a single, unified profile. For Agentforce, this means agents will have a 360-degree view of each customer, allowing them to engage in more personalized conversations.

Agents can see customer preferences, past interactions, purchase history, and predictive insights in one dashboard. Whether your team is prospecting or assisting existing clients, having this level of insight is invaluable for delivering timely and relevant service.

2. Real-Time Insights for Informed Decision-Making

Data is only valuable if it’s actionable. With Salesforce Data Cloud, Agentforce users gain real-time insights powered by AI and predictive analytics. These insights help agents make data-driven decisions in the moment—whether it’s offering an upsell, adjusting strategies for closing a deal, or tailoring responses to specific client needs.

For example, if an agent notices that a high-value customer is interacting less with your services, the system could flag this and provide recommendations for proactive outreach. This ability to respond in real-time can significantly enhance client retention and satisfaction.

3. Seamless Integration with Existing Systems

Salesforce Data Cloud integrates seamlessly with your existing tools and platforms, whether they are part of the Salesforce ecosystem or external. As Agentforce often involves using multiple apps—like financial systems, call center tools, and communication platforms—Salesforce Data Cloud serves as the glue that binds them together.

This integration helps ensure that agents have accurate, up-to-date information at their fingertips, regardless of where the data originates. The result is a smoother workflow, faster responses, and improved customer experiences.

MuleSoft can be used to bring in data from API-based external systems.  Also, no-ETL data sharing can allow access to data lakes like Snowflake and Databricks.

4. Enhanced Personalization Through AI

The power of AI-driven personalization is one of Salesforce Data Cloud’s most compelling features. By leveraging Einstein AI, agents can use predictive analytics to forecast customer needs and behaviors. For Agentforce, this means providing agents with the capability to engage in highly targeted, context-rich interactions that feel tailored to each individual client.

Imagine an insurance agent who, based on data trends, receives a suggestion to recommend a particular product to a customer just before they need it. This level of personalization doesn’t just boost sales—it strengthens customer loyalty and builds trust in your brand.

5. Improved Collaboration Across Teams

In many organizations, the challenge isn’t just managing customer data but ensuring that different departments can effectively collaborate around it. Salesforce Data Cloud’s unified platform allows for better cross-team collaboration. Marketing, sales, service, and IT teams can all access the same customer data, fostering improved communication and aligned strategies.

In the Agentforce environment, this translates to faster handoffs between teams, consistent messaging, and the ability to serve customers holistically. Agents no longer operate in silos but as part of a unified effort to deliver exceptional customer service.

6. Scalable and Future-Proof

As your Agentforce team grows and your business scales, Salesforce Data Cloud ensures that your data infrastructure can keep up. The platform is built to handle vast amounts of data while maintaining fast processing speeds and real-time insights. It’s also highly customizable, meaning you can tailor it to meet the evolving needs of your team and business processes.

Whether you’re adding new agents, expanding into new markets, or launching new products, Salesforce Data Cloud provides the scalability and flexibility needed to support your growth.

7. Enhanced Security and Compliance

For organizations dealing with sensitive customer data—like in insurance, real estate, or financial services—security is paramount. Salesforce Data Cloud is designed with enterprise-grade security features, ensuring that your data is protected at all times. Additionally, the platform is compliant with major global privacy regulations such as GDPR and CCPA, which is critical for industries where data privacy is a top priority.

For Agentforce, this means you can focus on rolling out your strategy with confidence, knowing that your customer data is secure and your organization remains compliant with the latest regulations.

Don’t DIY (do it yourself).  Focus on running your business and let Agentforce and Data Cloud wow your customers.

Unlock the Full Potential of Agentforce

Salesforce Data Cloud is the key to unlocking the full potential of your Agentforce rollout. By centralizing customer data, providing real-time insights, enabling AI-driven personalization, and fostering cross-team collaboration, it empowers your agents to deliver exceptional service and drive business success. As your organization grows and your customer base expands, Salesforce Data Cloud offers the scalability and security needed to future-proof your operations.

If you’re looking to ensure your Agentforce rollout is a success, implementing and integrating Salesforce Data Cloud should be at the top of your strategy. With the right data infrastructure in place, your agents will be equipped to meet customer needs with precision, agility, and a personalized touch.

Stay Informed About Agentforce and More! 

Learn more about Salesforce’s new Agentic AI Platform and more by browsing our Salesforce blog site.

Perficient + Salesforce  

We are a Salesforce Summit Partner with more than two decades of experience delivering digital solutions in the manufacturing, automotive, healthcare, financial services, and high-tech industries. Our team has deep expertise in all Salesforce Clouds and products, artificial intelligence, DevOps, and specialized domains to help you reap the benefits of implementing Salesforce solutions.   

Computational Complexity Theory
https://blogs.perficient.com/2024/09/10/computational-complexity-theory/
Tue, 10 Sep 2024

Computational complexity studies the efficiency of algorithms. It helps classify algorithms in terms of time and space to identify the amount of computing resources needed to solve a problem. The Big O, Big Ω, and Big Θ notations are used to describe the asymptotic behavior of an algorithm as a function of the input size. In computer science, computational complexity theory is fundamental to understanding the limits of how efficiently an algorithm can be computed.

This paper seeks to determine when an algorithm provides solutions in a short computational time, and to identify those that generate solutions with such long computational times that they can be categorized as intractable or unsolvable, using polynomial functions as a classical representation of computational complexity. Some mathematical notations used to represent computational complexity, its mathematical definition from the perspective of function theory and predicate calculus, as well as complexity classes and their main characteristics, will be explained. Mathematical expressions can describe the time behavior of a function and show the computational complexity. In a nutshell, we can compare the behavior of an algorithm over time with a mathematical function such as f(n), f(n²), etc.

In logic and algorithms, there has always been a search for how to measure execution time, calculate the computational time to store data, determine whether an algorithm generates a cost or a benefit in solving a problem, or design algorithms that generate a viable solution.

Asymptotic notations

What is it?

Asymptotic notation describes how an algorithm behaves over time, when its arguments tend to a specific limit, usually when they grow very large (tend to infinity). It is mainly used in the analysis of algorithms to show their efficiency and performance, especially in terms of execution time or memory usage as the size of the input data increases.

Asymptotic notation represents the behavior of an algorithm over time by making a comparison with mathematical functions. If an algorithm has a loop that repeats different actions until a condition is fulfilled, it can be said that this algorithm behaves like a linear function; if it has another loop nested within the first, it can be compared to a quadratic function.

How is an asymptotic notation represented?

Asymptotic notations can be expressed in 3 ways:

  • O(n): The term ‘Big O’ or BigO refers to an upper limit on the execution time of an algorithm. It is used to describe the worst-case scenario. For example, if an algorithm is O(n²) in the worst-case scenario, its execution time will increase proportionally to n², where n is the input size.
  • Ω(n): ‘Big Ω’ or BigΩ describes a lower limit on the execution time of an algorithm and is used to describe the best-case scenario. If the algorithm has the behavior of Ω(n), it means that in the best case the execution time of the algorithm will grow at least proportionally to n.
  • Θ(n): ‘Big Θ’ or BigΘ refers to both an upper and a lower bound on the time behavior of an algorithm. It is used to state that, regardless of the case, the execution time of the algorithm increases proportionally to the specified value. For example, if an algorithm is Θ(n log n), its execution time will increase proportionally to n log n in both the best and worst cases.

In a nutshell, asymptotic notation is a mathematical representation of an algorithm's growth, expressed in terms of computational complexity. Now, if we express an asymptotic notation in polynomial terms, it allows us to see how the computational cost increases as a reference variable increases. For example, let's evaluate a polynomial function f(n) = n + 7 to conclude that this function has linear growth. Compare this linear function with a second one given by g(n) = n³ − 2; the function g(n) will have cubic growth when n is larger.


Figure 1: f(n) = n + 7 vs g(n) = n³ − 2

From a mathematical point of view, it can be stated that:

The function f(n) = O(n) and the function g(n) = O(n³)

 

Computational complexity types

Finding an algorithm that solves a problem efficiently is crucial in the analysis of algorithms. To achieve this we must be able to express the algorithm's behavior as a function; for example, if we can express the algorithm as a polynomial function f(n), a polynomial running time can be established to determine the algorithm's efficiency. In general, a good algorithm design depends on whether it runs in polynomial time or less.

Frequency counter and arithmetic sum and bounding rules

To express an algorithm as a mathematical function and know its execution time, it is necessary to find an algebraic expression that represents the number of executions or instructions of the algorithm. The frequency counter is a polynomial representation that has been used throughout the topic of computational complexity. Below are some simple examples in C# showing how to calculate the computational complexity of some algorithms. We use Big O because it expresses computational complexity in the worst-case scenario.

Constant computational complexity

Analyze the function that adds 2 numbers and returns the result of the sum:

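A minimal C# sketch of such a function might look like the following (the method and variable names are illustrative):

static int Add(int a, int b)
{
    int result = a + b;   // executed once: O(1)
    return result;        // executed once: O(1)
}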

With the Big O notation for each of the instructions in the above algorithm, the number of times each line of code is executed can be determined. In this case, each line is executed only once. Now, to determine the computational complexity or the Big O of this algorithm, the complexity for each of the instructions must be summed up:

O(1) + O(1) = O(2)

The constant value is equal to 2, so the polynomial time of the algorithm is constant, i.e. O(1).

Polynomial Computational Complexity

Now let’s look at another example with a slightly more complex algorithm. We need to traverse an array containing the numbers from 1 to 100 and the total sum of the whole array is required:

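A C# sketch consistent with that description might be the following (names are illustrative, and the line references in the analysis below map approximately onto the marked statements):

static int SumArray(int[] numbers)   // numbers holds the values 1 to 100, so n = 100
{
    int total = 0;                   // "line 2": executed once, O(1)
    foreach (int value in numbers)   // "line 3": the loop iterates n times, O(n)
    {
        total += value;              // "line 4": executed n times, O(n)
    }
    return total;                    // "line 6": executed once, O(1)
}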

In the sequence of the algorithm, lines 2 and 6 are executed only once, but lines 3 and 4 will be repeated n times, until reaching 100 iterations (n = 100, the size of the array). To calculate the computational cost of this algorithm, the following is done:

O(1) + O(n) + O(n) + O(1) = O(2n + 2)

From this result, it can be stated that the algorithm executes in linear time, given that O(2n + 2) ≈ O(n). Let's analyze another algorithm, similar but with two cycles one after the other. These algorithms are those whose execution time depends linearly on two variables, n and m. This indicates that the running time of the algorithm is proportional to the sum of the sizes of two independent inputs. The computational complexity for this type of algorithm is O(n + m).

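A C# sketch of two consecutive, independent loops over inputs of sizes n and m might be (names are illustrative):

static int SumTwoArrays(int[] first, int[] second)   // first has n elements, second has m
{
    int total = 0;                       // O(1)
    int i = 0;                           // O(1)
    while (i < first.Length)             // condition evaluated n + 1 times
    {
        total += first[i++];             // executed n times
    }
    int j = 0;                           // O(1)
    while (j < second.Length)            // condition evaluated m + 1 times
    {
        total += second[j++];            // executed m times
    }
    return total;                        // O(1)
}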

In this algorithm, the two cycles are independent, since the first while condition is evaluated n + 1 times while the second is evaluated m + 1 times, with n ≠ m. Therefore, the computational cost is given by:

O(7) + O(2n) + O(2m) ≈ O(n + m)

Quadratic computational complexity

For the third example, the computational cost for an algorithm containing nested cycles is analyzed:

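A C# sketch with one loop nested inside another might be (names are illustrative; the line number cited below refers approximately to the inner loop):

static int NestedLoopSum(int[] items)    // n = items.Length
{
    int total = 0;                       // O(1)
    int i = 0;                           // O(1)
    while (i < items.Length)             // outer condition: evaluated n + 1 times
    {
        int j = 0;                       // executed n times
        while (j < items.Length)         // inner condition: evaluated n(n + 1) times in total
        {
            total += items[j++];         // executed n * n times
        }
        i++;                             // executed n times
    }
    return total;                        // O(1)
}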

The conditions in while and do-while cycles are executed n + 1 times, compared to a foreach cycle; these loops take one additional step to validate the condition that ends the loop. Line number 7, by repeating n times and doing its corresponding validation, gives a computational complexity at this point of n(n + 1). In the end, the computational complexity of this algorithm results in the following:

O(6) + O(4n) + O(2n²) = O(2n² + 4n + 6) ≈ O(n²)

Logarithmic computational complexity

  • Logarithmic complexity in base 2 (log₂(n)): Algorithms with logarithmic complexity O(log n) grow very slowly compared to other complexity types such as O(n) or O(n²). Even for large inputs, the number of operations does not increase significantly. Let us analyze the following algorithm:

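A C# sketch of such a loop, which repeatedly halves n and counts the iterations in k, might be (names are illustrative):

static int CountHalvings(int n)   // e.g. n = 64
{
    int k = 0;                    // O(1)
    while (n > 1)                 // condition evaluated log2(n) + 1 times
    {
        n = n / 2;                // executed log2(n) times
        k++;                      // executed log2(n) times
    }
    return k;                     // O(1)
}

For n = 64, CountHalvings returns 6, consistent with the table and the divisions shown below.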

Using a table, let us analyze the step-by-step execution of the algorithm proposed above:

 

Iteration (k)    n before n = n / 2    n after n = n / 2
0                64                    32
1                32                    16
2                16                    8
3                8                     4
4                4                     2
5                2                     1

Table 1: Logarithmic loop algorithm execution

If you examine the sequence in Table 1, you can see that its behavior has a logarithmic correlation. A logarithm is the power to which a base must be raised to obtain another number. For example, log₁₀(100) = 2 because 10² = 100. Therefore, it is clear that base 2 must be used for the proposed algorithm:

64/2 = 32

32/2 = 16

16/2 = 8

8/2 = 4

4/2 = 2

2/2 = 1

It can be calculated that log₂(64) = 6, which means that the loop has been executed six (6) times (i.e. when k takes the values {0, 1, 2, 3, 4, 5}). This conclusion confirms that the while loop of this algorithm is log₂(n), and the computational cost is shown as:

 

O(1) + O(1) + O(log₂(n) + 1) + O(log₂(n)) + O(log₂(n)) + O(1)

= O(4) + O(3 log₂(n))

O(4) + O(3 log₂(n)) ≈ O(log₂(n))

  • Linearithmic complexity (n log(n)): Algorithms that are O(n log(n)) have an execution time that increases in proportion to the product of the input size n and the logarithm of n. This indicates that the execution time does not double if the input size is doubled; instead, it increases less sharply due to the logarithmic factor. This type of complexity is less efficient than O(n) but more efficient than O(n²).

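A C# sketch of an algorithm with this behavior might be the following (names are illustrative): it recursively splits a group of size n in half and performs n units of work at each recursion level before recursing into the two halves.

static long CountOperations(int n)
{
    if (n <= 1)
    {
        return 1;                 // base case: a subgroup with a single element
    }
    long levelWork = n;           // n operations performed at this recursion level
    // Split into two halves and recurse: T(n) = 2T(n/2) + O(n), which grows as n log n
    return levelWork + CountOperations(n / 2) + CountOperations(n - n / 2);
}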

 

T(n) = 2T(n/2) + O(n) ≈ O(n log(n))

Analyzing the algorithm proposed above, which mirrors the merge sort algorithm: it performs a similar division, but instead of sorting elements it counts the possible divisions into subgroups. The complexity of this algorithm is O(n log(n)) due to the recursion and because n operations are performed at each recursion level until the base case is reached.

Finally, the summary graph in Figure 2 shows the behavior of the number of operations performed by functions of each computational complexity class.

Example

An integration service is periodically executed to retrieve customer IDs associated with four or more companies registered with a parent company. The process performs individual queries for each company, accessing various databases that use different persistence technologies. As a result, an array of data containing the customer IDs is generated without checking or removing possible duplicates.

In this case, the initial approach would involve comparing each customer ID with all the other elements in the array, resulting in a quadratic number of comparisons, i.e. O(n²):

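A C# sketch of that naive, nested-loop approach might be (names are illustrative):

using System.Collections.Generic;

static List<string> RemoveDuplicates(string[] customerIds)
{
    var unique = new List<string>();
    for (int i = 0; i < customerIds.Length; i++)   // n iterations
    {
        bool isDuplicate = false;
        for (int j = 0; j < i; j++)                // compares against every earlier element: O(n^2) overall
        {
            if (customerIds[i] == customerIds[j])
            {
                isDuplicate = true;
                break;
            }
        }
        if (!isDuplicate)
        {
            unique.Add(customerIds[i]);
        }
    }
    return unique;
}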

In a code review, the author of this algorithm would be advised to optimize the current approach due to its inefficiency. To solve the problems related to nested loops, a more efficient approach can be taken by using a HashSet. Here is how to use this object to improve performance, reducing complexity from O(n²) to O(n):

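A C# sketch using a HashSet to track IDs that have already been seen might be (names are illustrative):

using System.Collections.Generic;

static List<string> RemoveDuplicatesFast(string[] customerIds)
{
    var seen = new HashSet<string>();
    var unique = new List<string>();
    foreach (string id in customerIds)   // single pass over the array: O(n)
    {
        if (seen.Add(id))                // Add returns false when the id was already seen; O(1) on average
        {
            unique.Add(id);
        }
    }
    return unique;
}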

Currently, in C# you can use LINQ over an IEnumerable, which allows you to perform the same task in a single line of code. But in this approach, several clarifications must be made:

  • Previously, it was noted that a single line of code can be interpreted as having O(1) complexity. In this case it is different, because the Distinct function traverses the original collection and returns a new sequence containing only the unique elements, removing any duplicates using a HashSet, which, as mentioned earlier, results in O(n) complexity.
  • The HashSet also has a drawback: in the worst case, when collisions are frequent, the complexity can degrade to O(n²). However, this is extremely rare and typically depends on the quality of the hash function and the characteristics of the data in the collection.

The correct approach should be:

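A C# sketch of that single-line approach using LINQ's Distinct might be (names are illustrative):

using System.Linq;

static string[] RemoveDuplicatesLinq(string[] customerIds)
{
    // Distinct walks the collection once and uses a set internally: O(n) on average
    return customerIds.Distinct().ToArray();
}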

Conclusions

In general, we can reach three important conclusions about computational complexity.

  • To evaluate and compare the efficiency of various algorithms, computational complexity is essential. It helps us understand how the execution time or resource usage (such as memory) of an algorithm increases with input size. This analysis is essential for choosing the most appropriate algorithm for a particular problem, especially when working with significant amounts of data.
  • Algorithms with lower computational complexity can improve system performance significantly. For example, the choice of an O(n log n) algorithm instead of an O(n²) one can have a significant impact on the amount of time required to process large amounts of data. Efficient algorithms are essential to ensure that the system is fast and scalable in real-world applications such as search engines, image processing, and big data analytics.


Figure 2: Operation vs Elements

 

  • Understanding computational complexity helps developers and data scientists to design and optimize algorithms. It allows for finding bottlenecks and performance improvements. By adapting the algorithm design to the specific needs of the problem and the constraints of the execution environment, computational complexity analysis allows informed trade-offs between execution time and the use of other resources, such as memory.

Understanding Microservices Architecture: Benefits and Challenges Explained
https://blogs.perficient.com/2024/08/06/understanding-microservices-architecture-benefits-and-challenges-explained/
Tue, 06 Aug 2024


Microservices architecture is a transformative approach in backend development that has gained immense popularity in recent years. To fully grasp the advantages of microservices, it is essential first to understand monolithic architecture, as microservices emerged primarily to address its limitations. This article will delve into the differences between monolithic and microservices architectures, the benefits and challenges of adopting microservices, and how they function in a modern development landscape.

What is Monolithic Architecture?

Monolithic architecture is a traditional software development model where an application is built as a single, unified unit. This means all components of the application, such as the user interface, business logic, and database access, are intertwined within one codebase. For instance, if we consider an eCommerce web application, all functionalities, including payment processing, user authentication, and product listings, would be combined into one single repository.

While this model is intuitive and easier to manage for small projects or startups, it has significant drawbacks. The primary issues include:

  • Redeployment Challenges: Any minor change in one component necessitates redeploying the entire application.
  • Scaling Limitations: Scaling specific functionalities, like authentication, is not feasible without scaling the entire application.
  • High Interdependencies: Multiple developers working on the same codebase can lead to conflicts and dependencies that complicate development.

Example: eCommerce Web Application

The Shift to Microservices

As organizations like Netflix began to face the limitations of monolithic architecture, they sought solutions that could enhance flexibility, scalability, and maintainability. This led to the adoption of microservices architecture, which involves breaking down applications into smaller, independent services. Each service functions as a standalone unit, enabling teams to develop, deploy, and scale them independently.

Defining Microservices Architecture

Microservices architecture is characterized by several key features:

  • Independently Deployable Services: Each microservice can be deployed independently without affecting the entire application.
  • Loosely Coupled Components: Services interact with each other through well-defined APIs, minimizing dependencies.
  • Technology Agnostic: Different services can be built using different technologies, allowing teams to choose the best tools for their needs.


Create new independent projects and separate deployment pipelines for the following services in an eCommerce web application:

  1. Authentication Service
  2. Shipping Service
  3. Taxation Service
  4. Product Listing Service
  5. Payment Service

These services can be accessed in the UI through an API gateway.

Benefits of Microservices Architecture

Transitioning to microservices offers numerous advantages that can significantly improve development workflows and application performance:

1. Independent Deployment

One of the most significant benefits is the ability to deploy services independently. For example, if a change is made to the authentication microservice, it can be updated without redeploying the entire application. This minimizes downtime and ensures that other services remain operational.

2. Flexible Scaling

With microservices, scaling becomes much more manageable. If there is an increase in user activity, developers can scale specific services, such as the payments service, without impacting others. This flexibility allows for efficient resource management and cost savings.

3. Technology Flexibility

Microservices architecture enables teams to use different programming languages or frameworks for different services. For instance, a team might choose Python for the authentication service while using Java for payment processing, optimizing performance based on service requirements.

How Microservices Communicate

Microservices need to communicate effectively to function as a cohesive application. There are several common methods for interaction:

1. Synchronous Communication

In synchronous communication, microservices communicate through API calls. Each service exposes an API endpoint, allowing other services to send requests and receive responses. For example, the payments service might send a request to the listings service to verify availability.
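As a hedged sketch of that example, the snippet below shows the payments service making a blocking HTTP call to the listings service in TypeScript; the service URL, route, and response shape are assumptions for illustration.

interface AvailabilityResponse {
  productId: string;
  inStock: boolean;
}

// Base URL of the listings service; assumed to come from configuration.
const LISTINGS_BASE_URL = process.env.LISTINGS_URL ?? "http://listings-service:4002";

// The payments service waits for this answer before proceeding — that is what
// makes the interaction synchronous.
export async function isProductAvailable(productId: string): Promise<boolean> {
  // Uses the global fetch available in Node 18+.
  const res = await fetch(`${LISTINGS_BASE_URL}/products/${productId}/availability`);
  if (!res.ok) {
    // The caller decides how to degrade: retry, trip a circuit breaker, or fail the payment.
    throw new Error(`Listings service responded with HTTP ${res.status}`);
  }
  const body = (await res.json()) as AvailabilityResponse;
  return body.inStock;
}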

2. Asynchronous Communication

Asynchronous communication can be achieved using message brokers, such as RabbitMQ or Apache Kafka. In this model, a service sends a message to the broker, which then forwards it to the intended recipient service. This method decouples services and enhances scalability.
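To make the pattern concrete, here is a hedged TypeScript sketch using Apache Kafka via the kafkajs client; the broker address, topic name, and event shape are assumptions, and RabbitMQ with an AMQP client would follow the same publish/consume shape.

import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "payments-service", brokers: ["kafka:9092"] });

// Producer side: the payments service emits an event and moves on; it does not
// wait for any consumer to process it, which is what decouples the services.
export async function publishPaymentCompleted(orderId: string, amount: number) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "payment-events",
    messages: [{ key: orderId, value: JSON.stringify({ orderId, amount, status: "COMPLETED" }) }],
  });
  await producer.disconnect();
}

// Consumer side: the shipping service reacts whenever a payment completes.
export async function startShippingConsumer() {
  const consumer = kafka.consumer({ groupId: "shipping-service" });
  await consumer.connect();
  await consumer.subscribe({ topic: "payment-events", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value?.toString() ?? "{}");
      console.log(`Scheduling shipment for order ${event.orderId}`);
    },
  });
}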

3. Service Mesh

A service mesh, like Istio, can be utilized to manage service-to-service communications, providing advanced routing, load balancing, and monitoring capabilities. This approach is particularly effective in complex microservices environments.

Challenges of Microservices Architecture

Despite its advantages, microservices architecture is not without challenges. Organizations must be aware of potential drawbacks:

1. Management Overhead

With multiple microservices, management complexity increases. Each service requires its own deployment pipeline, monitoring, and maintenance, leading to higher overhead costs.

2. Infrastructure Costs

The infrastructure needed to support microservices can be expensive. Organizations must invest in container orchestration tools, like Kubernetes, and ensure robust networking to facilitate communication between services.

3. Development Complexity

While microservices can simplify specific tasks, they also introduce new complexities. Developers must manage inter-service communication, data consistency, and transaction management across independent services.

When to Use Microservices

Microservices architecture is generally more beneficial for larger organizations with complex applications. It is particularly suitable when:

  • You have a large application with distinct functionalities.
  • Your teams are sizable enough to manage individual microservices.
  • Rapid deployment and scaling are critical for your business.
  • Technology diversity is a requirement across services.

Microservices architecture presents a modern approach to application development, offering flexibility, scalability, and independent service management. While it comes with its set of challenges, the benefits often outweigh the drawbacks for larger organizations. As businesses continue to evolve, understanding when and how to implement microservices will be crucial for maintaining competitive advantage in the digital landscape.

By embracing microservices, organizations can enhance their development processes, improve application performance, and respond more effectively to changing market demands. Whether you are considering a transition to microservices or just beginning your journey, it is essential to weigh the pros and cons carefully and adapt your approach to meet your specific needs.

 

Revolutionizing OpenAI Chatbot UI Deployment with DevSecOps https://blogs.perficient.com/2024/07/05/revolutionizing-openai-chatbot-ui-deployment-with-devsecops/ https://blogs.perficient.com/2024/07/05/revolutionizing-openai-chatbot-ui-deployment-with-devsecops/#respond Fri, 05 Jul 2024 17:12:29 +0000 https://blogs.perficient.com/?p=365644

In the contemporary era of digital platforms, capturing and maintaining user interest stands as a pivotal element determining the triumph of any software. Whether it’s websites or mobile applications, delivering engaging and tailored encounters to users holds utmost significance. In this project, we aim to implement DevSecOps for deploying an OpenAI Chatbot UI, leveraging Kubernetes (EKS) for container orchestration, Jenkins for Continuous Integration/Continuous Deployment (CI/CD), and Docker for containerization.

What is ChatBOT?

A ChatBOT is an artificial intelligence-driven conversational interface that draws from vast datasets of human conversations for training. Through sophisticated natural language processing methods, it comprehends user inquiries and furnishes responses akin to human conversation. By emulating the nuances of human language, ChatBOTs elevate user interaction, offering tailored assistance and boosting engagement levels.

What Makes ChatBOTs a Compelling Choice?

The rationale behind opting for ChatBOTs lies in their ability to revolutionize user interaction and support processes. By harnessing artificial intelligence and natural language processing, ChatBOTs offer instantaneous and personalized responses to user inquiries. This not only enhances user engagement but also streamlines customer service, reduces response times, and alleviates the burden on human operators. Moreover, ChatBOTs can operate round the clock, catering to users’ needs at any time, thus ensuring a seamless and efficient interaction experience. Overall, the adoption of ChatBOT technology represents a strategic move towards improving user satisfaction, operational efficiency, and overall business productivity.

Key Features of a ChatBOT Include:

  1. Natural Language Processing (NLP): ChatBOTs leverage NLP techniques to understand and interpret user queries expressed in natural language, enabling them to provide relevant responses.
  2. Conversational Interface: ChatBOTs utilize a conversational interface to engage with users in human-like conversations, facilitating smooth communication and interaction.
  3. Personalization: ChatBOTs can tailor responses and recommendations based on user preferences, past interactions, and contextual information, providing a personalized experience.
  4. Multi-channel Support: ChatBOTs are designed to operate across various communication channels, including websites, messaging platforms, mobile apps, and voice assistants, ensuring accessibility for users.
  5. Integration Capabilities: ChatBOTs can integrate with existing systems, databases, and third-party services, enabling them to access and retrieve relevant information to assist users effectively.
  6. Continuous Learning: ChatBOTs employ machine learning algorithms to continuously learn from user interactions and improve their understanding and performance over time, enhancing their effectiveness.
  7. Scalability: ChatBOTs are scalable and capable of handling a large volume of concurrent user interactions without compromising performance, ensuring reliability and efficiency.
  8. Analytics and Insights: ChatBOTs provide analytics and insights into user interactions, engagement metrics, frequently asked questions, and areas for improvement, enabling organizations to optimize their ChatBOT strategy.
  9. Security and Compliance: ChatBOTs prioritize security and compliance by implementing measures such as encryption, access controls, and adherence to data protection regulations to safeguard user information and ensure privacy.
  10. Customization and Extensibility: ChatBOTs offer customization options and extensibility through APIs and development frameworks, allowing organizations to adapt them to specific use cases and integrate additional functionalities as needed.

Through the adoption of DevSecOps methodologies and harnessing cutting-edge technologies such as Kubernetes, Docker, and Jenkins, we are guaranteeing the safe, scalable, and effective rollout of ChatBOT. This initiative aims to elevate user engagement and satisfaction levels significantly.

I extend my heartfelt appreciation to McKay Wrigley, the visionary behind this project. His invaluable contributions to the realm of DevSecOps have made endeavors like the ChatBOT UI project achievable.

Pipeline Workflow

[Diagram: Chatbot UI DevSecOps pipeline workflow]

 

Let’s start building our pipelines for the deployment of the OpenAI Chatbot application. I will be creating two pipelines in Jenkins:

  1. Creating the infrastructure using Terraform on the AWS cloud.
  2. Deploying the Chatbot application on the EKS cluster nodes.

Prerequisite: a Jenkins server configured with Docker, Trivy, SonarQube, Terraform, the AWS CLI, and kubectl.

Once we have established and configured a Jenkins server equipped with all the necessary tools (by following my previous blog), we can start building our DevSecOps pipeline for the OpenAI chatbot deployment.

The first thing we need to do is configure the Terraform remote backend.

  1. Create an S3 bucket with any name.
  2. Create a DynamoDB table named “Lock-Files” with the partition key “LockID”.
  3. Update the S3 bucket name and DynamoDB table name in the backend.tf file, which is in the EKS-TF folder of the GitHub repo.

Create Jenkins Pipeline

Let’s log in to our Jenkins console now that the prerequisites are complete. Click “New Item”, give it a name, select Pipeline, and click OK.

I want to create this pipeline with build parameters so that apply and destroy can be selected at build time. Add the parameter inside the job as shown in the image below.

[Image: Terraform pipeline build parameter]

Let’s add the pipeline; the Definition will be Pipeline script.

pipeline{
    agent any
    stages {
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/sunsunny-hub/Chatbot-UIv2.git'
            }
        }
        stage('Terraform version'){
             steps{
                 sh 'terraform --version'
             }
        }
        stage('Terraform init'){
             steps{
                 dir('EKS-TF') {
                      sh 'terraform init'
                   }      
             }
        }
        stage('Terraform validate'){
             steps{
                 dir('EKS-TF') {
                      sh 'terraform validate'
                   }      
             }
        }
        stage('Terraform plan'){
             steps{
                 dir('EKS-TF') {
                      sh 'terraform plan'
                   }      
             }
        }
        stage('Terraform apply/destroy'){
             steps{
                 dir('EKS-TF') {
                      sh 'terraform ${action} --auto-approve'
                   }      
             }
        }
    }
}

Click Apply and Save, then Build with Parameters, and select the action as apply.

In the stage view, provisioning takes up to about 10 minutes.

Blue Ocean output:

[Image: Terraform pipeline stages in Blue Ocean]

Check in your AWS console whether the EKS cluster was created.

[Image: EKS cluster in the AWS console]

An EC2 instance is created for the node group.

[Image: EC2 instance for the EKS node group]

Now let’s create a new pipeline for the chatbot application. This pipeline will deploy the chatbot application in a Docker container and, after a successful deployment, deploy the same Docker image to the EKS cluster provisioned above.

Under the Pipeline section, provide the details below.

Definition: Pipeline script from SCM
SCM: Git
Repo URL: Your GitHub repo
Credentials: Your GitHub credentials
Branch: main
Script Path: The path to your Jenkinsfile in the GitHub repo

[Images: Jenkins pipeline configuration]

Click Apply and Save, then click Build. Upon successful execution, all stages appear green.

[Image: successful pipeline run with all stages green]

SonarQube console:

[Image: SonarQube analysis report]

You can see that the report has been generated and the status shows as failed. You can ignore this for the purposes of this POC, but in a real project all of the quality profiles and gates need to pass.

Dependency Check:

[Image: OWASP Dependency-Check results]

Trivy File scan:

[Image: Trivy filesystem scan results]

Trivy Image Scan:

[Image: Trivy image scan results]

Docker Hub:

[Image: image pushed to Docker Hub]

Now access the application on port 3000 of the Jenkins server EC2 instance’s public IP.

Note: Ensure that port 3000 is permitted in the Security Group of the Jenkins server.

[Image: Chatbot UI running in Docker on port 3000]

Click on openai.com (the blue link).

This will redirect you to the ChatGPT login page where you can enter your email and password. In the API Keys section, click on “Create New Secret Key.”

[Image: creating an OpenAI API key]

Give the key a name and copy it. Return to the chatbot UI we deployed; at the bottom of the page you will see an OpenAI API Key field. Paste the generated key and click save (the check mark).

[Image: entering the OpenAI API key in the Chatbot UI]

The UI looks like this:

[Image: Chatbot UI after saving the API key]

Now you can ask questions and test it.

[Image: Chatbot UI answering a test question]

Deployment on EKS

Now we need to add a credential for the EKS cluster, which will be used to deploy the application to the EKS cluster nodes. SSH into the Jenkins server and run this command to add the cluster context:

aws eks update-kubeconfig --name <clustername> --region <region>

It will generate a Kubernetes configuration file. Navigate to the directory where the config file is located and copy its contents.

cd .kube
cat config

Save the copied configuration locally as a text file at a location of your choice.

[Image: saved kubeconfig file]

Next, in the Jenkins Console, add this file to the Credentials section with the ID “k8s” as a secret file.

[Image: “k8s” secret file credential in Jenkins]

Finally, incorporate this deployment stage into your Jenkinsfile.

stage('Deploy to kubernetes'){
    steps{
        withAWS(credentials: 'aws-key', region: 'ap-south-1'){
            script{
                withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'k8s', namespace: '', restrictKubeConfigAccess: false, serverUrl: '') {
                    sh 'kubectl apply -f k8s/chatbot-ui.yaml'
                }
            }
        }
    }
}

Now rerun the Jenkins pipeline.

Upon Success:

[Image: successful deployment to EKS]

On the Jenkins server, run one of these commands:

kubectl get all
kubectl get svc    # either one works

This creates a Classic Load Balancer, which you can see in the AWS console.

[Images: Classic Load Balancer in the AWS console]

Copy the DNS name and paste it into your browser to use it.

Note: Follow the same process to get an OpenAI API key and add it to the Chatbot UI to get responses.

[Image: Chatbot UI served from the EKS load balancer]

The complete Jenkinsfile:

pipeline{
    agent any
    tools{
        jdk 'jdk17'
        nodejs 'node19'
    }
    environment {
        SCANNER_HOME=tool 'sonar-scanner'
    }
    stages {
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/sunsunny-hub/Chatbot-UIv2.git'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage("Sonarqube Analysis "){
            steps{
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Chatbot \
                    -Dsonar.projectKey=Chatbot '''
                }
            }
        }
        stage("quality gate"){
           steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token' 
                }
            } 
        }
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . > trivyfs.json"
            }
        }
        stage("Docker Build & Push"){
            steps{
                script{
                   withDockerRegistry(credentialsId: 'docker', toolName: 'docker'){   
                       sh "docker build -t chatbot ."
                       sh "docker tag chatbot surajsingh16/chatbot:latest "
                       sh "docker push surajsingh16/chatbot:latest "
                    }
                }
            }
        }
        stage("TRIVY"){
            steps{
                sh "trivy image surajsingh16/chatbot:latest > trivy.json" 
            }
        }
        stage ("Remove container") {
            steps{
                sh "docker stop chatbot | true"
                sh "docker rm chatbot | true"
             }
        }
        stage('Deploy to container'){
            steps{
                sh 'docker run -d --name chatbot -p 3000:3000 surajsingh16/chatbot:latest'
            }
        }
        stage('Deploy to kubernetes'){
            steps{
                withAWS(credentials: 'aws-key', region: 'ap-south-1'){
                    script{
                        withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'k8s', namespace: '', restrictKubeConfigAccess: false, serverUrl: '') {
                            sh 'kubectl apply -f k8s/chatbot-ui.yaml'
                        }
                    }
                }
            }
        }
    }
}

I hope you have successfully deployed the OpenAI Chatbot UI Application. You can also delete the resources using the same Terraform pipeline by selecting the action as “destroy” and running the pipeline.

Unleashing CI/CD Magic in Boomi’s Integration https://blogs.perficient.com/2024/06/12/unleashing-ci-cd-magic-in-boomis-integration/ https://blogs.perficient.com/2024/06/12/unleashing-ci-cd-magic-in-boomis-integration/#respond Wed, 12 Jun 2024 12:18:03 +0000 https://blogs.perficient.com/?p=364011

What is CI/CD?   

A CI/CD (Continuous Integration/Continuous Deployment) pipeline is an automated workflow or series of steps that developers use to build, test, and deploy their code changes. It’s a crucial part of modern software development, promoting efficiency, reliability, and consistency in the software development process.
[Image: CI/CD pipeline overview]

Why Do We Need CI/CD?

In many organizations, the integration processes and other artifacts developed in the integration platform must be integrated with external CI/CD workflows and tools such as GitHub and Jenkins to automate and coordinate deployment.

With the help of CI/CD, it is possible to develop software and products faster than ever before, move code into production continuously, and guarantee a steady stream of new features.

Understanding CI/CD

Continuous Integration (CI) involves frequently merging code changes into a shared repository. Each integration developer’s changes are validated through automated builds and tests to detect integration issues early in the development cycle. This ensures that the main codebase is always in a working state.

Continuous Deployment (CD) automates the process of deploying the validated code changes to different environments, such as development, testing, staging, and production. It ensures that the latest tested code is delivered to users quickly and efficiently.

Getting Started with CI/CD implementation

Boomi is a powerful integration platform. While it offers a wide range of features, certain functionalities might not be readily available in every scenario. The extensive collection of Boomi AtomSphere APIs allows users to automate tasks like creating packaged components, component deployment, monitoring, and more.

Utilizing these APIs, we have developed CI/CD implementations that enhance code quality and facilitate test-driven development. This blog goes into more detail about the reference implementation and provides example code to assist with it.

There are two primary components:

  1. AtomSphere API
  2. GitHub

Let’s examine the mentioned components in depth by considering a use case in which we’ll automate the creation of a package component and its deployment to the desired environment in Boomi Atom Cloud.

AtomSphere API

Please refer to the images below for the mentioned use case. In this use case, the AtomSphere platform API is at the core. To use AtomSphere APIs, the Boomi AtomSphere API connector is necessary. By using this connector, we can create a process responsible for creating a package and deploying it in the desired environment.

When automating this process, the Env ID and packaged component ID must be passed for process deployment, and the component ID for packaged component creation. The flow diagram helps you understand the implementation in more depth.

You can set the process name by copying the process name from the Process Component, and the Env ID can be fetched from Boomi’s Atom Management.

[Diagram: Boomi process overview]

The rest of the dependent parameters are shown in the diagram (Boomi: Process overview).

Using the AtomSphere API connector requires following a sequence. We might need multiple AtomSphere connectors to extract different values. The sequence of Boomi AtomSphere API connectors for our use case is shown in the image below.

In the Flow Diagram, we are using the query action in the AtomSphere API connector. This process requires various parameters, which are mentioned below according to their dependency.


 

[Diagram: AtomSphere API connector sequence]

The functionality of each AtomSphere API connector shown in the image is explained below:

  1. Get Process Details: Enter the process name as a parameter value; the connector’s action is QUERY.
  2. Get Component Details: Enter the process ID from the process details as a parameter value; the connector’s action is GET.
  3. Create a Packaged Component: Enter the component ID from the component details as a parameter value; the connector’s action is CREATE.
  4. Deploy: Enter the packaged ID from the packaged component response and the Env ID as parameter values; the connector’s action is CREATE.

The process should be a web services server process, and it should use a Set Properties shape to set the process name and Env ID dynamically via dynamic process properties.

If you want to enhance the CI/CD process, you can use any source control tool and deployment pipeline. Here I have used GitHub as the source control tool and GitHub Actions as the deployment pipeline.

GitHub

In this implementation, we are going to use GitHub Actions to manage the CI/CD workflow. Organizations generally follow branching strategies involving environment-specific branches (Feature, Dev, QA, UAT, and Prod).

Here, we will be using four branches, which will give us a better understanding. Below are the steps to implement continuous deployment for a Boomi process.

  1. Proceed to GitHub and establish a new repository with the name of the Boomi Process that we are going to deploy.
  2. Inside this repository, we’ll create four branches: “Feature,” “Dev,” “Test,” and “Prod.”
  3. In the Feature branch, the previously mentioned changes will be pushed from our local system.
  4. Then, a pull request (PR) will be raised from the feature to Dev to merge the code.
  5. Similarly, the code will be promoted to higher environments. GitHub Workflow will be triggered each time a PR is merged, and Boomi processes will deploy to the desired environment.
  6. A workflow must be configured in the Dev, Test, and Main (Prod) branches for this. To set up a workflow in GitHub, go to the desired branch >> Actions >> New workflow >> Set up a workflow yourself, then write the workflow code in a .yml file.

[Image: setting up a GitHub Actions workflow]

This sample workflow code (shown above) is for reference; it can be changed according to your requirements.

In this reference code, we use the base URL for API requests of the Boomi process that we created and deployed earlier on Boomi Cloud. For authentication, we need a few passwords and secrets. You can add passwords and secrets directly inline (which is not recommended); instead, use GitHub’s secret store, which is a secure way to add and store credentials. To create secrets, go to Settings >> Secrets and Variables >> Secrets or Variables.
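To make the idea concrete, here is a hedged sketch (TypeScript, Node 18+) of the kind of call a workflow step could make to the deployed Boomi web services listener. The URL path, request fields, and secret names (BOOMI_BASE_URL, BOOMI_USER, BOOMI_TOKEN, BOOMI_ENV_ID) are placeholders for this example, not values from the actual project.

// Credentials are read from environment variables, which a GitHub Actions step
// would populate from the repository's secret store.
const BOOMI_BASE_URL = process.env.BOOMI_BASE_URL ?? "https://<your-atom-or-cloud-host>/ws/simple";
const auth = Buffer.from(`${process.env.BOOMI_USER}:${process.env.BOOMI_TOKEN}`).toString("base64");

async function triggerDeployment(processName: string, envId: string): Promise<void> {
  // The listener process reads these values as dynamic process properties and uses
  // them to create the packaged component and deploy it to the target environment.
  const res = await fetch(`${BOOMI_BASE_URL}/createAndDeployPackage`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Basic ${auth}` },
    body: JSON.stringify({ processName, envId }),
  });
  if (!res.ok) {
    throw new Error(`Boomi listener returned HTTP ${res.status}`);
  }
  console.log(`Deployment of "${processName}" requested for environment ${envId}`);
}

// In a real workflow these values would come from workflow inputs or a branch-to-environment mapping.
triggerDeployment("My Boomi Process", process.env.BOOMI_ENV_ID ?? "<env-id>").catch((err) => {
  console.error(err);
  process.exit(1);
});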

Understand How GitHub Works

[Image: GitHub Actions workflow run]

The job description shows that processes have been successfully deployed to the desired environments. You can also verify this on the Boomi Platform.

[Images: successful workflow runs in GitHub Actions and the deployments verified on the Boomi platform]

That completes the implementation of the automated Boomi deployment process. Here we used GitHub to manage the CI/CD workflow; you can use another tool, such as Jenkins, as well.

Conclusion

While Boomi does not have an inherent functionality to automate the creation of package components and their deployment to the Atom Cloud, developers can overcome this limitation using Boomi’s APIs and scripting capabilities. Organizations can achieve a more efficient and automated integration process by leveraging these APIs, third-party tools, and CI/CD integration.

This, in turn, results in reduced developer time and errors and an overall improvement in the integration workflow, making Boomi an even more valuable tool for businesses seeking seamless connectivity between applications and data.

Perficient + Boomi 

At Perficient, we excel in tactical Boomi implementations by helping you address the full spectrum of challenges with lasting solutions rather than relying on band-aid fixes. The result is intelligent, multifunctional assets that reduce costs over time and equip your organization to proactively prepare for future integration demands. 

Contact us today to learn how we can help you to implement integration solutions with Boomi.

Level Up Your Map with the ArcGIS SDK https://blogs.perficient.com/2024/05/09/level-up-your-map-with-the-arcgis-sdk/ https://blogs.perficient.com/2024/05/09/level-up-your-map-with-the-arcgis-sdk/#respond Thu, 09 May 2024 16:12:08 +0000 https://blogs.perficient.com/?p=356410

In today’s tech-driven world, the ability to visualize data spatially has been vital for various industries. Enter ArcGIS, a Geographic Information System (GIS) developed by ESRI, which is here to help us solve our client’s needs. Let’s chart our way into the world of ArcGIS and how it empowers businesses to harness the full capabilities of today’s mapping software.

Overview

At its core, ArcGIS is a comprehensive mapping solution that enables you to deliver a high-quality experience for your users. It integrates various geographic data sets and allows users to overlay layers, analyze spatial relationships, and extract meaningful insights. The user-friendly features and wide array of capabilities differentiate ArcGIS from competitors.

Standard Features

ArcGIS offers a plethora of map features designed to level up your user’s experience. Basic features such as customizable basemap tiles, real-time display of the user’s location, and intuitive pan and zoom functions all make map navigation a smooth and familiar experience.

However, the true power of ArcGIS lies in its ability to visualize and interact with objects on a map. Custom-styled map markers with the same look and feel as pre-existing symbols enable users to identify and track objects just as they’re used to seeing them. And if you have many objects in close proximity to one another, group them into “clusters” that break apart or regroup at specific zoom levels.
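The post discusses the ArcGIS SDKs at a conceptual level; as a rough illustration of basemaps, custom-styled markers, and attributes, here is a minimal sketch in TypeScript using the ArcGIS Maps SDK for JavaScript (@arcgis/core). The native mobile SDKs expose equivalent concepts, and the coordinates, attributes, and container id below are assumptions for this example.

import Map from "@arcgis/core/Map";
import MapView from "@arcgis/core/views/MapView";
import GraphicsLayer from "@arcgis/core/layers/GraphicsLayer";
import Graphic from "@arcgis/core/Graphic";
import Point from "@arcgis/core/geometry/Point";
import SimpleMarkerSymbol from "@arcgis/core/symbols/SimpleMarkerSymbol";

// A basemap plus a graphics layer to hold custom-styled asset markers.
const assetLayer = new GraphicsLayer();
const map = new Map({ basemap: "streets-vector", layers: [assetLayer] });

// The view wires the map to a DOM element and provides pan/zoom out of the box.
const view = new MapView({
  container: "viewDiv",          // id of a <div> on the page
  map,
  center: [-90.199, 38.627],     // [longitude, latitude]
  zoom: 12,
});

// A custom-styled marker with attributes and a popup, similar to the asset symbols described above.
const asset = new Graphic({
  geometry: new Point({ longitude: -90.199, latitude: 38.627 }),
  symbol: new SimpleMarkerSymbol({ color: "#d83020", size: 10, outline: { color: "white", width: 1 } }),
  attributes: { assetId: "PUMP-042", status: "Needs inspection" },
  popupTemplate: { title: "{assetId}", content: "Status: {status}" },
});
assetLayer.add(asset);

view.when(() => console.log("Map ready"));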

Advanced Features

By providing methods to display object details or toggle visibility based on predefined groups, ArcGIS gives businesses the power to streamline asset management. And that just scratches the surface of the advanced features available!

With ArcGIS, you can draw on the map to indicate an area, or even let your users draw on the map themselves. You can apply a “highlight” styling to visible objects that meet certain criteria. You can search for objects with a multitude of filters, such as object type or any custom attributes (defined and set by your organization’s data management team), or even search for objects within a defined geographical boundary.

The limit of its applications is your imagination.

Offline Maps

But what happens when you’re off the grid? Won’t we lose all of these convenient features? Fear not, as ArcGIS enables continued productivity even in offline environments.

By downloading map sections for offline use, users can still access critical data and functionalities without internet connectivity, a feature especially useful for your on-the-go users.

If storage space is a concern, you can decide which data points for objects are downloaded. So if your users just need to see the symbols on the map, you can omit the attributes data to cut down on payload sizes.

In conclusion, ArcGIS stands as one of the leaders in mapping technology, empowering businesses to unlock new opportunities. From basic map features to advanced asset management capabilities, ArcGIS is more than just a mapping solution—it’s a gateway to spatial intelligence. So, embrace the power of ArcGIS and chart your path to success in the digital age!

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

React Native – A Web Developer’s Perspective on Pivoting to Mobile https://blogs.perficient.com/2024/04/29/from-react-to-react-native-a-high-level-view/ https://blogs.perficient.com/2024/04/29/from-react-to-react-native-a-high-level-view/#respond Mon, 29 Apr 2024 20:35:43 +0000 https://blogs.perficient.com/?p=362248

Making the Switch

I’ve been working with React Web for the last 6 years of my dev career. I’m most familiar and comfortable in this space and enjoy working with React. However, I was presented with an opportunity to move into mobile development towards the end of 2023. Having zero professional mobile development experience on past projects, I knew I needed to ramp up fast if I was going to be able to seize this opportunity. I was excited to take on the challenge!

I have plenty to learn, but I wanted to share some of the interesting things that I have learned along the way. I also wanted to share this with a perspective since I’m already comfortable with React. Just how much is there to learn in order to be a successful contributor on a React Native project?

Existing React Knowledge I Leveraged

Components! It’s still React.

You have functional components that return stuff. These components have access to the same hooks you are familiar with (useState, useEffect, etc.), which means you have the same state management and rendering model. The “stuff” I mentioned above is JSX, a familiar syntax. You can also leverage Redux for global application state. All of the things I mentioned have very thorough and reliable documentation as well. Bringing all of this to the table when you pivot to React Native gets you over 50% of the way there.

The New Bits

There is no DOM. But that’s OK! Because you were already leveraging JSX instead of HTML anyways. The JSX you use for React Native is almost identical, except with no HTML elements.

Example code snippet (Source: https://reactnative.dev/docs/tutorial)

import React, {useState} from 'react';
import {View, Text, Button, StyleSheet} from 'react-native';

const App = () => {
  const [count, setCount] = useState(0);

  return (
    <View style={styles.container}>
      <Text>You clicked {count} times</Text>
      <Button
        onPress={() => setCount(count + 1)}
        title="Click me!"
      />
    </View>
  );
};

// React Native Styles
const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
});

There are only 2 things in the example above that differ from web:

  1. Unique React Native Core Components
  2. Styles are created with a different syntax

Additionally, there is no baked-in browser tooling (no console or network tab), so debugging your app is a bit more complex by default. Fortunately, there are tools out there to bridge the gap. Flipper will help with viewing your console logs, similar to what Chrome would do on web. For inspecting UI elements, you can hit a hotkey in your simulator (command + control + z) and open the helpful Show Element Inspector option from the menu that appears.

Additional Considerations

  • There are components referred to as Core Components. Surprisingly, there aren’t a ton, and you can accomplish a lot by learning only a handful. These will be your primary components, used in place of the HTML-like JSX from web.
  • There is no CSS. You set up styles via a styling API and pass them into individual JSX elements through a style prop, which looks similar to web. Your styles do not cascade like they would with CSS by default, but there are ways around this too (see the sketch after this list).
  • You have access to the phone’s physical hardware, such as the camera. You can leverage location services as well as share content via the native OS share prompts.
  • The biggest shock in switching to React Native, and mobile in general: application deployment is more complicated. Instead of deploying your built code to a web server, you now must play ball with Apple and Google for them to host your app within their stores. That means deploying twice for mobile, once to App Store Connect and again to Google Play.
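As a small, hedged sketch of one common workaround for the lack of cascading, the example below composes shared style objects using the style-array syntax; the component and style names are made up for illustration.

import React from 'react';
import {Text, View, StyleSheet} from 'react-native';

// Shared "base" styles reused across components, approximating what a
// cascading stylesheet would give you on web.
const base = StyleSheet.create({
  text: {fontSize: 16, color: '#333'},
  heading: {fontSize: 24, fontWeight: 'bold'},
});

const Card = ({title, body}: {title: string; body: string}) => (
  <View style={styles.card}>
    {/* Later entries in the style array override earlier ones. */}
    <Text style={[base.text, base.heading]}>{title}</Text>
    <Text style={base.text}>{body}</Text>
  </View>
);

const styles = StyleSheet.create({
  card: {padding: 16, borderRadius: 8, backgroundColor: 'white'},
});

export default Card;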

Final Thoughts

I covered the details I encountered on my journey from web to mobile. It’s important to spend time learning what the React Native API offers for you in place of the DOM elements you are already familiar with. I hope this helps anyone planning to get into mobile development.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!
