Sales Cloud to Data Cloud with No Code!

Salesforce has been giving us a ‘No Code’ way to have Data Cloud notify Sales Cloud of changes through Data Actions and Flows.   But did you know you can go the other direction too?

The Data Cloud Ingestion API allows us to set up a ‘No Code’ way of sending changes in Sales Cloud to Data Cloud.

Why would you want to do this with the Ingestion API?

  1. You are right that we could simply set up a ‘normal’ Salesforce CRM Data Stream to pull data from Sales Cloud into Data Cloud.  That is also a ‘No Code’ way to integrate the two.  But maybe you want to do some complex filtering or logic before sending the data on to Data Cloud, and that is where a Flow can really help.
  2. CRM Data Streams only run on a schedule, at best every 10 minutes.  With the Ingestion API we can send data to Data Cloud immediately; we just need to wait until the Ingestion API processes that specific request.  The current wait time is about 3 minutes, though I have seen it run faster at times.  It is not ‘real-time’, so do not use this for ‘real-time’ use cases.  But it is faster than CRM Data Streams for incremental and smaller syncs that need better control.
  3. You could also ingest data into Data Cloud easily through an Amazon S3 bucket.  But again, here we have data in Sales Cloud that we want to get to Data Cloud with no code.
  4. We can do very cool integrations by leveraging the Ingestion API outside of Salesforce like in this video, but we want a way to use Flows (No Code!) to send data to Data Cloud.

Use Case:

You have Sales Cloud, Data Cloud and Marketing Cloud Engagement.  As a Marketing Campaign Manager you want to send an email through Marketing Cloud Engagement when a Lead fills out a certain form.

You only want to send the email if the Lead is from a certain state like ‘Minnesota’ and that Email address has ordered a certain product in the past.  The historical product data lives in Data Cloud only.  This email could come out a few minutes later and does not need to be real-time.

Solution A:

If you need to do this in near real-time, I would suggest not using the Ingestion API.  Instead, query the Data Cloud product data in a Flow and then update your Lead or another record in a way that triggers a ‘Journey Builder Salesforce Data Event’ in Marketing Cloud Engagement.

Solution B:

But our requirements above do not call for real-time, so let’s solve this with the Ingestion API.  Since we are sending data to Data Cloud, we gain more power on the Data Cloud side: a Salesforce Data Action can reference additional Data Cloud data, so the Flow does not have to use ‘Get Records’ for every data need.

We can build an Ingestion API Data Stream that we can use in a Salesforce Flow.  The flow can check to make sure that the Lead is from a certain state, like ‘Minnesota’.  The Ingestion API can be triggered from within the flow.  Once the data lands in the DMO in Data Cloud, we can then use a ‘Data Action’ to listen for that data change, check whether that Lead has purchased a certain product before, and then use a ‘Data Action Target’ to push to a Journey in Marketing Cloud Engagement.  All of that should occur within a couple of minutes.

Sales Cloud to Data Cloud with No Code!  Let’s do this!

Here is the base Salesforce post sharing that this is possible through Flows, but let’s go deeper for you!

The following are those deeper steps of getting the data to Data Cloud from Sales Cloud.  In my screenshots you will see data moving from a VIN (Vehicle Identification Number) custom object to a VIN DLO/DMO in Data Cloud, but the same process could be used for our ‘Lead’ use case above.

  1. Create a YAML file that we will use to define the fields in the Data Lake Object (DLO).  I put an example YAML structure at the bottom of this post.
  2. Go to Setup, Data Cloud, External Integrations, Ingestion API.  Click on ‘New’.

    1. Give your new Ingestion API Source a name and click on Save.
    2. In the Schema section, click on the ‘Upload Files’ link to upload your YAML file.
    3. You will see a screen to preview your Schema.  Click on Save.
    4. After that is complete, you will see your new Schema Object.
    5. Note that at this point there is no Data Lake Object created yet.
  3. Create a new ‘Ingestion API’ Data Stream.  Go to the ‘Data Streams’ tab and click on ‘New’.  Click on the ‘Ingestion API’ box and click on ‘Next’.

    1. Select the Ingestion API that was created in Step 2 above.  Select the Schema object that is associated with it.  Click Next.
    2. Configure your new Data Lake Object by setting the Category, Primary Key, and Record Modified fields.
    3. Set any filters you want with the ‘Set Filters’ link and click on ‘Deploy’ to create your new Data Stream and the associated Data Lake Object.
    4. If you also want to create a Data Model Object (DMO), you can do so and then use the ‘Review’ button in the ‘Data Mapping’ section on the Data Stream detail page to complete the mapping.  You do need a DMO to use the ‘Data Action’ feature in Data Cloud.
  4. Now we are ready to use this new Ingestion API Source in our Flow!  Yeah!
  5. Create a new ‘Start from Scratch’, ‘Record-Triggered Flow’ on the Standard or Custom object you want to use to send data to Data Cloud.
  6. Configure an Asynchronous path.  We cannot connect to this ‘Ingestion API’ from the ‘Run Immediately’ part of the Flow because this Action will be making an API call to Data Cloud.  This is similar to how we have to use a ‘Future’ call with an Apex Trigger.
  7. Once you have configured your base Flow, add the ‘Action’ to the ‘Run Asynchronously’ part of the Flow.  Select the ‘Send to Data Cloud’ Action and then map your fields to the Ingestion API inputs that are available for the ‘Ingestion API’ Data Stream you created.
  8. Save and Activate your Flow.
  9. To test, update your record in a way that will trigger your Flow to run.
  10. Go into Data Cloud and see your data has made it there by using the ‘Data Explorer’ tab.
  11. The standard Salesforce Debug Logs will show the details of your Flow steps if you need to troubleshoot something.

Congrats!

You have sent data from Sales Cloud to Data Cloud with ‘No Code’ using the Ingestion API!

Setting up the Data Action and connecting to Marketing Cloud Journey Builder is documented here to round out the use case.

Here is the base Ingestion API Documentation.

At Perficient we have experts in Sales Cloud, Data Cloud and Marketing Cloud Engagement.  Please reach out and let’s work together to reach your business goals on these platforms and others.

Example YAML Structure:


openapi: 3.0.3
components:
  schemas:
    VIN_DC:
      type: object
      properties:
        VIN_Number:
          type: string
        Description:
          type: string
        Make:
          type: string
        Model:
          type: string
        Year:
          type: number
        created:
          type: string
          format: date-time
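For reference, when the ‘Send to Data Cloud’ Flow action fires, it is effectively posting a record that matches this schema to the Ingestion API. Below is a rough sketch of such a request; the host, source API name, and all field values are hypothetical and will differ in your org (the exact endpoint comes from your own Ingestion API setup):

POST https://<your-tenant-endpoint>/api/v1/ingest/sources/<your-source-api-name>/VIN_DC

{
  "data": [
    {
      "VIN_Number": "1HGCM82633A004352",
      "Description": "Example vehicle record",
      "Make": "Honda",
      "Model": "Accord",
      "Year": 2023,
      "created": "2025-01-31T18:15:25Z"
    }
  ]
}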

Understanding In-Out and Input Parameters in IICS

In Informatica Intelligent Cloud Services (IICS), In-Out and Input Parameters provide flexibility in managing dynamic values for your mappings. This allows you to avoid hard-coding values directly into the mapping and instead configure them externally through parameter files, ensuring ease of maintenance, especially in production environments. Below, we’ll walk through the concepts and how to use these parameters effectively in your IICS mappings.

In-Out Parameters

  1. Similar to Mapping Variables in Informatica PowerCenter: In-Out parameters in IICS function similarly to mapping parameters or variables in Informatica PowerCenter. These parameters allow you to define values that can be used across the entire mapping and changed externally without altering the mapping itself.
  2. Frequently Updating Values: In scenarios where a field value needs to be updated multiple times, such as a Product Discount that changes yearly, quarterly, or daily, In-Out parameters can save time and reduce errors. Instead of hard-coding the discount value in the mapping, you can define an In-Out parameter and store the value in a parameter file.
  3. For Example – Product Discount: If the Product Discount changes yearly, quarterly, or daily, you can create an In-Out parameter in your IICS mapping to store the discount value. Instead of updating the mapping each time the discount value changes, you only need to update the value in the parameter file.
  4. Changing Parameter Values: Whenever the discount value needs to be updated, simply change it in the parameter file. This eliminates the need to modify and redeploy the mapping itself, saving time and effort.
  5. Creating an In-Out Parameter: You can create an In-Out parameter in the mapping by specifying the parameter name and its value in the parameter file.
  6. Configuring the Parameter File Path: In the Mapping Configuration Task (MCT), you can download the parameter file template. Provide the path and filename of the parameter file, and you can see the In-Out parameter definition in the MCT.
  7. Download the Parameter File Template: You can download the parameter file template directly from the MCT by clicking on “Download Parameter File Template.” After downloading, place the file in the specified directory.
  8. Defining Parameter Values: In the parameter file, define the values for your parameters. For example, if you’re setting a Discount value, your file could look like this:

    #USE_SECTIONS
    [INFORMATICA].[INOUT_PARAM].[m_test]
    $$Product_Discount=10
    [Global]
  9. Creating Multiple Parameters: You can create as many parameters as needed, using standard data types, in the In-Out Parameters section. Common real-world parameters might include values like Product Category, Model, etc.; a sketch of a parameter file with several such parameters follows below.
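As an illustrative sketch only, a parameter file holding several In-Out parameters might look like the following. The section header reuses the example above, while the parameter names and values are hypothetical placeholders for your own project and mapping:

#USE_SECTIONS
[INFORMATICA].[INOUT_PARAM].[m_test]
$$Product_Discount=10
$$Product_Category=Electronics
$$Model=X100
[Global]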

Input Parameters:

Input parameters are primarily used for parameterizing Source and Target Connections or objects. Here’s how to use input parameters effectively:

  1. Create the Mapping First: Start by designing your mapping logic, completing field mappings, and validating the mapping. Once the mapping is ready, configure the input parameters.
  2. Parameterizing Source and Target Connections: When parameterizing connections, create parameters for the source and target connections in the mapping. This ensures flexibility, especially when you need to change connection details without modifying the mapping itself. To create the Input parameter, go to the Parameter panel, click on Input Parameter, and create the Source and Target Parameter connections. Select the type as Connection, and choose the appropriate connection type (e.g., Oracle, SQL Server, Salesforce) from the drop-down menu.
  3. Overriding Parameters at Runtime: If you select the “Allow Parameters to be Overridden at Runtime” option, IICS will use the values defined in the parameter file, overriding any hard-coded values in the mapping. This ensures that the runtime environment is always in sync with the latest configuration.
  4. Configuring Source and Target Connection Parameters: Specify the values for your source and target connection parameters in the parameter file, which will be used during runtime to establish connections.
    For example:
    #USE_SECTIONS
    [INFORMATICA].[INOUT_PARAM].[m_test]
    $$Product_Discount=10
    $$SRC_Connection=
    $$TGT_Connection=
    [Global]

Conclusion

In-Out and Input Parameters in IICS offer a powerful way to create flexible, reusable, and easily configurable mappings. By parameterizing values like field values, Source and Target Connections, or Objects, you can maintain and update your mappings efficiently.

What if Your Digital Transformation Was as Easy as Changing Your Mind? An Interview With Brian Solis

In this episode of the “What If? So What?” podcast, Jim Hertzfeld talks with Brian Solis, a renowned futurist, author, and the head of global innovation at ServiceNow. Brian’s work has been instrumental in shaping digital strategies and customer experience strategies for many organizations. He shares insights from his latest book, “MindShift,” and discusses the evolving landscape of leadership and innovation.

Brian’s journey began in Silicon Valley in the 90s, and since then he has been at the forefront of understanding technology trends and their impact on human behavior. In his new book, Brian emphasizes the importance of self-awareness and cognitive bias, and explains how a beginner’s mindset can drive business transformation.

Brian explains how storytelling can inspire creativity and imagination and help leaders envision and communicate a better future for their organizations. He also highlights the impact of Generative AI on business transformation and the need for leaders to embrace new technologies to stay ahead of the curve.

Listen now to the “What If? So What?” podcast to learn more about the evolving role of leadership and the impact of Generative AI on the future of work.

Listen now on your favorite podcast platform or visit our website.

 

Subscribe Where You Listen

Apple | Spotify | Amazon | Overcast

Meet our Guest


Brian Solis, Head of Global Innovation, ServiceNow

Brian Solis is the Head of Global Innovation at ServiceNow, a nine-time best-selling author, international keynote speaker, and digital anthropologist. Recognized by Forbes as “one of the more creative and brilliant business minds of our time” and by ZDNet as “one of the 21st-century business world’s leading thinkers,” Brian is a thought leader on innovation and transformation.

In his latest book, “Mindshift: Transform Leadership, Drive Innovation, and Reshape the Future,” Brian shares empowering insights from his career and inspires leaders to embrace change and drive progress. His message: the time to change the world is now, and it starts with you.

Connect with Brian

 

Meet the Host

Jim Hertzfeld

Jim Hertzfeld is Area Vice President, Strategy for Perficient.

For over two decades, he has worked with clients to convert market insights into real-world digital products and customer experiences that actually grow their business. More than just a strategist, Jim is a pragmatic rebel known for challenging the conventional and turning grand visions into actionable steps. His candid demeanor, sprinkled with a dose of cynical optimism, shapes a narrative that challenges and inspires listeners.

Connect with Jim:

LinkedIn | Perficient

Creating a Mega Menu using Acquia Site Studio

Mega menus are expandable menus that present a multitude of options within a single dropdown interface.

Mega menu designs can vary in complexity. They are particularly beneficial for managing a considerable amount of content or providing a quick overview of a sub-category of pages.


Steps to create:

Required:

Acquia Site Studio: (https://www.acquia.com/drupal/site-studio) 

Menu Item Extras module: (https://www.drupal.org/project/menu_item_extras)

Adding a field to your menu:

A field can be added to the menu through the “Structure -> Menus” section. To initiate this process, you must edit your existing menu or create a new one. Once in the menu editor, the next step is to add a field. This field can be of any type based on your requirements.  

Since this is a Site Studio example, we’re going to choose a “Site Studio – Layout Canvas” field. This way we’re able to drop any component into the menu.


Adding content to your new menu field:

Depending on the type of field you are using, you may need to edit the “Manage Display” options for that specific field. 

In our example, we associate the mega menu “content” with the first-level menu item.  

This menu item functions as the designated location for displaying your content. So, that’s the exact menu item where you should add your content.



Rendering your menu item field on a Site Studio menu template:  

To learn more about creating a multi-level menu, refer to Acquia’s documentation here. 

Begin by building your menu structure. Once you have the menu structure in place, identify the location where you want your layout canvas field to be displayed within the menu. In our example, we are placing the second-level menu AND our “layout canvas” field inside the first level of the menu. 

The wrapper will act as a dropdown that will be toggled.


You can utilize various elements to insert your “content” from your Drupal menu. For this example, we will use the Inline element.


Adding your token to the inline element:

Next, we’ll need to locate the token associated with the field we added to our menu item. 

Within your element, use the token browser to locate the token under the “Custom menu link” dropdown. In our example, we called it “Mega Menu Canvas”. 


Save the menu template and refresh your website. Hover over the parent menu item to which you added the mega menu! 

Drupal CMS is here, what it means for you and your organization.

In a previous blog post I discussed various content authoring approaches within Drupal and the importance of selecting the right one for your specific situation. Towards the end I mentioned a new iteration of Drupal (Starshot). It is now here: Starshot, now named Drupal CMS, was released on Jan 15th. As it becomes part of the Drupal ecosystem, here are 5 key areas to consider when tackling a new project or build.

 

1. What is Drupal CMS?

Drupal CMS is tooling built on top of Drupal 11 core. It takes some of the most commonly used configurations, recipes, modules, and more, puts them into an installable package, and offers a great starting point for websites and portals of small to moderate complexity.

 

2. What are the advantages of Drupal CMS?

As mentioned above, Drupal CMS is a pre-bundled installation of Drupal 11 Core, Contributed modules, Recipes and configuration that provides a rapid starting point for marketing teams.

The advantages include quicker time to market and easier configuration of tooling for cookie compliance, content workflows, permissions, multilingual support, and more. Drupal CMS as a product will enable marketing teams to build and maintain a web presence with limited technical staff requirements. You may be able to take advantage of an implementation partner like Perficient and have a much smaller learning curve for web editors and managers as opposed to a completely custom build on top of Drupal core.

The ability to spin up a CMS with limited customization and overhead is a big departure from traditional Drupal development, which required extensive experience and technical support. This will be a huge time and budget saver for certain situations and organizations.

Another advantage of Drupal CMS is that it is built upon the standard Drupal 11 core. This allows a site to evolve, grow, and take advantage of the more complex technical underpinnings as needed. If you start with Drupal CMS, you are not handcuffed to it, and you have the entire Drupal open source ecosystem available to you as you scale.

 

3. What are the disadvantages of Drupal CMS?

Of course, no situation is a win-win-win, so what are the tradeoffs of Drupal CMS?

The major disadvantages of Drupal CMS would come to light in heavily customized or complex systems. All of the preconfigured tooling that makes a simple to moderately complex site easier on Drupal CMS can cause MORE complexity on larger or completely custom builds, as a technical team may find itself spending unnecessary time undoing the parts of Drupal CMS it does not need.

Another disadvantage of Drupal CMS (for the time being) is that it is built on top of Drupal 11 core. While Drupal 11 is a secure and final release, community support for the newest core version historically lags. It is worth evaluating Drupal 11 support for any contributed modules you rely on before making the decision on Drupal CMS.

 

4. Drupal 10, Drupal 11, Drupal CMS, which is the right choice?

With all of the advantages and disadvantages of the various Drupal core and CMS versions, choosing a direction can be a big decision. When making that decision for your organization, you should evaluate 3 major areas. First, look at the scale of your technical team and implementation budget. A smaller team or budget would suggest evaluating Drupal CMS as a solution.

Secondly, evaluate your technical requirements. Are you building a simple website with standard content needs and workflows? Drupal CMS might be perfect. Are you building a complex B2B commerce site with extensive content, workflow and technical customizations? Drupal Core might be the right choice.

Finally, evaluate your technical requirements for any needs that may not be fully supported by Drupal 11 just yet. If you find an area that isn’t supported, it is time to evaluate the timeline for support, the timeline for your project, and the criticality of the functional gaps. This is where a well-versed and community-connected implementation partner such as Perficient can provide crucial insights to ensure the proper selection of your underlying tooling.

 

5. I am already on Drupal 7/8/9/10/11, do I need to move to Drupal CMS?

In my opinion this is highly dependent on where you currently are. If you are on Drupal 7/8, you are many versions behind and lacking support, and any upgrade is essentially a rebuild. In this case, Drupal CMS should be considered just like a new build, using the points above. If you are on Drupal 9/10/11, an upgrade to Drupal 10/11 respectively might be your best bet. Drupal CMS can be layered on top of this upgrade if you feel the features fit the direction of your website, but it is important to consider all the above pros and cons when making this decision. Again, a trusted implementation partner such as Perficient can help guide and inform you and your team as you tackle these considerations!

Newman Tool and Performance Testing in Postman

Postman is an application programming interface (API) testing tool for designing, testing, and changing existing APIs. Almost every capability a developer may need to test an API is included in Postman.

Postman simplifies the testing process for both REST APIs and SOAP web services with its robust features and intuitive interface. Whether you’re developing a new API or testing an existing one, Postman provides the tools you need to ensure your services are functioning as intended.

  • Using Postman to test the APIs offers a wide range of benefits that eventually help in the overall testing of the application. Postman’s interface is very user-friendly, which allows users to easily create and manage requests without extensive coding knowledge, making it accessible to both developers and testers.
  • Postman supports multiple protocols such as HTTP, SOAP, GraphQL, and WebSocket APIs, which ensures a versatile testing set-up for a wide range of services.
  • To automate the process of validating the API Responses under various scenarios, users can write tests in JavaScript to ensure that the API behavior is as expected.
  • Postman offers an environment management feature that enables the user to set up different environments with environment-specific variables, which makes switching between development, staging, and production settings possible without changing requests manually.
  • Postman provides options for creating collection and organization, which makes it easier to manage requests, group tests, and maintain documentation.
  • Postman supports team collaboration, which allows multiple users to work on the same collections, share requests, and provide feedback in real-time.

Newman In Postman

Newman is a command-line runner that is used to execute Postman collections and check their responses. In addition to the Collection Runner in the Postman app, Newman can be used to initiate the requests in a Postman Collection from the command line.

Newman works well with GitHub and the npm registry. Additionally, Jenkins and other continuous integration technologies can be linked to it. If every request is fulfilled correctly, Newman exits with code 0.

In the case of errors, exit code 1 is generated. Newman is distributed through the npm package manager and is built on the Node.js platform.

How to install Newman

Step 1: Ensure that your system has Node.js downloaded and installed. If not, then download and install Node.js.

Step 2: Run the following command in your CLI: npm install -g newman

How to use Newman: 

Step 1: Export the Postman collection and save it to your local device.

Step 2: Click on the eye icon in the top right corner of the Postman application.

Step 3: The “MANAGE ENVIRONMENTS” window will open. Provide a URL variable in the VARIABLE field along with its INITIAL VALUE. Click on the Download as JSON button. Then, choose a location and save.

Step 4: Export the Environment to the same path where the Collection is available.

Step 5: In the command line, move from the current directory to the directory where the Collection and Environment have been saved.

Step 6: Run the command: newman run <“name of file”>. Please note that the name of the file should be in quotes. See the example below.
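For instance, assuming you exported a collection as “MyCollection.postman_collection.json” and an environment as “MyEnvironment.postman_environment.json” (hypothetical file names), the command would be:

newman run "MyCollection.postman_collection.json" -e "MyEnvironment.postman_environment.json"

The -e option points Newman at the exported environment file so that environment variables such as the URL resolve during the run.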

Helpful CLI Options to Use with Newman

-h, --help: Gives information about the options available.
-v, --version: To check the version.
-e, --environment [file URL]: Specify the file path or URL of environment variables.
-g, --globals [file URL]: Specify the file path or URL of global variables.
-d, --iteration-data [file]: Specify the file path or URL of a data file (JSON or CSV) to use for iteration data.
-n, --iteration-count [number]: Specify the number of times for the collection to run. Use with the iteration data file.
--folder [folder name]: Specify a folder to run requests from. You can specify more than one folder by using this option multiple times, specifying one folder for each use of the option.
--working-dir [path]: Set the path of the working directory to use while reading files with relative paths. Defaults to the current directory.
--no-insecure-file-read: Prevents reading of files located outside of the working directory.
--export-environment [path]: The path to the file where Newman will output the final environment variables file before completing a run.
--export-globals [path]: The path to the file where Newman will output the final global variables file before completing a run.
--export-collection [path]: The path to the file where Newman will output the final collection file before completing a run.
--postman-api-key [api-key]: The Postman API Key used to load resources using the Postman API.
--delay-request [number]: Specify a delay (in milliseconds) between requests.
--timeout [number]: Specify the time (in milliseconds) to wait for the entire collection run to complete execution.
--timeout-request [number]: Specify the time (in milliseconds) to wait for requests to return a response.
--timeout-script [number]: Specify the time (in milliseconds) to wait for scripts to complete execution.
--ssl-client-cert [path]: The path to the public client certificate file. Use this option to make authenticated requests.
-k, --insecure: Turn off SSL verification checks and allow self-signed SSL certificates.
--ssl-extra-ca-certs [path]: Specify additionally trusted CA certificates (PEM).
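Several of these options can be combined in one run. The collection, environment, and data file names below are hypothetical placeholders for your own exported files:

newman run "MyCollection.postman_collection.json" -e "MyEnvironment.postman_environment.json" -d "testdata.csv" -n 5 --delay-request 200

This runs the collection for five iterations, drawing values from testdata.csv as iteration data, with a 200-millisecond delay between requests.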


Performance Testing in Postman

API performance testing involves mimicking actual traffic and watching how your API behaves. It is a procedure that evaluates how well the API performs regarding availability, throughput, and response time under the simulated load.

Testing the performance of APIs can help us in:

  • Test that the API can manage the anticipated load and observe how it reacts to load variations.
  • To ensure a better user experience, optimize and enhance the API’s performance.
  • Performance testing also aids in identifying the system’s scalability and fixing bottlenecks, delays, and failures.

How to Use Postman for API Performance Testing

Step 1: Select the Postman Collection for Performance testing.

Step 2: Click on the 3 dots beside the Collection.

Step 3:  Click on the “Run Collection” option.

Step 4:  Click on the “Performance” option

Step 5: Set up the Performance test (Load Profile, Virtual User, Test Duration).

Step 6: Click on the Run button.

After completion of the run, we can also download a report in .pdf format, which summarizes how our collection ran.

A strong and adaptable method for ensuring your APIs fulfill functionality and performance requirements is to use Newman with Postman alongside performance testing. You may automate your tests and provide comprehensive reports that offer insightful information about the functionality of your API by utilizing Newman’s command-line features.

This combination facilitates faster detection and resolution of performance issues by streamlining the testing process and improving team collaboration. Using Newman with Postman will enhance your testing procedures and raise the general quality of your applications as you continue improving your API testing techniques.

Use these resources to develop dependable, strong APIs that can handle the demands of practical use, ensuring a flawless user experience.

Unlock the Future of Integration with IBM ACE

Have you ever wondered about integration in API development or how to become familiar with the concept?

In this blog, we will discuss one of the integration technologies that is very easy and fun to learn, IBM ACE.

What is IBM ACE?

IBM ACE stands for IBM App Connect Enterprise. It is an integration platform that allows businesses to connect various applications, systems, and services, enabling smooth data flow and communication across diverse environments. IBM ACE supports the creation of Integrations using different patterns, helping organizations streamline their processes and improve overall efficiency in handling data and business workflows.

Through a collection of connectors to various data sources, including packaged applications, files, mobile devices, messaging systems, and databases, IBM ACE delivers the capabilities needed to design integration processes that support different integration requirements.

One advantage of adopting IBM ACE is that it allows current applications to be configured for Web Services without costly legacy application rewrites. By linking any application or service to numerous protocols, including SOAP, HTTP, and JMS, IBM ACE minimizes the point-to-point pressure on development resources.

Modern secure authentication technologies, including LDAP, X-AUTH, OAuth, and two-way SSL, are supported through MQ, HTTP, and SOAP nodes, including the ability to perform activities on behalf of masquerading or delegated users.

How to Get Started

Refer to Getting Started with IBM ACE: https://www.ibm.com/docs/en/app-connect/12.0?topic=enterprise-get-started-app-connect

For installation on Windows, follow the document link below. Change the IBM App Connect version to 12.0 and follow along: https://www.ibm.com/docs/en/app-connect/11.0.0?topic=software-installing-windows

IBM ACE Toolkit Interface


This is what the IBM ACE toolkit interface looks like. You can see all the applications/APIs and libraries you created during application development. In the Palette, you can see all the nodes and connectors needed for application development.

Learn more about nodes and connectors: https://www.ibm.com/docs/en/app-connect/12.0?topic=development-built-in-nodes

IBM ACE provides flexibility in creating Integration Servers and Integration Nodes where you can deploy and test your developed code and applications, which you can do with the help of mqsi commands.

How to Create a New Application

  • To create a new application, click on File -> New -> Application.


  • Give the Application a name and click finish.


 

  • To add a message flow, click on New under Application, then Message Flow.


  • Give the message flow a name and click finish.


  • Once your flow is created, double-click on its name. The message flow will open, and you can implement the process.
  • Drag the required nodes and connectors to the canvas for your development.

How to Create an Integration Node and Integration Server

  • Open your command window for your current installation.


  • To create an Integration server, run the following command in the command shell and specify the parameter for the integration server you want to create: mqsicreateexecutiongroup IBNODE -e IServer_2
  • To create an Integration node, run the following command in the command shell and specify the parameter for the integration node you want to create.
    • For example, if you want to create an Integration node with queue manager ACEMQ, use the following command: mqsicreatebroker MYNODE -i wbrkuid -a wbrkpw -q ACEMQ. After creating the node, you can start and verify it as sketched below.
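As a quick sketch of what typically comes next (the node name follows the example above), you can start the integration node and then list its integration servers and deployed resources with the standard mqsi commands:

mqsistart MYNODE
mqsilist MYNODE

mqsistart brings the integration node online, and mqsilist reports the integration servers and deployed applications associated with it.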

 How to Deploy the Application

  • Right-click on the application, then click on Deploy.


  • Then click on the Integration node and Finish.


Advantages of IBM ACE

  • ACE offers powerful integration possibilities, allowing for smooth communication between different applications, systems, and data sources.
  • It supports a variety of message patterns and data formats, allowing it to handle a wide range of integration scenarios.
  • It meets industry standards, ensuring compatibility and interoperability with many technologies and protocols.
  • ACE has complete administration and monitoring features, allowing administrators to track integration processes’ performance and health.
  • The platform encourages the production of reusable integration components, which decreases development time and effort for comparable integration tasks.
  • ACE offers comprehensive security measures that secure data during transmission and storage while adhering to enterprise-level security standards.
  • ACE offers a user-friendly development environment and tools to design, test, and deploy integration solutions effectively.

Conclusion

In this introductory blog, we have explored IBM ACE and how to create a basic application to learn about this integration technology.

Here at Perficient, we develop complex, scalable, robust, and cost-effective solutions using IBM ACE. This empowers our clients to improve efficiency and reduce manual work, ensuring seamless communication and data flow across their organization.

Contact us today to explore more options for elevating your business.

How to Upgrade MuleSoft APIs to Java 17: A Comprehensive Guide

The Evolution of Java and Its Significance in Enterprise Applications

Java has been the go-to language for enterprise software development for decades, offering a solid and reliable platform for building scalable applications. Over the years, it has evolved with each new version.

Security Enhancements of Java 17

Long-Term Support

Java 17, being a Long-Term Support (LTS) release, is a strategic choice for enterprises using MuleSoft. The LTS status ensures that Java 17 will receive extended support, including critical security updates and patches, over the years.

This extended support is crucial for maintaining the security and stability of MuleSoft applications, often at the core of enterprise integrations and digital transformations.

By upgrading to Java 17, MuleSoft developers can ensure that their APIs and integrations are protected against newly discovered vulnerabilities, reducing the risk of security breaches that could compromise sensitive data.

The Importance of Long-Term Support

  1. Stay Secure: Java 17 is an LTS release with long-term security updates and patches. Upgrading ensures your MuleSoft applications are protected against the latest vulnerabilities, keeping your data safe.
  2. Better Performance: With Java 17, you get a more optimized runtime to make your MuleSoft application run faster. This means quicker response times and a smoother experience for you.
  3. Industry Standards Compliance: Staying on an LTS version like Java 17 helps meet industry standards and compliance requirements. It shows that your applications are built on a stable, well-supported platform.

Getting Started with Java 17 and Anypoint Studio

Before you start upgrading your MuleSoft APIs to Java 17, it’s important to make sure your development environment is set up properly. Here are the key prerequisites to help you transition smoothly.

Install and Set Up Java 17

  • Download Java 17: Get Java 17 from the Oracle Java SE or Eclipse Adoptium Downloads page, or use OpenJDK for your OS.
  • Install Java 17: Run the installer and set JAVA_HOME to the Java 17 installation directory.
  • Verify the Installation: Confirm Java 17 is installed by running java -version in the terminal or command prompt, as sketched below.
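On Windows, for example, setting JAVA_HOME and verifying the installation from a command prompt might look like the following. The installation path is a hypothetical example; use the directory your Java 17 installer actually created:

setx JAVA_HOME "C:\Program Files\Eclipse Adoptium\jdk-17.0.11-9-hotspot"
java -version

The java -version output should report a 17.x runtime. Note that setx only takes effect in new command prompt sessions, so reopen the terminal before verifying.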

Download and Install Anypoint Studio 7.1x Version

Upgrading to Java 17 and Anypoint Studio

As we begin upgrading our MuleSoft application to Java 17, we have undertaken several initial setup steps in our local and development environments. These steps are outlined below:

Step 1

  • Update Anypoint Studio to the latest version, 7.17.0.
  • Please Note: If Anypoint Studio isn’t working after the update, make sure to follow Step 2 and Step 6 for troubleshooting.

Step 2

  • Download and install the Java 17 JDK on the local system.


Step 3

  • In Anypoint Studio, we must download the latest Mule runtime, 4.6.x. For that, click on ‘Install New Software…’ under the Help section.


  • Click on the Mule runtimes and select and install the 4.6.x version.


Step 6

  • Now, close Anypoint Studio.
  • Navigate to the Studio configuration files in Anypoint Studio and open the AnypointStudio.ini file.
  • Update the path for Java 17 in the AnypointStudio.ini file; a sketch of the relevant entries follows this list.
  • Restart the Anypoint studio.
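The exact contents of AnypointStudio.ini vary by installation, but because Anypoint Studio is Eclipse-based, pointing it at Java 17 is typically done with a -vm entry placed before -vmargs, with the path on its own line. The path below is a hypothetical example and should be replaced with your actual JDK 17 location:

-vm
C:\Program Files\Eclipse Adoptium\jdk-17.0.11-9-hotspot\bin\javaw.exe
-vmargs

Any JVM arguments already present in the file should remain below the -vmargs line.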

Step 7

  • In Anypoint Studio, navigate to the Run section at the top and select Run Configurations.
  • Go to the JRE section and select the Runtime JRE – Project JRE (jdk-17.0.11-9-hotspot).
  • Go to Preferences, select Tooling, and for the Java VM for Studio Service select Project JRE (jdk-17.0.11-9-hotspot).

 

So, our setup is complete after following all the above steps, and you can deploy your MuleSoft application on Java 17!

Conclusion

Upgrading to Java 17 is essential for enhancing the security, performance, and stability of your MuleSoft APIs. As a Long-Term Support (LTS) release, Java 17 provides extended support, modern features, and critical security updates, ensuring your applications stay robust and efficient. By installing Java 17 and configuring Anypoint Studio accordingly, you position your MuleSoft integrations for improved performance.

Comparing MuleSoft and Boomi: A Deep Dive into Features and Components

MuleSoft and Boomi are two popular vendors that provide reliable solutions for integrating devices, data, and apps in the constantly changing field of enterprise integration. Each platform has distinct characteristics and advantages that make it appropriate for different company requirements. To help you make a wise choice, we will examine the key features and components of Boomi and MuleSoft in this blog.

Overview of MuleSoft and Boomi

MuleSoft

MuleSoft’s Anypoint Platform is an integration solution that focuses on API-led connectivity. It enables organizations to connect applications, data, and devices seamlessly. Key features include robust API management, extensive pre-built connectors, and a powerful data transformation language called DataWeave. MuleSoft is ideal for enterprises with complex integration needs, offering flexible deployment options (on-premises, cloud, or hybrid) and a comprehensive set of tools for building and managing APIs.

Boomi

Boomi (formerly part of Dell Technologies) provides a cloud-based integration platform as a service (iPaaS) designed for ease of use. It features a user-friendly interface with drag-and-drop functionality, making it accessible to non-technical users. Boomi offers over 200 pre-built connectors and supports real-time and batch integrations. It is particularly suited for mid-sized businesses looking for rapid deployment and straightforward integration solutions without the complexity of enterprise-level features.

Integration Capabilities

MuleSoft’s Integration Capabilities

  • Supports various integration patterns, including API-led integration, data integration, and event-driven architectures.
  • Offers pre-built connectors for numerous applications, databases, and protocols.
  • Allows for complex transformations using DataWeave, a powerful data transformation language.

Figure 1: A Mulesoft flow with four components.

 


Figure 2: DataWeave script used in the Transform Message component
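The script in the screenshot is not reproduced here. As a rough illustration only, with hypothetical field names, a DataWeave transformation of that kind might look like this:

%dw 2.0
output application/json
---
{
  // build a new JSON structure from the incoming payload
  fullName: payload.firstName ++ " " ++ payload.lastName,
  orderTotal: sum(payload.items.price default [])
}

A script like this sits in a Transform Message component and runs against the message payload at that point in the flow.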

 


Figure 3: The MuleSoft Mule Palette, listing available components for building flows.

Boomi’s Integration Capabilities

  • It provides a visual interface with drag-and-drop functionality, making it accessible to users without extensive technical backgrounds.
  • Offers over 200 pre-built connectors and a rich library of integration processes.
  • It includes real-time and batch-processing features and is suitable for various business scenarios.

Figure 4: The flow in Boomi with four main steps.


Figure 5: Mapping function used in Boomi


API Management 

MuleSoft

  • Excels in API management, allowing users to design, document, secure, and analyze APIs from a single platform.

  • Offers features like API gateways, policy enforcement, and analytics to monitor API performance.


Figure 7: MuleSoft Anypoint Platform and available options


Boomi

  • Includes API management capabilities but is more focused on integration than comprehensive API lifecycle management.
  • Users can create APIs easily, but the depth of management features may not be as robust as MuleSoft’s.

    Figure 8: Management features in Boomi

MuleSoft and Boomi User Experience

MuleSoft

  • The Anypoint Platform provides a unified experience for integration and API management, but it can have a steeper learning curve, particularly for new users.
  • Offers extensive documentation and training resources to help users get up to speed.

Boomi

  • Known for its user-friendly interface, which simplifies the integration process.
  • Ideal for business users who need to quickly build and manage integrations without extensive technical expertise.

Deployment Options for MuleSoft and Boomi

MuleSoft

  • Offers flexibility in deployment, including on-premises, cloud, and hybrid environments.
  • This versatility allows organizations to choose the best option based on their infrastructure and security requirements.

Boomi

  • Primarily, it is a cloud-based solution that simplifies deployment and maintenance.
  • Some users may find this limiting if they require on-premises solutions for sensitive data.

Scalability and Performance

MuleSoft

  • It is suitable for enterprise-level applications and is designed to handle large-scale integrations and high data volumes.
  • The architecture supports microservices, enhancing scalability and performance.

Boomi

  • While Boomi is scalable, it may be better suited for mid-sized businesses or those with moderate integration needs.
  • Performance can vary based on the complexity of integrations and data volume.

MuleSoft and Boomi Support and Communities

MuleSoft

  • Offers a robust support system with various plans, including premium options for enterprises.
  • Has a strong community and ecosystem, providing forums, events, and extensive resources for users.

Boomi

  • Also provides solid customer support and a knowledge base, but the community aspect may not be as extensive as MuleSoft’s.
  • Users often report quick response times and helpful support staff.

Pricing

MuleSoft

  • MuleSoft offers a subscription-based pricing model containing various features and support, along with their base and support plans for every business size and requirement. The pricing structure of MuleSoft can be complicated, and additional costs may be added when one uses advanced services like API and data quality management. Here’s a pricing reference: https://www.mulesoft.com/anypoint-pricing

Boomi

  • Boomi is priced on a subscription basis depending on the specific functionality a business seeks and the scale at which it operates. Pricing will most likely fluctuate depending on the number of integrations, the connectors used, and the volume of data processed. Here’s the pricing reference: https://boomi.com/pricing/

When Should You Choose MuleSoft or Boomi?

When to Choose MuleSoft?

Here are key considerations for selecting the MuleSoft integration platform:

  • When your organization requires a robust platform capable of managing complex integrations.
  • When you have a technically proficient team of developers who can handle a code-centric approach when necessary.
  • When customization is a critical requirement for your business operations.

When to Choose Boomi?

Consider selecting the Boomi integration platform in the following scenarios:

  • When the primary goal is quick integration, even if it involves limited customization.
  • Ease of use is a key priority and must be a seamless experience.
  • When there is a significant need for cloud-based integrations.

Conclusion

Although Boomi and MuleSoft have strong integration features, their user bases and business requirements differ. MuleSoft excels at enterprise-level scalability and complex API management, which makes it the perfect choice for big businesses with complex integration needs. However, Boomi is a fantastic option for companies looking for efficient integration solutions due to its user-friendly interface and speedy deployment.

Your unique needs, technical know-how, and long-term integration plan will determine which of MuleSoft and Boomi is best for you. Both systems offer the resources required to support successful integration endeavors, regardless of your preference for a simple user interface or strong API administration.

 

 

How Copilot Vastly Improved My React Development

I am always looking to write better, more performant, and cleaner code. GitHub Copilot checks all the boxes and makes my life easier. I have been using it since the 2021 public beta, and the hype is real!

According to the GitHub Copilot website, it is:

“The world’s most widely adopted AI developer tool.”  

While that sounds impressive, the proof is in the features that help the average developer produce higher quality code, faster. It doesn’t replace a human developer, but that is not the point. The name says it all: it’s a tool designed to work alongside developers.

When we look at the stats, we see some very impressive numbers:

  • 75% of developers report more satisfaction with their jobs 
  • 90% of Fortune 100 companies use Copilot 
  • 55% of developers prefer Copilot
  • Developers report a 25% increase in speed 

Day in the Life

I primarily use Copilot for code completion and test cases for ReactJS and JavaScript code.

When typing predictable text such as “document” in a JavaScript file, Copilot will review the current file and public repositories to provide a contextually correct completion. This is helpful when I create new code or update existing code. Code suggestion via Copilot chat enables me to ask for possible solutions to a problem: “How do I type the output of this function in TypeScript?”

Additionally, it can explain existing code, “Explain lines 29-54.” Any developer out there should be able to see the value there. An example of this power comes from one of my colleagues: 

“Copilot’s getting better all the time. When I first started using it, maybe 10% of the time I’d be unable to use its suggestions because it didn’t make sense at all. The other day I had it refactor two classes by moving the static functions and some common logic into a static third class that the other two used, and it was pretty much correct, down to style. Took me maybe thirty seconds to figure out how to tell Copilot what to do and another thirty seconds for it to do the work.” 

Generally, developers dislike writing comments. Worry not, Copilot can do that! In fact, I use it to write the first draft of every comment in my code. Copilot goes a step further and writes unit tests from the context of a file: “Write Jest tests for this file.”

One of my favorite tools is /fix, which provides an attempt to resolve any errors in the code. This is not limited to errors visible in the IDE. Occasionally after compilation, there will be one or more errors. Asking Copilot to fix these errors is often successful, even though the error(s) may not be visible. The enterprise version will even create commented pull requests!

Although these features are amazing, there are methods to get the most out of it. You must be as specific as possible. This is most important when using code suggestions.

If I ask, “I need this code to solve the problem created by the other functions,” I am not likely to get a helpful solution. However, if I ask, “Using lines 10-150 and the following functions (a, b, and c) from file two, give me a solution that will solve the problem,” I am much more likely to get something useful.

It is key whenever possible, to break up the requests into small tasks. 

Copilot Wave 2 

The future of Copilot is exciting, indeed. While I have been talking about GitHub Copilot, the entire Microsoft universe is getting the “Copilot” treatment. In what Microsoft calls Copilot Wave 2, it is added to Microsoft 365.  

Wave 2 features include: 

  • Python for Excel 
  • Email prioritization in Outlook 
  • Team Copilot 
  • Better transcripts with the ability to ask Copilot a simple question as we would a co-worker, “What did I miss?”  

The most exciting new Copilot feature is Copilot Agents.  

“Agents are AI assistants designed to automate and execute business processes, working with or for humans. They range in capability from simple, prompt-and-response agents to agents that replace repetitive tasks to more advanced, fully autonomous agents.” 

With this functionality, the entire Microsoft ecosystem will benefit. Using agents, it would be possible to find information quickly in SharePoint across all the sites and other content areas. Agents can function autonomously and are not like chatbots. Chatbots work on a script, whereas agents function with the full knowledge of an LLM. For example, a service agent could provide documentation on the fly based on an English description of a problem, or answer questions from a human with very human responses based on technical data or specifications.

There is a new Copilot Studio, providing a low-code solution that allows more people to create agents.

GitHub Copilot is continually updated as well. Since May, Copilot Extensions have been in private beta, allowing third-party vendors to utilize the natural language processing power of Copilot inside of GitHub. Other major enhancements include moving Copilot to GPT-4o and the ability for customers to use plugins and extensions to expand functionality.

Conclusion

Using these features with Copilot, I save between 15 and 25 percent of my day writing code, freeing me up for other tasks. I’m excited to see how Copilot Agents will evolve into new tools to increase developer productivity.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

From Code to Cloud: AWS Lambda CI/CD with GitHub Actions

Introduction:

Integrating GitHub Actions for Continuous Integration and Continuous Deployment (CI/CD) in AWS Lambda deployments is a modern approach to automating the software development lifecycle. GitHub Actions provides a platform for automating workflows directly from your GitHub repository, making it a powerful tool for managing AWS Lambda functions.

Understanding GitHub Actions CI/CD with AWS Lambda

Integrating GitHub Actions for CI/CD with AWS Lambda streamlines the deployment process, enhances code quality, and reduces the time from development to production. By automating the testing and deployment of Lambda functions, teams can focus on building features and improving the application rather than managing infrastructure and deployment logistics. This integration is essential to modern DevOps practices, promoting agility and efficiency in software development.

Prerequisites:

  • GitHub Account and Repository 
  • AWS Account 
  • AWS IAM Credentials 

DEMO:

First, we will create a folder structure like the one below and open it in Visual Studio.

Image 1
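
For reference, the layout looks roughly like the sketch below. The folder and file names are taken from the steps later in this walkthrough (the project folder LearnLambdaCICD, a src folder holding lambda_function.py, and a .github/workflows folder holding deploy_cicd.yaml):

    LearnLambdaCICD/
    ├── .github/
    │   └── workflows/
    │       └── deploy_cicd.yaml    <- GitHub Actions pipeline (created in a later step)
    └── src/
        └── lambda_function.py      <- Lambda handler code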

After this, open AWS Lambda and create a function using Python with the default settings. Once created, we will see the default Python script. Ensure that the file name in AWS Lambda matches the one we created under the src folder.

Image 2
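
The default Python script that AWS generates is essentially the handler below (reproduced from memory, so treat it as an approximation of what the console shows rather than an exact copy):

    import json

    def lambda_handler(event, context):
        # TODO implement your business logic here
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from Lambda!')
        }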

Now, we will create a GitHub repository with the same name as our folder, LearnLambdaCICD. Once created, it will prompt us to configure the repository. We will follow the steps mentioned in the GitHub Repository section to initialize and sync the repository.

Image 3

Next, create a folder named .github/workflows under the main folder. Inside the workflows folder, create a file named deploy_cicd.yaml with the following script.

Image 4
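
The exact script is shown in the image above; a workflow along the lines of the sketch below covers the same pieces. The action versions, trigger branch, zip packaging step, and function name placeholder are assumptions for illustration, not a copy of the original file:

    name: Deploy to AWS Lambda

    on:
      push:
        branches: [ main ]               # assumed trigger branch

    env:
      AWS_DEFAULT_REGION: ap-south-1     # region used in this demo

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - name: Check out the repository
            uses: actions/checkout@v4

          - name: Configure AWS credentials
            uses: aws-actions/configure-aws-credentials@v4
            with:
              aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
              aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
              aws-region: ${{ env.AWS_DEFAULT_REGION }}

          - name: Package the Lambda source
            run: |
              cd src
              zip -r ../lambda.zip lambda_function.py

          - name: Deploy the package to AWS Lambda
            run: |
              aws lambda update-function-code \
                --function-name <your-lambda-function-ARN> \
                --zip-file fileb://lambda.zip

The secrets referenced here are the same AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values configured in the next steps.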

As per this YAML, we need to set AWS_DEFAULT_REGION according to the region we are using; in our case, that is ap-south-1. We will also need the ARN from the AWS Lambda page, and we will use that value in our YAML file.

We then need to configure the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. To do this, navigate to AWS IAM and create a new access key for your IAM user.

Once created, we will reference the access key and secret access key in our YAML file. Next, we will map these values in our GitHub repository by navigating to Settings > Secrets and variables > Actions and configuring them as secrets.

Updates:

We will update the default code in the lambda_function.py file in Visual Studio. This way, once the pipeline builds successfully, we can see the changes in AWS Lambda as well. The modified file is shown below:

Image 5

Our next step will be to push the code to the Git repository using the following commands:

  • git add .
  • git commit -m "Last commit"
  • git push

Once the push is successful, navigate to GitHub Actions from your repository. You will see the pipeline deploying and eventually completing, as shown below. We can further examine the deployment process by expanding the deploy section. This will allow us to observe the steps that occurred during the deployment.

Image 6

Now, when we navigate to AWS Lambda to check the code, we can see that the changes we deployed have been applied.

Image 7

We can also see the directory changes in the left pane of AWS Lambda.

Conclusion:

As we can see, integrating GitHub Actions for CI/CD with AWS Lambda automates and streamlines the deployment process, allowing developers to focus on building features rather than managing deployments. This integration enhances efficiency and reliability, ensuring rapid and consistent updates to serverless applications. By leveraging GitHub’s powerful workflows and AWS Lambda’s scalability, teams can effectively implement modern DevOps practices, resulting in faster and more agile software delivery.

AEM Front-End Developer: 10 Essential Tips for Beginners https://blogs.perficient.com/2024/12/20/aem-front-end-developer-10-essential-tips-for-beginners/ Fri, 20 Dec 2024 16:45:31 +0000

Three years ago, I started my journey with Adobe Experience Manager (AEM) and I still remember how overwhelmed I was when I started using it. As a front-end developer, my first task in AEM – implementing responsive design – was no cakewalk and required extensive problem solving. 

In this blog, I share the 10 tips and tricks I’ve learned to help solve problems faced by front-end developers. Whether you’re exploring AEM for the first time or seeking to enhance your skills, these tips will empower you to excel in your role as a front-end developer in the AEM ecosystem. 

1. Get Familiar With AEM Architecture

My first tip is to understand AEM’s architecture early on.   

  • Learn Core Concepts – Before diving into code, familiarize yourself with AEM’s components, templates, client libraries, and the content repository. Learn how each of these pieces interacts and fits into your application. 
  • Sling and JCR (Java Content Repository) – Gain a basic understanding of Apache Sling (the web framework AEM is built on) and how JCR stores content. This foundational knowledge will help you understand how AEM handles requests and manages content. 
  • Get Familiar with CRXDE Lite – CRXDE Lite is a lightweight, browser-based development tool that comes out of the box with Adobe Experience Manager. Using CRXDE Lite, developers can access and modify the repository in their local development environments from within the browser. You can edit files, folders, nodes, and properties; the entire repository is accessible through this easy-to-use interface. Keep in mind that CRXDE Lite lets you make instant changes to the website, and you can synchronize these changes with your code base using plugins for the most used code editors like Visual Studio Code, Brackets, and Eclipse. 
  • Content Package – An AEM front-end developer needs to work on web pages, but we don’t have to create them from scratch. We can build and download content packages to share with other developers or to bring content from production into local development environments.  

The points above are the basic building blocks that front-end developers should be aware of when starting out. For more detail, check out the AEM architecture intro on Adobe Experience League.

2. Focus on HTML Template Language (HTL)

AEM uses HTL, which is simpler and more secure than JSP. Start by learning how HTL works, as it’s the main way you’ll handle markup in AEM. It’s similar to other templating languages, so you’ll likely find it easy to grasp.
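
As a quick taste of the syntax, here is a small, hypothetical HTL snippet; the Sling Model class and property names are made up for illustration:

    <!--/* Render a title property and loop over a list supplied by a Sling Model */-->
    <div data-sly-use.model="com.example.core.models.TeaserModel">
        <h2>${properties['jcr:title'] @ context='html'}</h2>
        <ul data-sly-list.item="${model.linkItems}">
            <li><a href="${item.url}">${item.label}</a></li>
        </ul>
    </div>

Expressions like ${...} are automatically escaped for their context, which is a big part of why HTL is considered more secure than JSP.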

3. Master Client Libraries (Clientlibs)

Efficient Management of CSS/JS  

AEM uses client libraries (clientlibs for short) to manage and optimize CSS and JavaScript files, so it’s important to learn how to organize CSS/JS files efficiently using categories and dependencies. This helps load only the required CSS/JS for a webpage, which improves page performance.   
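
At its core, a clientlib is just a repository folder with a few properties. As a rough, hypothetical example (the category and dependency names are made up), the folder's .content.xml might look like this, with css.txt and js.txt listing the source files to include:

    <?xml version="1.0" encoding="UTF-8"?>
    <jcr:root xmlns:cq="http://www.day.com/jcr/cq/1.0" xmlns:jcr="http://www.jcp.org/jcr/1.0"
        jcr:primaryType="cq:ClientLibraryFolder"
        categories="[myproject.site]"
        dependencies="[myproject.vendor]"/>

    css.txt:
        #base=css
        site.css

    js.txt:
        #base=js
        site.js

A component or page then pulls in only the categories it needs, which keeps unused CSS/JS off the page.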

Minimize and Bundle

Use the out-of-the-box Adobe Granite HTML Library Manager (com.adobe.granite.ui.clientlibs.impl.HtmlLibraryManagerImpl) OSGi configuration to minify CSS/JS, which produces smaller file sizes and boosts page load time.  
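
A hedged sketch of what that configuration can look like when deployed as code follows; the property names are as I recall them, so verify them against your instance's OSGi console before relying on this:

    /apps/myproject/osgiconfig/config/com.adobe.granite.ui.clientlibs.impl.HtmlLibraryManagerImpl.cfg.json
    {
      "htmllibmanager.minify": true,
      "htmllibmanager.gzip": true,
      "htmllibmanager.debug": false
    }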

For more information check out Adobe Experience League.  

4. Leverage AEM’s Component-Based Architecture

Build components with reusability in mind. AEM is heavily component-driven, and your components will be used across different pages. Keeping them modular will allow authors to mix and match them to create new pages. 

5. Use AEM’s Editable Templates

Editable templates are better than static templates. AEM’s editable templates give content authors control over layout without developer intervention. As front-end developers, the CSS/JS we build must be independent of templates; a clientlib related to a UI component should work without issues on pages built from any template. 

6. Get Familiar with AEM Development Tools

There are multiple development tools (extensions) available for the most used text editors like Brackets, Visual Studio Code, and Eclipse. You should use these extensions to speed up your development process. They help you synchronize your local environment with AEM, making it easier to test changes quickly.   

Check out Experience League for more information.  

7. Start With Core Components

AEM comes with a set of Core Components that cover many basic functionalities, such as text, image, and carousel. Using the Core Components as building blocks (by extending them) to build custom components saves development time and follows best practices, as the small sketch below illustrates. For more details, check out the Core Components documentation on Adobe Experience League.  
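
For instance, extending (proxying) the Core Text component is often as simple as creating a component node whose sling:resourceSuperType points at the core version; the project name, title, and group below are hypothetical:

    <?xml version="1.0" encoding="UTF-8"?>
    <jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0" xmlns:cq="http://www.day.com/jcr/cq/1.0"
        xmlns:jcr="http://www.jcp.org/jcr/1.0"
        jcr:primaryType="cq:Component"
        jcr:title="Text (My Project)"
        sling:resourceSuperType="core/wcm/components/text/v2/text"
        componentGroup="My Project - Content"/>

The markup, dialog, and logic come from the core component; you only override the pieces (typically the HTL or the clientlibs) that your design actually needs.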

8. Understand the AEM Content Authoring Experience

Work With Content Authors  

As a front-end developer, it’s important to collaborate closely with content authors. Build components and templates that are intuitive to use and provide helpful options for authors. By doing this, you will gain an understanding of how authors use your components, which will help you make them more user friendly each time. 

Test Authoring

Test the authoring experience frequently to ensure that non-technical users can easily create content. The easier you make the interface, the less manual intervention will be required later. 

9. Keep Accessibility in Mind

Accessibility First  

Make sure your components are accessible. AEM is often used by large organizations, where accessibility is key. Implement best practices like proper ARIA roles, semantic HTML, and keyboard navigation support. I have spent a good amount of time on different projects enhancing accessibility attributes, so keep it in mind from the beginning. 
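
A small, hypothetical example of what this looks like in component markup, using a native button (which gives you keyboard support for free) and explicit ARIA state instead of a generic clickable div:

    <!-- Accordion trigger: semantic element plus ARIA state -->
    <button type="button" aria-expanded="false" aria-controls="faq-panel-1">
        What is AEM?
    </button>
    <div id="faq-panel-1" role="region" aria-label="What is AEM?" hidden>
        Adobe Experience Manager is Adobe's content management system.
    </div>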

AEM Accessibility Features  

Leverage AEM’s built-in tools for accessibility testing and ensure all your components meet the required standards (e.g., WCAG 2.1). For more information, you can read the Experience League article on accessibility.  

10. Leverage AEM’s Headless Capabilities

Headless CMS With AEM

Explore how to use AEM as a headless CMS and integrate it with your front end using APIs. This approach is particularly useful if you’re working with modern front-end frameworks like React, Angular, or Vue.js. 

GraphQL in AEM

AEM offers GraphQL support, allowing you to fetch only the data your front end needs. Start experimenting with AEM’s headless features to build SPAs or integrate with other systems. 
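
For example, a query against a hypothetical Article content fragment model could request just the two fields the front end renders. AEM generates the actual schema (and query names like the one below) from your content fragment models, so treat these names as placeholders:

    {
      articleList {
        items {
          title
          authorName
        }
      }
    }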

SPA Editor

The AEM SPA Editor is a specialized tool in Adobe Experience Manager designed to integrate Single Page Applications (SPAs) into the AEM authoring environment. It enables developers to create SPAs using modern frameworks like React and Angular out of the box, while allowing content authors to edit and manage content within AEM, just as they would with traditional templates. Do you remember when I mentioned the developer tools for IDEs? Well, there is one to map your SPA application to work with the AEM ecosystem.   

More Insights for AEM Front-End Developers

In this blog, we’ve discussed AEM architecture, HTL, Clientlibs, templates, tools, components, authoring, accessibility, and headless CMS as focus areas to help you grow and excel as an AEM developer.  

If you have questions, feel free to drop the comments below. And if you have any tips not mentioned in this blog, feel free to share those as well!  

And make sure to follow our Adobe blog for more Adobe platform insights! 
