How to Create a Test Plan Using Azure DevOps

Azure Test Plans, a service included with Azure DevOps, provides a browser-based test management solution for exploratory, planned manual, and user acceptance testing. Azure Test Plans also provides a browser extension for exploratory testing and gathering feedback from stakeholders.

An Azure DevOps test plan is divided into three sections:

  1. Test Plan – The container to group all your project test suites.
  2. Test Suite – The container to group all your test cases.
  3. Test Cases – The actual test scenarios, i.e., the execution steps that validate the requirement.

In Azure DevOps, there are three different types of test suites you can create:

  1. Static: A static suite is one where the test cases are assigned manually. Static suites can group any test cases and are the default type; the only disadvantage is that their test cases can’t be linked directly to a requirement.
  2. Requirement-Based: A requirements-based suite pulls in all the test cases linked to a requirement. These suites are created directly from a requirement and are generally used by QA/BAs to determine whether a user story has passed test execution.
  3. Query-Based: These suites are built from a query. The query can contain any filter, such as a test title that contains a particular word or test cases created after a particular date.
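If you'd rather script this plan/suite/case hierarchy than click through the UI, Azure DevOps also exposes it through a REST API. The sketch below is my own hedged addition, not from the original walkthrough: the endpoint path and payload fields follow the public Test Plans REST docs (api-version 7.1), and the organization, project, and PAT are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CreateTestPlan
{
    static async Task Main()
    {
        // Placeholders: your organization, project, and a personal access token (PAT).
        string org = "your-organization";
        string project = "MyProject";
        string pat = Environment.GetEnvironmentVariable("AZDO_PAT");

        using (var http = new HttpClient())
        {
            // PATs authenticate via basic auth with an empty user name.
            http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

            // Name, area path, and iteration mirror the fields the UI asks for.
            string body = "{\"name\":\"Smoke Test Plan\",\"areaPath\":\"" + project +
                          "\",\"iteration\":\"" + project + "\\\\Sprint 1\"}";

            HttpResponseMessage response = await http.PostAsync(
                $"https://dev.azure.com/{org}/{project}/_apis/testplan/plans?api-version=7.1",
                new StringContent(body, Encoding.UTF8, "application/json"));

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
```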

Follow the steps below to create the Test Plan:

  1. Log in to your Azure DevOps account.
  2. Select your project and click on “Test Plans.”

  3. Click on “New Test Plan.”

  4. Enter the test plan name and select the “Area Path” and “Iteration.”

  5. Once the test plan is created, you’ll then create the test suite.

  6. In the example below, I’ve created a static suite to hold the test cases required for the “Smoke” flow (e.g., login, logout, etc.). You can use the other test suite types mentioned above, depending on your need.

Note: You can add existing test cases or create a new test case by clicking the “New Test Case” button in the right corner of the screen.

[Screenshots: New Test Plan; Add Existing Test Cases; Add Test Cases Using Grid]

  7. Once all the test suites and test cases are added, you can run all the test cases, or a single one, and validate whether each step in the test case works as expected.

[Screenshot: Smoke Flow]

Follow the steps below to run and validate the steps under the test case:

Note: I don’t come from an automated testing background and can only speak to Azure DevOps Test Plans as they pertain to manual testing. Therefore, in this article, I have used the manual testing workflow.

Note: There are different options to run the test case, as shown in the screenshot below.

For a web application test case, I’ve chosen “Run for web application.” You can choose the option depending on your application type.

Once you’ve clicked on “Run for web application,” Azure DevOps opens a window and starts showing the steps you’ve added to the test case.

Note: If you select multiple test cases, the runner will execute them from top to bottom.

Once this screen loads, you’ll open a browser and follow the steps added in the test case.

A. Click the “check” icon if the step works as described.

B. Click the “cross” icon if the actual result does not match the expected result mentioned in the test case.

Note: When you mark a step with the cross (X) button, you can enter a comment, which can be viewed when you review the results for failed test cases.

Once all the test cases have executed, click “Save and Close.” This closes the test runner window, and under the test plan you’ll be able to see the results for the test suite.

[Screenshot: Smoke Flow Test Points]

Here are some additional tips:

During the test run, you can perform the following actions using the Test Runner window:

a. Record the screen: Records the screen activity and adds the recording as an attachment to the test result in Azure DevOps.

b. Capture screenshot: Captures an image of the entire window, screen, or selected tab.

[Screenshot: Gmail login]

c. Create a bug: If a step fails, you can create a bug directly from the test runner window by clicking the “create issue” link. This creates a ticket on the Azure board and documents the steps that failed.

Key Takeaways

These simple steps can be taken to create a test plan using Azure DevOps. For more information, contact our experts today.

How to Monitor Performance Using Application Insights

Monitoring your website’s performance is key to ensuring your application performs as expected. There are many application performance monitoring (APM) tools in the market, such as New Relic, Azure Monitor, AppDynamics, and others. In this post, I’ll talk more about Application Insights, a performance monitoring tool provided by Microsoft.

What is Application Insights?

Application Insights is a powerful APM tool within Azure Monitor. It monitors and delivers real-time data to detect and diagnose app issues.

In addition to monitoring, the tool gathers and compiles usage data to reveal how an app is utilized. This helps developers find and fix performance bottlenecks.

Application Insights has built-in support for .NET, Java, Node.js, Python, and client-side JavaScript applications. In this blog, we’ll talk about .NET applications and how Application Insights can be integrated with the Optimizely B2B Commerce SDK.

What Application Insights Monitors

Application Insights is aimed at helping the development team understand how the app is performing and how it’s used. It monitors:

  1. Performance issues including resourcing problems and dependencies
  2. Pageviews, user, and session counts
  3. Errors and diagnostic codes

How to Configure Application Insights in Your Project or Application

The configuration of Application Insights is simple and straightforward. Together we’ll learn how to configure Application Insights on the Optimizely B2B Commerce framework.

Before starting the configuration of Application Insights in your application, you’ll need to first create the Application Insights instance using the Azure Portal.

Below are the steps to create the Application Insights instance using the Azure Portal:

  1. Log in to the Azure Portal, then follow the steps shown in the video below.

https://www.youtube.com/watch?v=Gh_o9ODHBgQ

Once all the above steps have been completed, your Application Insights is finally set up and ready to collect data.

Below are steps to configure Application Insights in your applications. For this blog post, we’re setting up Application Insights for an Optimizely B2B Commerce Framework.

  1. Open the Optimizely B2B Commerce solution in Visual Studio
  2. Right-click on the “InsiteCommerce.Web” project
  3. Select “Configure Application Insights”
  4. On the Connect Service tab, select “Application Insights SDK (local)”
  5. Click “Next” and then “Finish.” This installs the NuGet packages required for Application Insights
  6. Once the NuGet package is installed successfully, expand InsiteCommerce.Web and locate the “ApplicationInsights.config” file
  7. Copy the node “<InstrumentationKey>Your Key</InstrumentationKey>” and paste it at the top of the file
  8. Replace “Your Key” with the instrumentation key you copied during the Application Insights instance setup
  9. Finally, build and run the application/website on your local instance

Refer to the below video for all the steps mentioned above:

https://www.youtube.com/watch?v=lxBGgPfpkIk

If all of the above steps are executed successfully, you’ll be able to see the Application Map and logs/traces in Application Insights.
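As a quick way to confirm telemetry is flowing, you can send a few custom items from code. This is a minimal sketch of my own, assuming the Microsoft.ApplicationInsights NuGet package from step 5 is installed; on .NET Framework, TelemetryConfiguration.Active picks up the key from ApplicationInsights.config.

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

class TelemetrySmokeTest
{
    static void Main()
    {
        // On .NET Framework, TelemetryConfiguration.Active is initialized from
        // ApplicationInsights.config, so the instrumentation key from step 8 is used.
        var client = new TelemetryClient(TelemetryConfiguration.Active);

        client.TrackTrace("Telemetry smoke test started");       // appears under Traces
        client.TrackEvent("OrderSubmitted");                     // appears under Custom Events
        client.TrackException(new InvalidOperationException());  // appears under Failures

        client.Flush(); // telemetry is batched; flush before a short-lived process exits
    }
}
```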

These are the initial steps to configure Application Insights in your project/application. In the next blog, we’ll cover the details of implementing custom logging in Application Insights from the Optimizely B2B Commerce framework. For more information on these processes, contact our experts today.

Six Simple Steps to Adding Pages in the Optimizely Admin Console

Optimizely is a leading digital experience and commerce platform with extensive out-of-the-box B2B commerce capabilities that can be customized to clients’ specific needs.

During an Optimizely B2B Commerce integration for one of our clients, we ran into a scenario where we needed to extend the Optimizely B2B Commerce admin console. Optimizely restricts customization of the default admin pages in the B2B Commerce admin console; however, we can build extension/custom pages inside it.

Below are the steps to add a new page in the Optimizely B2B commerce admin console.

Step 1: Create a view (CSHTML) page in the InsiteCommerce.Web project.
[Screenshot: Creating the view in InsiteCommerce.Web]
You can follow the folder structure defined in your project. I tried to follow the same structure the other admin pages use: “Area/AdminExt/Views.”

Step 2: Add “Insite.Admin” reference to your extension project (library).

Step 3: Create a “controller class” in the extension project/library and follow the naming convention. In my case, I am creating an extension page for the table “Order History Extension.”

Step 4: Add the dependency name attribute to a controller class.

Note: Adding a dependency name attribute will define the URL pattern for the Admin console page. In my case, the URL for my order history page will be “https://domainname/admin/OrderHistoryExtension.”

Step 5: Create an entry in “AdminConsoleMenuExtensions.json”. Adding the entry in this file will create a link in the Admin Menu/Navigation panel.


Note: The value in “Href” needs to match the dependency name attribute from step four.

Step 6: Define the “index” method and add the attribute “ReturnIndexForNonAjaxRequests” and return the partial view. The purpose of this attribute is to render the custom page inside the Insite Admin master layout.

Once all of the above steps are completed, you’ll do the following:

  1. Log in to the admin console.
  2. On the admin left navigation menu, you will see the extension menu.

  3. Click on the “Order History Extension” link, and on the right-hand side, you will see the OrderHistoryExtension list view.

[Screenshot: OrderHistoryExtension list view]

Note: The “[ReturnIndexForNonAjaxRequests]” attribute is required on the method defined in the controller. If the attribute is missing from the controller method, the custom page will render outside the admin layout, as shown in the screenshot below.

Once the above is completed, you’ll pull the data from your custom table either using standard model view controller (MVC) code or by implementing a standard REST API with AngularJS. In this example, I’ve used seed/dummy data and displayed it using MVC code.

  1. Create the view model and name it “OrderHistoryExtensionView.”
  2. Add the properties “OrderNumber,” “StoreNumber,” and “DeliveredBy.”
  3. Add the seed data and return it as the object model when returning the partial view.

Refer to the code snippet below, which demonstrates how to show data on the custom page we created under the Optimizely B2B Commerce admin console:

OrderHistoryController.cs
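The original code screenshot isn’t reproduced here, so below is a hypothetical reconstruction based on the steps above. The [DependencyName] and [ReturnIndexForNonAjaxRequests] attributes come from the Insite (Optimizely B2B Commerce) SDK as described in steps 4 and 6; the exact namespaces and base class may differ in your SDK version, so treat this as a sketch rather than the article’s exact code.

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// The view model described above (OrderNumber, StoreNumber, DeliveredBy).
public class OrderHistoryExtensionView
{
    public string OrderNumber { get; set; }
    public string StoreNumber { get; set; }
    public string DeliveredBy { get; set; }
}

[DependencyName("OrderHistoryExtension")] // step 4: defines the /admin/OrderHistoryExtension URL
public class OrderHistoryExtensionController : Controller
{
    [ReturnIndexForNonAjaxRequests] // step 6: renders the page inside the admin master layout
    public ActionResult Index()
    {
        // Seed/dummy data stands in for a query against the custom table.
        var model = new List<OrderHistoryExtensionView>
        {
            new OrderHistoryExtensionView { OrderNumber = "ORD-1001", StoreNumber = "S-01", DeliveredBy = "Carrier A" },
            new OrderHistoryExtensionView { OrderNumber = "ORD-1002", StoreNumber = "S-02", DeliveredBy = "Carrier B" },
        };
        return PartialView("~/Area/AdminExt/Views/OrderHistoryExtension/List.cshtml", model);
    }
}
```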


You can view the “List.cshtml” view below:

Below is the admin view:

All the above steps will work on the Optimizely B2B Commerce SDK framework without depending on the Optimizely support team. Contact us to learn more about our extensive Optimizely expertise and strong partnership.

How to Publish and Subscribe Messages to a Queue Using MSMQ

Now that you’ve learned how to install MSMQ in your technology stack, you can set up publishing and subscribing to messages as well. Follow the steps below to set up the code for publishing, which allows the sender application to send messages to the queue.

Step 1: Create the Publisher Application. For this example, I’ve used the Rest API application to publish a message to the queue.

Open Visual Studio, choose “create a new project,” select “ASP.NET Web Application (.NET Framework),” enter the project name, and choose “Web API.”

Step 2: Add the System.Messaging reference. Right-click on the “References” option and select “Add a reference.”

Step 3: Create the API Controller. For this example, I used the “OrderRefreshController.”

Step 4: Create the entity for the message. We can pass the message as an object with the structure below; mark the class with the [Serializable] attribute so it can be serialized on the sender side (see the sketch after step 5).

Step 5: Write the line of code below to send the message to the queue.
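The screenshots for steps 4 and 5 aren’t reproduced here, so here is a minimal sketch of both: a [Serializable] message entity and the send call via System.Messaging. The queue path and entity fields are placeholders of my own, not the exact ones from the original post.

```csharp
using System;
using System.Messaging; // requires the System.Messaging reference added in step 2

[Serializable]
public class OrderMessage
{
    public string OrderNumber { get; set; }
    public decimal Total { get; set; }
}

public static class OrderQueuePublisher
{
    private const string QueuePath = @".\Private$\OrderRefreshQueue";

    public static void Publish(OrderMessage order)
    {
        // Create the private queue on first use so the POST doesn't fail on a fresh machine.
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            // XmlMessageFormatter serializes the [Serializable] entity as XML in the message body.
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(OrderMessage) });
            queue.Send(order, "OrderRefresh"); // second argument is the message label
        }
    }
}
```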

Step 6: Execute the above function from the REST API POST method.

Step 7: Run and test the application.

https://www.youtube.com/watch?v=QFzhTSuTEv8

Subscribing to Messages Using MSMQ

When you subscribe to a queue, you can receive and read each message after it has been published to the queue.

Follow the steps below to implement the subscription to the queue.

Step 1: To subscribe to the messages, I created a Windows service, a program that operates in the background of a Windows machine or server. Once installed and started on the Windows server, it keeps running and reads messages from the queue as they are published.

Step 2: Once the Windows service application is created, add the System.Messaging reference.

Below is what the Windows service project structure will look like:

Note: I have renamed “Service1.cs” to “SubscribeOrderRefreshQueue.cs” to follow the naming convention. You can use any standard name as per your project naming convention.

Step 3: Add the class file “QueueHelper,” which contains the methods below:

  • ConnectToQueue: Connects to the MSMQ queue
  • MyReceiveCompleted: Reads the message when it’s published

Refer to the code snippet below:
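The original screenshot isn’t reproduced here; the sketch below is a hedged reconstruction of the QueueHelper described above, using the standard System.Messaging asynchronous receive pattern. The queue path is a placeholder, and OrderMessage is the [Serializable] entity from the publisher section.

```csharp
using System;
using System.Messaging;

public class QueueHelper
{
    private MessageQueue _queue;

    public void ConnectToQueue(string path)
    {
        _queue = new MessageQueue(path);
        _queue.Formatter = new XmlMessageFormatter(new[] { typeof(OrderMessage) });

        // Fire MyReceiveCompleted whenever a message lands on the queue.
        _queue.ReceiveCompleted += MyReceiveCompleted;
        _queue.BeginReceive(); // start listening without blocking the service thread
    }

    private void MyReceiveCompleted(object sender, ReceiveCompletedEventArgs e)
    {
        Message message = _queue.EndReceive(e.AsyncResult);
        var order = (OrderMessage)message.Body;
        // ...process the order here (update the database, call an API, etc.)...

        _queue.BeginReceive(); // re-arm for the next message
    }
}
```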

Step 4: Use the helper from the Windows service. Right-click on the service class “SubscribeOrderRefreshQueue.cs” and select “View Code.” In the “OnStart” method, write the code that connects to the queue and wires up the “MyReceiveCompleted” handler.

Note: I have added the method “RunInteractive” so that I can run the Windows service in interactive (debug) mode.

Below are the changes to run the Windows service in debug mode:

  1. Right-click on the “Windows service project” and navigate to “Properties.” In the Application tab, change the output type to “Console Application.”

  2. Add the code below in the “.cs” file.
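The exact lines from the original screenshot aren’t available, but a common pattern for this change looks like the sketch below (assuming the entry point is Program.cs): run interactively when launched from a console, otherwise hand control to the service control manager.

```csharp
using System;
using System.Reflection;
using System.ServiceProcess;

static class Program
{
    static void Main()
    {
        var servicesToRun = new ServiceBase[] { new SubscribeOrderRefreshQueue() };

        if (Environment.UserInteractive)
        {
            RunInteractive(servicesToRun); // debug mode: drive OnStart/OnStop from the console
        }
        else
        {
            ServiceBase.Run(servicesToRun); // normal service-control-manager startup
        }
    }

    static void RunInteractive(ServiceBase[] services)
    {
        foreach (var service in services)
        {
            // OnStart is protected, so interactive runners typically invoke it via reflection.
            MethodInfo onStart = service.GetType().GetMethod(
                "OnStart", BindingFlags.Instance | BindingFlags.NonPublic);
            onStart.Invoke(service, new object[] { new string[0] });
        }

        Console.WriteLine("Services running in interactive mode; press any key to stop.");
        Console.ReadKey();

        foreach (var service in services)
        {
            service.Stop();
        }
    }
}
```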

Now you are all set to run the Windows service and test the publish and subscribe options. See the video below to understand the flow from publishing a message to subscribing to/reading it. In the video, the Postman app calls the API application, which pushes the message to the queue, and the Windows service application receives and processes it. This way, MSMQ acts as a middleman and establishes communication between the two applications (source and destination) without a direct connection between them.

https://www.youtube.com/watch?v=Onkoy9PZJtc

For more information on setting up these services in your technology stack, contact our commerce experts today. 

5 Easy Steps for MSMQ Installation

The need for mature technology is drastically increasing, placing more emphasis on creating architecture that is modern, reliable, and scalable, not only for individual technologies but also for how they interact with each other. Microsoft Message Queuing (MSMQ) is one of the simplest methods for helping your business’ technologies interact and connect with each other.

Below are the steps to install MSMQ on Windows 10, along with the code that allows technologies and platforms to send and receive messages through a queue.

Step 1: Open the Windows Control Panel. Select “Programs,” and then select “Programs and Features.”

Step 2: Choose “Turn Windows features on or off.” Then, select “Microsoft Message Queue (MSMQ) Server.”

Step 3: Click on the taskbar search option and search for “Computer Management.”

Step 4: Expand the “services and applications” option. Then, expand “Message Queuing.”
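As an optional sanity check (my addition, not part of the installer steps), a short console program can confirm MSMQ is working by round-tripping a message through a private queue. It assumes a .NET Framework project with a reference to System.Messaging.

```csharp
using System;
using System.Messaging;

class MsmqSmokeTest
{
    static void Main()
    {
        const string path = @".\Private$\InstallCheck";

        // Create the private queue if this is the first run.
        if (!MessageQueue.Exists(path))
            MessageQueue.Create(path);

        using (var queue = new MessageQueue(path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            queue.Send("hello", "install-check");

            // Receive blocks until a message arrives; it throws if MSMQ isn't running.
            Message received = queue.Receive(TimeSpan.FromSeconds(5));
            Console.WriteLine("MSMQ round trip OK: " + received.Body);
        }
    }
}
```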

Let’s Put This in Perspective

If all is installed correctly, your ERP system will publish order updates to the queue whenever there are changes in the ERP. Each update comes across as an order update message. The consumer application, such as a commerce website platform, will then read the message published by the ERP system and perform the updates in its own database.

Quick and Easy Installation

In just a few simple steps, you can install MSMQ in your technology stack to enable seamless communication between applications and platforms. Stay tuned for my next piece, How to Publish and Subscribe Messages to a Queue Using MSMQ, where you’ll learn how to set up publishing and subscribing to messages using MSMQ. For more information on installation or MSMQ technology, contact our commerce experts today.

Setting Up the Release Pipeline for Optimizely B2B Commerce Cloud

Once we’ve completed the build pipeline setup and it’s running successfully, it’s time for the final step in your process: setting up the release pipeline. The release pipeline is used to deploy the artifact generated by the build pipeline to a server.

Follow the steps below to set up the release pipeline:

  1. Set up the Deployment Group. A deployment group is a logical set of deployment target machines that have agents installed on each one. Deployment groups represent the physical environments, such as “Dev,” “Test,” “UAT,” and “Production.” In effect, a deployment group is just another grouping of agents, much like an agent pool.

Follow the steps shown in the video below for setting up the deployment group.

https://www.youtube.com/watch?v=L_hnEJDlqZ4

  2. Once the deployment group is set up and shows as online, navigate to the pipeline, click “+ New,” and choose “New release pipeline.”

Note: InsiteCommerce is now Optimizely B2B Commerce

You will see the options “Artifacts” and “Stages” on the new pipeline configuration page.

  3. “Add an artifact” allows you to choose where you want to pick the artifacts from.

  4. We are using the “Build” option as the source type because the project repo is in the same Azure DevOps environment. After selecting the source type, select the Project and Source (build pipeline).

[Screenshot: Add an artifact – Build]

  5. Select the Source (i.e., the name or ID of the build pipeline that publishes the artifact).

  6. Select the default version “Latest” to deploy the latest build artifact when the release pipeline is executed. The version can be changed for manually created releases at release creation time.

  7. Once the artifact is set up, it’s time to set up the stages. A stage in a release pipeline consists of jobs and tasks.
  8. To add a stage to your release pipeline, select “Add” and click “New stage” on the releases page.
  9. After adding a stage, add the task (job) by selecting an empty job or using a pre-defined template, such as “IIS Website Deployment.”

  10. Once you’ve added the stage, configure the task (job), which deploys the artifacts generated by the build pipeline to the server.

Below are the minimum tasks required to deploy the web application to IIS (the web server):

Check out the video for details on the tasks added in the stage configuration:

https://www.youtube.com/watch?v=EuS3zTA8jbk

Simple and Easy Pipeline Processes

Following these simple, seamless steps can ensure a proper release pipeline setup for your application. Technical processes can be easy as long as you have the right partner, like Perficient, to help you. For more information on these technical processes, contact our experts today.

How to Deploy SQL Database Changes Using Azure DevOps

The database is a core part of any type of application, and the database schema is constantly changing during the application development phase. It is important to deploy the database changes whenever you deploy the application code to a different instance, such as dev, QA, stage, or production.

However, manually deploying database changes is a tedious process. By setting up automated deployment, you will save time and deploy the database changes seamlessly along with your application code. In fact, we can use an Azure DevOps pipeline to deploy a .dacpac file produced by building a SQL Server Database project.

What is a DACPAC File?

A Data-tier Application Package (DACPAC) is a single file containing the database model and all the files representing database objects. It’s a binary representation of a database project, compatible with SQL Server Data Tools (SSDT), and the name comes from the file extension.

How to Create a DACPAC Using Visual Studio 

  1. Create a SQL project using Visual Studio.

  2. After creating the project, you can see the database project in Solution Explorer.

  3. Once the project is created, you can import a .dacpac or scripts from an existing database, or generate the scripts using SQL Server Management Studio (SSMS) and add them to the Visual Studio project.

  4. You can create the script in SQL Server and add it to the Visual Studio project.
  5. Before adding the script to the Visual Studio project, set up the folder structure.

[Screenshot: Build script]

  6. Go to SQL Server to generate the SQL Server scripts and add them to Visual Studio.

  7. Build the solution once all the steps are completed.

  8. Once the solution builds successfully, the .dacpac file is generated.

  9. Once all these steps are executed, set up the Azure SQL DacpacTask in the release pipeline, which will deploy the database changes to a different instance, such as dev, QA, stage, or production.
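As a side note, the same .dacpac can also be deployed programmatically with the DacFx API (the Microsoft.SqlServer.DacFx NuGet package). This is a hedged sketch of that alternative, not the Azure SQL DacpacTask itself; the connection string and file paths are placeholders.

```csharp
using System;
using Microsoft.SqlServer.Dac;

class DacpacDeployer
{
    static void Main()
    {
        // Placeholder connection string and path to the .dacpac produced by the build.
        var connectionString = "Server=.;Database=master;Integrated Security=true;";

        using (DacPackage package = DacPackage.Load(@"bin\Release\MyDatabase.dacpac"))
        {
            var services = new DacServices(connectionString);
            services.Message += (s, e) => Console.WriteLine(e.Message); // progress output

            // upgradeExisting: true applies schema changes to an existing database
            // instead of requiring a fresh one.
            services.Deploy(package, "MyDatabase", upgradeExisting: true);
        }
    }
}
```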

Setting up Automated Deployment Using Azure DevOps

Start by setting up the build pipeline. This pipeline will build the solution and generate the .dacpac as an artifact taken by the release pipeline. Follow these steps:

  1. Log in to the Azure DevOps portal (dev.azure.com)
  2. Navigate to Pipelines and click on “New Pipeline”
  3. To set up the pipeline, follow the steps shown in the video presentation below.

https://www.youtube.com/watch?v=6986Rk12whg

Next, you will set up the release pipeline. Once the build pipeline generates the artifact (the .dacpac file), the release pipeline will take the .dacpac file and execute it against our SQL instances, such as dev, QA, stage, or production.

To start setting up the release pipeline, navigate to Pipelines, click “Releases,” and then create a new release pipeline.

Follow the steps shown in the below video presentation to set up the release pipeline.

https://www.youtube.com/watch?v=EQeYSU8e_Ec

Automated Processes in Just a Few Steps

These steps can help you turn your manual deployment process into an automated one in no time. To learn more, contact our technical experts today, and stay tuned for more.

Continuous Integration (CI) Pipeline Configuration for Episerver B2B Commerce

After learning how to set up a basic continuous integration (CI) pipeline through Azure in my first blog, How to Set Up Automated Deployment with CI/CD Pipelines through Azure, the next stage is walking through the steps to build the CI pipeline for the Episerver B2B Commerce application.

Below, you will find a snapshot of the list of agent jobs (tasks) required for this stage.

Follow the steps below for the details of each job (task) you must add to build the Episerver B2B Commerce application:

  1. Add NuGet 4.4.1 & NuGet Restore: The task will restore the NuGet packages used in the Epi B2B commerce application.
  2. Install Sass, Ruby, Compass, and Grunt: The Epi B2B Commerce application uses SASS files to manage its style sheets, so these tools must be installed on the build server. This allows the build pipeline to execute the grunt task and compile the SASS files into .css output.

Run the commands shown in the screenshots to install Sass (locally, then globally), Ruby, Compass, and Grunt.

[Screenshots: Install Sass; Install Sass globally; Install Ruby; Install Compass; Install Grunt]

  3. Add the Grunt task to execute the grunt tasks defined in the gruntfile.js file.

  4. Add the “Visual Studio Build” task to build the solution.

Note: Enter the MSBuild arguments as follows: /p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\"

  5. Add the “Publish Symbols” task to index your source code and publish symbols to a file share or the Azure Artifacts symbol server.

Note: $(BuildPlatform) and $(BuildConfiguration) are variables that you can configure under the “Variables” tab. Variables are used to configure values that change per instance; for example, the build configuration in development is different than in production.

  6. Add the “Publish Build Artifacts” task to publish the build artifact.

  7. Once all the above configuration is complete, click “Save & queue.” This will save the entire configuration and start the build pipeline.

Once the build pipeline has finished executing, you will be able to see a status of success or failure.

If anything goes wrong, or the pipeline execution fails, you can check the logs and fix the errors. If it ran successfully, you will be able to see the artifact generated by the build pipeline.

Moving On to the Final Step

Once the CI build pipeline is set up and running successfully, you will then need to set up the continuous deployment (CD) pipeline, which I will cover in the final installment. For more information on these technical processes, contact our experts today.

How to Set Up Automated Deployment with CI/CD Pipelines through Azure

The main purpose of automation within your business is to deploy your application to different environments, such as dev/QA/production, with the push of a button and without manual intervention. Automation reduces the risk of deployment errors, reduces the number of development hours spent deploying code changes to multiple environments, and helps deploy changes more frequently to development and QA environments so comprehensive testing can happen as soon as possible after changes are made.

To deploy Episerver B2B Commerce Application into your Azure platform, you must start with understanding the setup of a CI/CD (Continuous Integration/Continuous Deployment) pipeline. A CI/CD pipeline is the backbone of the modern DevOps environment, as it bridges the gap between development and operations teams by automating the building, testing, and deployment of applications.

Below is a representation of a CI/CD pipeline setup and how it works.

To set up automated deployment, start with an automation tool such as Azure DevOps, which provides various interfaces and tools to automate the entire process.

  1. The Git repository is where the development team plans and commits code.
  2. The build pipeline is the interface that runs the jobs and tasks: it takes the latest code, builds it, and creates an artifact.
  3. Finally, the release pipeline collects the latest artifact and deploys the changes to the different environments.

Next, you will set up the build pipeline within Azure.

  1. Log in to Azure DevOps using the URL (https://dev.azure.com/).
  2. Select your project.

  3. Navigate to and select “Pipelines.”

Note: This landing page shows the list of build pipelines you have created. If no pipeline exists yet, a button will appear to create one.

  4. Click the “New Pipeline” button to create the new build pipeline.

  5. Once you click on the new pipeline, it will ask you to choose “Where is your code?”

Note: There are two ways to create a build pipeline: “Use the Classic Editor” or a “YAML” file. This article targets “Use the Classic Editor.”

  6. On clicking “Use the Classic Editor,” it will ask you to select the source control.
  7. Select your source control and click “Continue.” For this article, we are using “Azure Repos Git.”
  8. After clicking continue, you will get the option to start from an “Empty Job” or to select a “Template.” For this article, we are using the “ASP.NET” template.
  9. Once you have selected the configuration and template, pre-defined tasks will load. In this pane, you can examine the various tasks for your build pipeline, which performs jobs such as fetching sources from the Git repository, restoring dependencies, compiling the application, running tests, and publishing outputs used for deployments.
  10. To add a new task, click on the “+” sign in the agent section. This shows the list of tasks you can select and add to your pipeline.
  11. Once all of the tasks are added successfully, click “Save & Queue” to ensure that all the tasks work and the pipeline can build the code and create the publish artifacts.

This is Just the Beginning

Understanding this background information and these few steps sets you on the path to implementing Episerver B2B Commerce Cloud on your Azure platform. Stay tuned for our next installment, where we explain the next steps of Episerver B2B Commerce Cloud configuration and the setup of a release pipeline. To find out more about our services, contact our experts today.

How Implementing MSMQ Technology Improves Communication between Technologies

Having clear and consistent communication is key to accomplishing your business goals. It’s even more important to have open communication between your business technologies to ensure your organization continues to run as is, if not better.

In modern technology architecture, performance and decoupled integration are key factors in application development. Message queues can significantly simplify the coding of decoupled applications, improve performance, reliability, and scalability, decouple heavyweight processing, buffer or batch work, and smooth over spiky workloads.

Here are some reasons why implementing Microsoft Message Queuing (MSMQ) is beneficial to how your technologies interact and connect:

How MSMQ Works

MSMQ is a technology for asynchronous messaging, meaning an immediate response is not required to move forward through a given process. With MSMQ, two or more applications can send messages to each other, whether on remote machines or over the internet. MSMQ carries the message between the application sending it and the one receiving it. Once the receiving program gets the message, it can read and respond to it.

Let’s say you need to submit an order to your ERP system, and the ERP order processing is slow because the system is offline or the validation of the order is slow to respond. You don’t want the slow processing to affect your application. MSMQ is ideal for this kind of scenario because the sender application can create a message for the order and send it to the messaging queue. The ERP system can read when the system is online and available. Once the ERP system receives the order, it can process the message and send back an acknowledgment or denial message to the sender application through the queue. This technique allows the ERP to process the pending order behind the scenes. The sender application can continue to send a new order without waiting for the previous order to finish processing.

The Advantages of Using MSMQ

Better Performance

Message queues enable asynchronous communication, meaning the endpoints producing and consuming messages interact with the queue, not with each other. Producers (the sender application) can add requests to the queue without waiting for them to be processed. Consumers (the receiving application) can process messages whenever they are available. The programs are not dependent on one another, and no component stalls while waiting for another, allowing for optimized data flow. Queues make your data persistent and reduce the errors that happen when different parts of your system go offline.

Granular Scalability

Message queues make it possible to scale precisely in the places you need to. When workloads peak, your application can add all of its requests to the queue without the risk of collision. As your queues get longer with incoming requests, you can distribute the workload across a fleet of consumers. Also, producers, consumers, and the queue itself can all grow and shrink on demand.

Simplified Decoupling

Due to MSMQ’s decoupled architecture, applications can communicate and send messages between various types of independent technology platforms through common formats such as XML or JSON. Message queues are a simple way to decouple distributed systems, whether you’re using monolithic applications, microservices, or serverless architectures.

Types of Message Queues

Point-to-Point

A message is sent from one application to another via a queue. More than one consumer can listen to the queue and potentially receive a message from it, but only one of the consumers will get any given message.

Publish/Subscribe

This messaging model allows a publisher (the application publishing and sending the message) to send a message to multiple subscribers (the applications reading and acting on the message) through a topic (the link between the publisher and subscribers). Subscribers also have the option of whether to acknowledge the published message.

What to Keep in Mind

MSMQ technology comes with many benefits that simplify tech integration and communication, but there are some points to keep in mind with this particular technology.

Limited Queue Space and Message Size

Like a hard drive, each queue can only hold up to 2GB of messages and cannot store any more once the storage is full. You must remain mindful of how many messages are outgoing and stored within the queue to avoid filling it up, or a message may be deleted if it is not received in time.

Concerning message size, each individual message has a size limit of 4MB. Although this is a large amount, it is important to remain aware of this limitation if you anticipate sending larger messages or multiple large files.

Accessible through Windows

For applications to communicate through MSMQ, they must run on a Microsoft Windows server. Therefore, for those who do not use a Microsoft Windows server, MSMQ will not work.

How to Learn More

Let MSMQ technology act as the intermediary for all your technology-messaging needs. To learn how to implement it, stay tuned for my second piece, “Steps to Integrating MSMQ Technology into Your Technology Stack.” For any other questions regarding this type of technology, contact our experts for more information today.
