Microsoft Articles / Blogs / Perficient
https://blogs.perficient.com/category/partners/microsoft/

Getting Started with VBA Programming: Types of VBA Macros
https://blogs.perficient.com/2025/01/06/types-of-vba-macros/
Mon, 06 Jan 2025 14:53:57 +0000

What is VBA?

Visual Basic for Applications (VBA) is a programming language developed by Microsoft. It is used primarily in Microsoft Office applications such as Excel, Word, and Access to automate repetitive tasks.

Types of VBA Macros

VBA macros are custom scripts that automate tasks and improve efficiency within Microsoft Office applications. They vary in functionality, from simple recorded macros to complex event-driven scripts, and VBA categorizes them based on what they do and the events that trigger them. Here is a breakdown of the most commonly used types:

A visually appealing infographic showcasing VBA (Visual Basic for Applications) and its different types of macros.

1. Recorded Macros

  • Description: A sequence of actions carried out within an Office application is recorded to create these macros. VBA translates these actions into code automatically.
  • Use Case: Great for automating repetitive tasks without manually writing code.
  • Example: Automatically applying consistent formatting to a set of worksheets in Excel.

Learn more about how to record macros in Excel.

2. Custom-Coded Macros

  • Description: These are manually written scripts that perform specific tasks. They offer more flexibility and functionality than recorded macros.
  • Use Case: Useful for complex tasks that require conditional logic, loops, or interaction between multiple Office applications.
  • Example: Generating customized reports and automating email notifications from Outlook based on Excel data.

3. Event-Driven Macros

  • Description: These macros run automatically in response to specific events, such as opening a document, saving a file, or clicking a button.
  • Use Case: Used for automating tasks that should happen automatically when a certain event occurs.
  • Example: Automatically updating a timestamp in a cell every time a worksheet is modified.

4. User-Defined Functions (UDFs)

  • Description: These are custom functions created using VBA that can be used just like built-in functions in Excel formulas.
  • Use Case: Ideal for creating reusable calculations or functions unavailable in Excel by default.
  • Example: Creating a custom function to calculate a specific financial metric.

5. Macro Modules

  • Description: A module is a container for VBA code, which can include multiple macros, functions, and subroutines. Related macros can be grouped together and organized using these.
  • Use Case: Useful for keeping code organized, especially in large projects.
  • Example: Group all macros related to data processing in one module and all macros associated with reporting in another.

Each type of macro serves a distinct function and suits specific tasks, depending on the requirements. Use these macros actively based on your needs to achieve the best results.

Conclusion

VBA allows you to automate operations and increase productivity in Microsoft Office programs. Understanding the different types of macros helps you select the right approach for your requirements, whether you are recording actions, writing custom scripts, or building event-driven automation. Choosing the right type of macro improves productivity and streamlines your workflow. Begin learning VBA to bring new levels of efficiency to your work.

Happy reading and automating!

Building Azure DevOps CI Pipelines for SPFx
https://blogs.perficient.com/2024/12/31/building-azure-devops-ci-pipeline-for-spfx/
Tue, 31 Dec 2024 07:07:35 +0000

This blog offers a comprehensive guide to setting up Continuous Integration (CI) in Azure DevOps to automate the integration of SharePoint Framework (SPFx) code by leveraging Azure DevOps pipelines. This process aims to streamline development workflows, improve code quality, and ensure quicker code validation before deployment without any manual processing.

Continuous Integration (CI) is the process of automating the build and testing of code when a developer commits changes to source control. A commit to source control triggers an automated build that pulls the latest code from version control, builds it, and runs tests against it (if configured).

Prerequisite for Building CI pipeline for SPFx in Azure DevOps

To set up Continuous Integration (CI) for SPFx in Azure DevOps, ensure you have the following already set up:

  • An Azure DevOps account with required access
  • Your SharePoint Framework (SPFx) project should be stored in a Git repository
  • Ensure the repository includes the necessary package.json, gulpfile.js, and other configuration files required to build and bundle your SPFx solution

Implementation

To implement CI, we must create a new Pipeline in Azure DevOps. Building a pipeline includes the following major steps:

  • Create a build definition
  • Install NodeJS
  • Restore npm packages
  • Build the solution
  • Package the solution
  • Prepare the Artifacts
  • Publish the Artifacts

Create a Build Definition

A build definition contains the definition and configuration for the build. Follow the steps below to create a new build definition.

  • Log in to Visual Studio Online (Azure DevOps)
  • Select your project to set up a build definition.
  • From the left navigation, click Pipelines > Builds.
  • Click “New pipeline” > Click on “Use the classic editor”.
  • Select “Azure Repos Git” > Select Team Project > Select Repository > Select branch for CI implementation.

Selectsource

  • Under “Select a template”, select “Empty Pipeline”.

Selecttemplate

  • The build definition has a default agent. We can add multiple tasks to the agent to define our build.

Pipelinedetails

In this case, in agent specification, I have used Windows-2022, but you can also choose “Windows-latest” based on the environment in which you want to run your build.

Install NodeJS

  • On the default agent, click the + sign.
  • Search for “Node”.
  • Add Node.js tool installer.

Addnodejstool

  • Make sure you specify 10.x in the Version Spec field. If your project is based on SharePoint Framework 1.7.1 or earlier, use version 8.x.

Selectnotejsversion

Restore npm Packages

SharePoint Framework solution uses third-party npm packages. We need to restore those before starting the build process.

  • Add npm task.
  • Verify if the command is set to install.

Npminstall

Build the Solution

Build the SPFx solution to minify the required assets to upload to CDN.

  • Add a gulp task.
  • Set the Gulp file path to gulpfile.js.
  • Set the Gulp task to bundle.
  • Set the Gulp arguments to --ship.

Buildsolution

Note: Ensure the gulp task has the "--warnoff" argument along with "--ship" to avoid build failures in a production environment. Refer to the Configuration section below for details.

Package the Solution

The next step is to combine the assets into a package.

  • Add a gulp task.
  • Set the Gulp file path to gulpfile.js.
  • Set the Gulp task to package-solution.
  • Set the Gulp arguments to --ship.

Packagesolution

Prepare the Artifacts

Azure DevOps build does not retain any files. The “.sppkg” file created from the above step needs to be copied to the staging directory to be published to the release pipeline.

  • Add “Copy Files” task.
  • Set “Source Folder” to $(Build.Repository.LocalPath)/sharepoint/solution.
  • Set “Contents” to *.sppkg.
  • Set target folder to $(Build.ArtifactStagingDirectory)/drop.

Setartifacts

Publish the Artifacts

Instruct Azure DevOps to keep the files after build execution.

  • Add the “Publish Build Artifacts” task.
  • Set “Path to publish” to $(Build.ArtifactStagingDirectory)/drop.
  • Set “Artifact name” to drop.

Publishartifacts

Configuration

During bundling and packaging of your SharePoint Framework solution, you could see two types of messages:

  • Warnings
  • Errors

When running a DEBUG build, neither type of message causes the process to fail with a stderr (standard error). In a PRODUCTION build, however, you would get the following type of error output:

Stderrcicd

This might be an issue in your automated build/release pipelines. For instance, when you automatically bundle and package your solution on Azure DevOps, there is no way to tell that it should continue when warnings occur. The only option you have is to “continue” on error.

To prevent this, we can add a "warnoff" flag to the build process so that warnings do not cause the build to fail. To do this, make the following changes in gulpfile.js:

'use strict';

const gulp = require('gulp');
const build = require('@microsoft/sp-build-web');

// Retrieve the current build config and check if there is a `warnoff` flag set
const crntConfig = build.getConfig();
const warningLevel = crntConfig.args["warnoff"];

// Extend the SPFx build rig, and overwrite the `shouldWarningsFailBuild` property
if (warningLevel) {
    class CustomSPWebBuildRig extends build.SPWebBuildRig {
        setupSharedConfig() {
            build.log("IMPORTANT: Warnings will not fail the build.");
            build.mergeConfig({
                shouldWarningsFailBuild: false
            });
            super.setupSharedConfig();
        }
    }

    build.rig = new CustomSPWebBuildRig();
}

build.initialize(gulp);

Conclusion

Setting up a Continuous Integration (CI) pipeline for SPFx in Azure DevOps automates the building, testing, and bundling of your SPFx solutions whenever code changes occur. The pipeline reduces the need for manual intervention and ensures that the latest code is validated before deployment.

Building Azure DevOps CD Processes for SPFx
https://blogs.perficient.com/2024/12/31/building-azure-devops-cd-process-spfx/
Tue, 31 Dec 2024 07:07:18 +0000

This blog provides a detailed explanation of the technical approach for implementing Continuous Deployment (CD) processes within Azure DevOps. It focuses on automating the deployment of solutions to SharePoint environments. This approach not only speeds up the release cycle but also enhances reliability, minimizes errors, and ensures that updates are deployed quickly and effectively.

Continuous Deployment (CD) takes validated code packages from the build process and deploys them into a staging or production environment. Developers can track successful deployments and narrow issues to specific package versions.

Prerequisite for building CD for SPFx in Azure DevOps

To set up Continuous Deployment (CD) for SPFx in Azure DevOps, ensure you have the following already set up:

  • An Azure DevOps account with required access
  • CI pipeline for building the required package file .sppkg for deployment
  • Required access to App Catalog for deploying to SharePoint Online

Implementation

We need to create a new Release in Azure DevOps to implement CD. It requires the following steps:

  • Creating the Release Definition
  • Link the Build Artifact
  • Create the Environment
  • Install NodeJS
  • Install Office 365 CLI
  • Connect to App Catalog
  • Add Solution Package to App Catalog
  • Deploy the App
  • Set Environment Variables

Creating the Release Definition

  • Log in to Visual Studio Online (Azure DevOps)
  • Select your project to set up a build definition.
  • From the left navigation, click Pipelines > Releases.
  • Click the “+ New” button > click “New Release Pipeline”.

Createreleasedefinition

  • Select template > Empty job > Apply.

Selectreleasetemplate

Linking the Build Artifact

  • Click on Add an artifact.
  • Select Project, Source, etc.

Buildartifact

Note: Give a meaningful name to “Source alias” and note it down. This name will be used in upcoming steps.

Setartifactdetails

Create the Environment

  • Under Stages, click “Stage 1”.
  • Name your environment.

Createreleaseenvironment

Installing NodeJS

  • Go to the “Tasks” tab
  • The task configuration window will appear the same as in the build definition.
  • On the default agent, click + sign.
  • Search for “Node”.
  • Add Node.js tool installer.
  • Specify 10.x in the Version Spec field. If your project is based on SharePoint Framework 1.7.1 or earlier, use version 8.x.

Cdinstallnpdejs

Install Office 365 CLI

The Office 365 CLI (command-line interface) is an open-source project from the OfficeDev PnP community.

  • Add npm task.
  • Under “Command,” select custom.
  • In the “Command and Arguments,” type install -g @pnp/office365-cli.

Installoffice365cli

Set Environment Variables

Before connecting to SharePoint, we can define some variables that are used in multiple steps of the deployment process. Define these pipeline variables in the "Variables" tab as shown below.

  • Click the Variables tab.
  • Under Pipeline variables, add the variables below.

Setenvironmentvariables

Connect to App Catalog

We need to authenticate against our tenant’s app catalog.

  • Add the “Command Line” task.
  • In the “Script” field, type in the below command:
o365 spo login https://$(tenant).sharepoint.com/$(catalogsite) --authType password --userName $(username) --password $(password)

Connecttoappcatalog

Add Solution Package to App Catalog

Now, we need to upload the solution package to the app catalog.

  • Add “Command Line” task.
  • In the “Script” field, type in the below command:
o365 spo app add -p $(System.DefaultWorkingDirectory)/<Source alias>/drop/webparts.sppkg --overwrite

Note: “Source alias” is the alias name set up during the “Link the Build Artifact” step.

Addsolutionpackagetoappcatalog

Deploy the App

Finally, we must deploy the app .sppkg file to the App Catalog to make it available to all site collections within the tenant.

  • Add the "Command Line" task.
  • In the “Script” field, type in the below command.
o365 spo app deploy --name webparts.sppkg --appCatalogUrl https://$(tenant).sharepoint.com/$(catalogsite)

Deployappcatalog

Conclusion

Setting up Continuous Deployment (CD) for SPFx in Azure DevOps automates the deployment of the solution package to the App Catalog in the SharePoint environment. This lets developers focus on their solutions while ensuring a seamless and consistent delivery process, faster iterations, and a more agile and adaptable development environment.

Create and Manage Microsoft Teams and Channels with PowerShell
https://blogs.perficient.com/2024/12/24/create-and-manage-microsoft-teams-and-channels-with-powershell/
Tue, 24 Dec 2024 06:05:32 +0000

In this blog, we will walk through the process of creating a Team, adding Team Channels, and managing owners and members within Microsoft Teams using PowerShell scripts. You’ll learn how to automate the creation of Teams, organize them with different channels, and efficiently manage user roles and permissions, such as assigning ownership or membership. This will help you streamline administrative tasks in Microsoft Teams, enabling you to maintain better control and organization within your Teams environment.

Prerequisites for Creating and Managing Microsoft Teams with PowerShell

To execute these commands, you need the Microsoft Teams PowerShell module installed on your computer. Use the PowerShell cmdlet below to install it. I am performing this in PowerShell ISE.

Install-Module -Name MicrosoftTeams -Force -AllowClobber

Once the setup is completed, open the PowerShell ISE in administrator mode and set up a new team.

Create a Team with PowerShell

To work with Teams using PowerShell, connect the Teams module by importing it in code as follows:

# Sign into Teams online
Import-Module -Name MicrosoftTeams
$credential = Get-Credential
Connect-MicrosoftTeams -Credential $credential

You will be prompted to enter your credentials when you execute the above commands. Ensure that you provide the admin credentials for Microsoft Teams.

Next, create the team as either Private or Public using the following command:

New-Team -DisplayName <TeamName> -Visibility <Private/Public> -Description <DescriptionAboutTeam> -MailNickName <AliasName> -Owner <TeamOwnerEmail>

Example: New-Team -DisplayName "Test Team" -Description "This Team is for testing purposes" -Visibility Private -MailNickName TestTeam -Owner abc@xyz.com

Note: Changes made to Teams via PowerShell may take up to two hours to take effect and appear in both the Office 365 admin portal and the Teams client.

Once the team is created, it will display the information about the newly created Team as follows:

Newteam

Parameters

  • Visibility: Public Teams enable any user to join the group, while Private Teams require approval from an owner before a user can join.
  • MailNickName: Email alias for the associated Office 365 group. MailNickName will be used as the PrimarySmtpAddress.
  • Owner: The Team owner can add or remove people from the Team. If the owner is not specified, it is set to the admin user who created the Team.

Add Users to a Team

To add users, you need the GroupId. Use the PowerShell command below to retrieve the GroupId based on the Teams display name.

# Get Team ID
$Group = Get-Team -DisplayName <TeamDisplayName> | select GroupId

Get-Team returns the GroupId, display name, visibility, description, and other details; here we select only the GroupId. Then, add users to the group by executing the following cmdlet.

# Add user to Team
Add-TeamUser -GroupID $Group.GroupId -User username@xyz.com

By default, the user is added as a member of the team. To add a user as the owner of a team, include the -Role parameter and set its value to Owner.

Remove Users from a Team

# Get Team ID
$Group = Get-Team -DisplayName <TeamDisplayName> | select GroupId
# Remove user from Team
Remove-TeamUser -GroupID $Group.GroupId -User abc@xyz.cloud

Once a user is removed from a team, they can no longer access the team’s channels, files, or conversations.

Note: Only the team owner has the authority to execute this command and remove a user.

Create Channels for a Team

After the team is created, you can add a channel to it. To do this, use the New-TeamChannel cmdlet with the following syntax.

New-TeamChannel -GroupId $GroupId -DisplayName <NewChannelDisplayName> -MembershipType <Private/Shared/Standard> -Owner <UserEmail>

Parameters

  • MembershipType: Channel membership type, Standard, Shared, or Private.
  • DisplayName: Name of the channel that needs to be created
  • Owner: UPN of owner that can be specified while creating a private channel.

Note: A SharePoint site is also generated when a private or shared channel is created in a team. Access to this site is restricted to the owners and members of the respective private or shared channel.

A team site is automatically created in SharePoint whenever a new team or a private/shared channel is created in Microsoft Teams. The site’s description or classification can be edited within the channel settings in Teams.

Add/Remove Users to a Team Channel

We can manage users within a specific channel, enabling us to assign them as either channel owners or members.

The command returns immediately, but the Teams application does not reflect the update right away; refresh the members page to see the change.

# Get the Group ID of a Team
$GroupId = (Get-Team -DisplayName <TeamDisplayName>).GroupId
#Add user to the Channel in Teams
Add-TeamChannelUser -GroupId $GroupId -DisplayName <ChannelDisplayName> -User <UserEmail>

You can include a role parameter if the user needs to be the channel owner. Refresh the members page to view the changes in the Team application.

Note: Due to technical limitations of private channels, users must first be members of the team to be added as members of a channel. Additionally, to become an owner of a channel, someone must first be a member of that channel.

Similarly, the following command can be used to remove a user from the team channel:

# Get the Group ID of a Team
$GroupId = (Get-Team -DisplayName <TeamDisplayName>).GroupId
#Remove user from the Channel in Teams
Remove-TeamChannelUser -GroupId $GroupId  -DisplayName <ChannelDisplayName> -User <UserEmail>

Remove a Channel from a Team

The following cmdlet deletes a specific channel from a Microsoft Teams team using the PowerShell module.

# Get the Group ID of a Team 
$GroupId = (Get-Team -DisplayName <TeamDisplayName>).GroupId
Remove-TeamChannel -GroupId $GroupId -DisplayName <ChannelDisplayName>

This action will not delete any content from the tabs associated with the channel. Also, only team admins or users with specific permissions are authorized to delete a channel from the team.

Note: This will “soft delete” a channel, which means its contents are not permanently removed immediately. You can restore this within 21 days before it gets permanently deleted. So, any new request to create a channel with the same information will fail for 21 days.

Deleting a standard channel does not delete the channel site. The channel is removed, but the files remain. Deleting a private channel removes the entire site, including all files and conversations.

Finally, disconnect the Microsoft Teams environment from the PowerShell using the following command:

Disconnect-MicrosoftTeams

Conclusion

You can efficiently create and manage Microsoft Teams and channels by utilizing the commands provided, allowing for seamless administration. This also offers the flexibility to manage users within both the teams and individual channels, whether private or shared. This gives you complete control over user roles, permissions, and access.

Insights about GitHub Copilot
https://blogs.perficient.com/2024/12/19/insights-about-github-copilot/
Thu, 19 Dec 2024 06:55:49 +0000

Developer tools and practices have evolved significantly over the last decade. Earlier developer ecosystems consisted of IDEs such as Eclipse and Visual Studio, technical self-help books, Stack Overflow, and Google. The term artificial intelligence was first used in 1956, but AI tools have become popular only recently because of increasing data volumes, advanced algorithms, and improvements in computing power and storage. With these evolving times, developers, testers, and business analysts have a wide range of options for getting assistance.

Before AI Tools Launch

  • Developers could spend a lot of time tracking down a minor syntax error, for example in string formatting.
  • Developers had to browse multiple links on Google to search for a solution and read through many suggestions.

What is GitHub Copilot?

Icon

GitHub Copilot is an AI coding assistant that helps you write code faster and with less effort, allowing you to focus more energy on problem-solving, collaboration, and the problem domain. GitHub Copilot has been shown to increase developer productivity and accelerate the pace of software development.

Why use GitHub Copilot: Copilot is a powerful tool in the right hands.

  • It generates code snippets for the developer.
  • It suggests syntax for newly released framework features
  • It suggests and explains design patterns
  • It offers code performance suggestions
  • Developers can even use it to learn a new coding language
  • Developers do not need to leave their development environment to find solutions; they can type keywords in place and get suggestions

How GitHub Copilot Works

OpenAI Codex, a machine learning model that translates natural language into code, powers GitHub Copilot. It draws context from comments and code to suggest individual lines and whole functions as you type. Codex is a version of GPT-3 (Generative Pre-trained Transformer 3) fine-tuned for programming tasks.

Features of GitHub Copilot

  • Suggests code as we type in IDE
  • Chat: We can prompt and get suggestions
  • Command line tool: Get code suggestions in the Command line
  • Copilot Enterprise only: Get a description of the changes for pull requests
  • Autocomplete repetitive sections of code for the method and functions
  • Policy management for administrators
  • Conversion of code comments to runnable code
  • Access management for enterprise owners and organization owners

Real-time Usage of GitHub Copilot

  1. Generate a class file for a JSON structure used by an API
  2. Write test cases for a method or function
  3. Generate a JSON string for a POST request in an API
  4. IntelliSense-style code suggestions that the developer can review and accept

Different types of subscriptions for Copilot:

  • CoPilot Individual: For individual developers, freelancers, students, and educators.
  • CoPilot Business: For organizations ready to improve engineering velocity, code quality, and developer experience.
  • CoPilot Enterprise: For companies looking for the most customization based on their organization’s knowledge and codebase.

More information on pricing is available on the GitHub Copilot page (GitHub Copilot · Your AI pair programmer · GitHub), where you can start a free trial, buy any of the above plans, or contact sales. Even for the free trial, you need to provide credit card information.

Advantages of using GitHub Copilot

  • GitHub Copilot is auto-enabled in the IDE (Visual Studio, VS Code, Eclipse) and keeps suggesting autocomplete code.
  • It suggests code for object initialization of a class and writes test cases; this helps developers reach code coverage of 85% or above in a few hours, improving productivity.
  • It provides suggestions for new frameworks; developers can type questions in chat and get appropriate responses.

Realtime use cases for GitHub Copilot.

  1. A code snippet for a Person class in C# that has nested classes, and the JSON generated from it. This is a sample response from GitHub Copilot when we want to test a Web API POST request with sample data:
public class Person
{
   public PersonName PersonName { get; set; }
   public Address PersonAddress { get; set; }
   public int Height { get; set; }
   public string EyeColor { get; set; }
}

public class PersonName
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
}

public class Address
{
    public string AddressLine1 { get; set; }
    public string AddressLine2 { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string ZipCode { get; set; }
}
  2. Prompt to GitHub Copilot: convert this to a string datatype. The response:
string json =
    "{\n" +
    "  \"PersonName\": {\n" +
    "    \"FirstName\": \"John\",\n" +
    "    \"LastName\": \"Doe\"\n" +
    "  },\n" +
    "  \"PersonAddress\": {\n" +
    "    \"AddressLine1\": \"123 Main St\",\n" +
    "    \"AddressLine2\": \"Apt 4B\",\n" +
    "    \"City\": \"Anytown\",\n" +
    "    \"State\": \"CA\",\n" +
    "    \"ZipCode\": \"12345\"\n" +
    "  },\n" +
    "  \"Height\": 180,\n" +
    "  \"EyeColor\": \"Brown\"\n" +
    "}";

3. Explain the repository pattern. Below is the response from GitHub Copilot:

Key Concepts:

  • Repository: A class that acts as an intermediary between the business logic layer and the data layer (database). It hides the details of data access and allows the business logic to work with an abstracted interface.
  • Interface: The repository often exposes an interface, allowing for flexibility in how data is retrieved and stored (e.g., from a database, in-memory, or an external service).
  • Abstraction: The repository provides an abstraction of the underlying data source, allowing the rest of the application to work with simple method calls without knowing the details of how data is stored or fetched.

Benefits of the Repository Pattern:

  • Separation of concerns: Keeps the data access logic isolated from the business logic.
  • Testability: Allows for easier unit testing by mocking the repository in tests.

  • Maintainability: Easier to manage changes to data access logic without impacting the rest of the application.

4. Writing test cases. Prompt to GitHub Copilot: write a test case for splitting a string using Moq.

Sure! Here's an example of how you can write a test case using Moq for a string with the value "one, two, three, four, five":

```csharp
using Moq;
using Xunit;

public class StringServiceTests
{
    [Fact]
    public void TestStringService()
    {
        // Arrange
        var mockStringService = new Mock<IStringService>();
        string testString = "one, two, three, four, five";
        mockStringService.Setup(service => service.GetString()).Returns(testString);

        // Act
        var result = mockStringService.Object.GetString();

        // Assert
        Assert.Equal(testString, result);
    }
}

public interface IStringService
{
    string GetString();
}
```

In this example:
- We create a mock of the `IStringService` interface.
- We set up the mock to return the string "one, two, three, four, five" when the `GetString` method is called.
- We then call the `GetString` method on the mock object and assert that the returned value is equal to the expected string.

Feel free to ask if you need further assistance or have any questions!

GitHub Copilot is available for the following IDEs

Visual Studio, Visual Studio Code, JetBrains

GitHub Copilot supports the following programming languages: Python, JavaScript, TypeScript, Go, and Ruby.

Downside of using GitHub Copilot:

  • GitHub Copilot keeps suggesting code that the developer may not intend to write, which can be annoying.
  • It may generate code from the prompt we provide that contains errors, and the developer must rework it accordingly.
  • Once Copilot is integrated into the development environment, it prompts you to press Tab to accept code that may be incorrect, so developers always need to use their discretion.
  • Hence, it is a powerful tool in the right hands.

A quote from recent news about AI tools

I would like to reference a statement by Nvidia CEO Jensen Huang at the recent World Government Summit in Dubai, where he said that youngsters should stop focusing on how to code; at this early stage of the AI revolution, programming is no longer a vital skill. Coding can be taken care of by AI, while humans take care of more important expertise such as domain knowledge, biology, education, and farming.

Conclusion:

This tool can be put to good use by adept developers as well as new developers for focused learning and improved organizational productivity.

It is time to evolve our beliefs, work alongside AI-powered tools, enhance our knowledge, and learn to use these tools so that we can keep up with changing times.

Developers can focus on enriching the user experience of their products and bringing in more innovation.

Powering the Future: Key Highlights from PPCC24 and What’s Next for Power Platform
https://blogs.perficient.com/2024/09/26/powering-the-future-key-highlights-from-ppcc24-and-whats-next-for-power-platform/
Thu, 26 Sep 2024 23:55:49 +0000

The energy was electric last week as thousands of attendees invaded MGM Grand along the Las Vegas Strip for the 3rd Annual Power Platform Community Conference (PPCC24).

From groundbreaking announcements to new features unveiled during keynotes from Microsoft’s Charles Lamanna, Corporate Vice President of Business Industry and Copilot, and Jeff Teper, President of Apps and Platforms, PPCC24 offered an electrifying three days of innovation and collaboration.

Lamanna kicked off day one with an eye-opening overview of Microsoft’s low-code superhero of today, Power Platform. With more than 48 million active users every month – surpassing the population of Spain – Power Platform has become the “one platform” for everyone, whether it’s for no code, low code or pro code. But what truly stole the show this year was Copilot – set to revolutionize how developers work, bringing automation dreams to life.

The future of low-code development is evolving, and at PPCC24, it was clear: Power Platform plus Copilot equals transformative potential for businesses across industries, signaling a new road ahead for citizen developers and Microsoft automation:


“Most people overestimate what they can do in one year and underestimate what they can do in ten years.”

Let’s dive into key announcements and takeaways from PPCC24:

The Rise of AI and Natural Language in Power Platform

AI is more deeply integrated into Power Platform than ever before, with a major emphasis on natural language capabilities and intelligent apps. Here are some of the top features unveiled during the conference:

  • Desktop Flows from Natural Language – Now in public preview, this feature enables users to generate desktop flows in Power Automate simply by using natural language. The barriers to automation just got lower for everyone, regardless of technical expertise.

 

  • Power Automate AI Recording for Desktop Flows – Also in public preview, this “show and tell” experience allows users to record desktop flows, making RPA workflows easier for users of all skill levels. The AI will interpret recordings to generate automated processes, speeding up adoption and productivity.

 

  • AI Agents for Copilot Studio – A game-changer for developers, AI agents will dynamically execute actions based on instructions and automatically handle workflow based on parameters. These agents can be trained and improved continuously, turning Copilot Studio into a true powerhouse for automation.

Coauthoring in Power Apps Now Generally Available

A highly anticipated feature from the Power Platform community, coauthoring in Power Apps ushers in the next level of developer collaboration. This functionality allows up to 10 developers to collaborate in real time, editing apps simultaneously and bringing a new level of teamwork to app development.

As Charles Lamanna put it, “We are now all coauthors of this vision.” The seamless collaboration made possible through coauthoring will undoubtedly push the boundaries of what’s possible for low-code development.


The Road Ahead is Copilot-First

A standout theme from the conference was a Copilot-first vision for the future of low-code development. With tools like Copilot Studio set to be upgraded with GPT-4, the next generation of low-code technologies will be supported by AI agents that assist with tasks like solution design, data modeling, development, and visual design.


Perficient a Standout in Power Platform’s Future

As a leading Microsoft Solutions Partner, ranked 12th for Microsoft Power Platform partners, Perficient is thrilled to be at the forefront of this Community. From hosting a successful happy hour at Chez Bippy’s the night before the conference, to engaging with attendees at our booth—where we proudly supported donations to St. Jude’s Children’s Hospital—we’re excited to continue building on PPCC24 momentum. Our focus on helping organizations harness the full power of the latest Power Platform features to innovate faster and more intelligently will continue to help us lead the way.

While PPCC24 offered new announcements and innovations, it is only the beginning. As an award-winning Microsoft Solutions Provider, we’re committed to building groundbreaking solutions and bringing the robust capabilities of Power Platform to organizations everywhere. Whether it’s through AI-driven automation, real-time app coauthoring, or our continued work with Copilot, we’re dedicated to empowering businesses to innovate at scale.

Read more about our Power Platform practice here and stay tuned for upcoming events, workshops, and other exciting Power Platform activities!

Smart Manufacturing, QA, Big Data, and More at The International Manufacturing Technology Show
https://blogs.perficient.com/2024/09/19/smart-manufacturing-qa-big-data-and-more-at-the-international-manufacturing-technology-show/
Thu, 19 Sep 2024 14:43:19 +0000

For my first time attending the International Manufacturing Technology Show (IMTS), I must say it did not disappoint. This incredible event in Chicago happens every two years and is massive in size, taking up every main hall in McCormick Place. It was a combination of technology showcases, featuring everything from robotics to AI and smart manufacturing.

As a Digital Strategy Director at Perficient, I was excited to see the latest advancements on display representing many of the solutions that our company promotes and implements at the leading manufacturers around the globe. Not to mention, IMTS was the perfect opportunity to network with industry influencers as well as technology partners.

Oh, the People You Will Meet and Things You Will See at IMTS

Whenever you go to a show of this magnitude, you’re bound to run into someone you know. I was fortunate to experience the show with several colleagues, with a few of us getting to meet our Amazon Web Services (AWS) account leaders as well as Google and Microsoft.

Google

The expertise of the engineers at each demonstration was truly amazing, specifically at one Robotic QA display. This robotic display was taking a series of pictures of automobile doors with the purpose of looking for defects. The data collected would go into their proprietary software for analysis and results. We found this particularly intriguing because we had been presented with similar use cases by some of our customers. We were so engrossed in talking with the engineers that our half-hour-long conversation felt like only a minute or two before we had to move on.

 

 

 

robotic manufacturing on display

After briefly stopping to grab a pint—excuse me, picture—of the robotic bartender, we made our way to the Smart Manufacturing live presentation on the main stage. The presenting technology companies explained how they envision the future with Manufacturing 5.0 and digital twins, featuring big data as a core component. It was reassuring to hear this, considering that it’s a strength of ours, reinforcing the belief that we need to continue focusing on these types of use cases. Along with big data, we should stay the course with the trends shaping the industry, like Smart Manufacturing, which at its root is a combination of operations management, cloud, AI, and technology.

Smart Manufacturing Presentation at IMTS

Goodbye IMTS, Hello Future Opportunities with Robotics, AI, and Smart Manufacturing

Overall, IMTS was certainly a worthwhile investment. It provided a platform to connect with potential partners, learn about industry trends, and strengthen our relationships with technology partners. As we look ahead to future events, I believe that a focused approach, leveraging our existing partnerships and adapting to the evolving needs of the manufacturing industry, will be key to maximizing our participation.

If you’d like to discuss these takeaways from IMTS Chicago 2024 at greater depth, please be sure to connect with our manufacturing experts.

Custom Weather Forecast Model Using ML Net
https://blogs.perficient.com/2024/09/10/custom-weather-forecast-model-using-ml-net/
Tue, 10 Sep 2024 20:12:22 +0000

Nowadays, AI is a crucial field, and frameworks like ML.NET can be used to build impressive applications alongside the pre-built models offered by cloud providers. It is important to learn how these services work behind the scenes, how to create custom models, and how your application can interact with AI frameworks beyond just the cloud providers or the source of the AI services.

How can I use ML Net?

ML Net can be used with Visual Studio 2019 or later (any edition) and also with Visual Studio Code, but it only works on a Windows OS. Its prerequisites are:

  • Visual Studio 2022 or Visual Studio 2019.
  • .NET Core 3.1 SDK or later.

ML Net 1

Image 1: Visual Studio installer, Installation Details contains the ML Net Model builder

ML Net 2

Image 2: Visual Studio Context Menu

After adding the ML Net component to your project, you can see a wizard that allows you to set up your model as you need (Image 3).

ML Net 3

Image 3: ML NET Wizard

Application Overview

The application starts with the weather groups; every item contains a temperature range, a button to view the historical data, and a forecast prediction (Image 4).

ML Net 4

Image 4: Weather forecast main page.

The source of those groups is a table named Weather with the attributes:

  • Id: primary key
  • Description: the group description, shown as the title of the cards in Image 4
  • MinRange: the minimum temperature that belongs to the group
  • MaxRange: the maximum temperature that belongs to the group

The “History” button shows a paginated table with all the historical data. The historical data contains the date in the format yyyy-mm-dd, the temperature, and whether the day was cloudy (Image 5).

 

ML Net 5

Image 5: Weather forecast historical page.

The predict option allows users to generate their own prediction using ML Net through an API endpoint. The input data is the number of days from today to predict and whether the day will be cloudy (Image 6).

Image6

Image 6: Prediction page

The API result contains the date, the group, and the probability that the date belongs to that group; it also shows a table with the probability for every group.

Model

In the real world, there are lots of variables to keep in mind if you want to implement a Weather Forecast prediction app, such as wind speed, temperature, the season, humidity, if it was cloudy, etc.(2)

The scope of this approach is to see how ML Net can handle a custom model; therefore, a simple custom model was created based on the temperature, the season, and whether the day is cloudy. The model uses the weather as a group of different forecasts, and the custom training model was designed as follows (Image 7):

  • Weather (Id): Every grouper has an ID, so the label to predict it is the ID.
  • Date: it is the feature of the date related to the weather
  • IsCloudy: it’s a Boolean feature that indicates the relationship between weather and clouds.
  • Season (Id): it is a feature that indicates the relationship between weather and season (Every season has an id)

Image7

Image 7: Training data section from ML Net wizard

You can get the data from files or SQL Server databases; in this case, the data was collected from a view in SQL Server.

Project Architecture Overview

The weather forecast application has two sites, a front-end and a back-end, and the data is stored in a SQL Server database (Image 8). With this overall approach, the system separates the responsibilities of the business logic, the data, and the user experience.

Image8

Image 8: Sites and database

Front-end

You can find the app repository on GitHub using the following URL: https://github.com/joseflorezr/trainingangularversion

The front-end repository contains an Angular 18 solution, which uses Angular Material to improve the user experience and routing for navigation. The solution contains the following components (Image 9):

  • Forecast-header: The top component of the page, it shows the title with its style.
  • Forecast-prediction: Contains the form for weather predictions and shows the results.
  • Forecast results: Contains the historical data.
  • Weather: Shows the groups of weather forecasts
  • Services: Connects to the API to get weather, forecasts, and predictions
  • Model: interfaces that map with the API

Image9

Image 9: Front-end components

Back-end

You can find the app repository on GitHub using the following URL: https://github.com/joseflorezr/WebApiMlNetWeatherForecast.

Image10

Image 10: Back End components

The API solution contains  the following projects:

  • TestWebAPi: Web API with the endpoints, contains 3 controllers, Weather, forecast, and WeatherForecast. WeatherForecast is an abstract class with the logger and the use case reference injection.
  • Business: Contains the classes that contain the business logic, based on the Use Case approach(4)
  • Model: It is the abstraction of the domain objects like Weather, Forecast, Season, and predicted forecast
  • Data: This library contains 2 parts:
    • The integration at the data level, the context with Entity Framework to get to the database.
    • The integration with ML Net: after the model is added to the solution, support files are scaffolded with the same name (MLForecastModel) but different suffixes:
      • mbconfig: contains the wizard that helps to change the settings.
      • consumption: a partial class that allows interaction with the model.
      • evaluate: a partial class that allows calculating the metrics.
      • mlnet: this file contains the knowledge base; it is important to ship this file with the API.
      • training: adds the training methods that support the creation of the file.

Database Project(3)

The data model abstracts Weather and Season as master entities with their descriptions, while Forecast is the historical table that contains the observation for a specific date (one row per day): the temperature, the season ID, and the weather ID.

Visual Studio contains a database project type that allows developers to create, modify, and deploy databases, and it can run scripts after deployment. To create the ML Net model, a view named WeatherForecast was used because it is easier to connect to the ML Net wizard. Image 11 shows the relationship between the tables.

Image11

Image 11: Database diagram

Database projects can be deployed using the SQL Schema Compare tool, and a post-build script creates the data for the database model. For this app, a script was executed to simulate forecast data from 1900-01-01 to 2024-06-04. The script uses random data, so the results will be different every time you populate the forecast table.

WeatherForecast view concentrates the data used by ML Net to create the model.

API Project

The API project exposes endpoints that support getting the groups (Weather controller), getting the historical forecast data (Forecast controller), and predicting (Forecast controller).

Image12

Image 12:  Web API Swagger

Note: The ML Net model file must be added as a resource of the API because, when the API uses the prediction functionality, the MLForecastModel class looks for the file at a specific path (this can be changed).

 Image13

Image 13: ML Net file location

Model Project

This project contains the DTOs that are transferred to the front-end. The Weather entity has the group description and the temperature ranges; the Season contains the description and the starting and ending months; the Forecast has the temperature, the date, whether the day was cloudy, and an ID; PredictedForecast inherits from Forecast and adds the score and the weather description (Image 14).

Image14

Image 14: Entities
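
As a rough illustration of the entities just described, the DTOs might be shaped like the sketch below. These class shapes are inferred from the description above; the exact property names live in the repository.

using System;

// Hypothetical shapes of the DTOs described above; actual definitions may differ.
public class Weather
{
    public int Id { get; set; }
    public string Description { get; set; }   // group description
    public double MinRange { get; set; }      // minimum temperature of the group
    public double MaxRange { get; set; }      // maximum temperature of the group
}

public class Season
{
    public int Id { get; set; }
    public string Description { get; set; }
    public int StartMonth { get; set; }
    public int EndMonth { get; set; }
}

public class Forecast
{
    public int Id { get; set; }
    public DateTime Date { get; set; }
    public double Temperature { get; set; }
    public bool IsCloudy { get; set; }
}

public class PredictedForecast : Forecast
{
    public float Score { get; set; }               // probability of the predicted group
    public string WeatherDescription { get; set; }
}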

ML Net creates the MLForecastModel class, which contains the methods for using the prediction model (the exact methods depend on the chosen scenario). In general terms, the idea is to send an input object (defined by ML Net) and receive results as follows (a small consumption sketch is included after the list):

  • For a single object, use the Predict method, it will return the score for the predicted label.
  • If you want to get the labels, use the GetLabels method, it will return all the labels as an IEnumerable.
  • If you want to evaluate all labels, PredictAllLabels is the method, it will return a sorted IEnumerable with key-value pairs (label and score)
  • If you want to map an unlabeled result, use the GetSortedScoresWithLabels, it will return a sorted IEnumerable with key-value pairs (label and score)
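
A minimal sketch of how the generated consumption class might be called; the ModelInput column names mirror the training view (Date, IsCloudy, Season) and the numeric encodings shown here are assumptions:

using System;

// Assumes the MLForecastModel consumption class scaffolded by ML.NET Model Builder.
var input = new MLForecastModel.ModelInput
{
    Date = (float)DateTime.Today.ToOADate(),  // assumed numeric encoding of the date feature
    IsCloudy = true,
    Season = 2f                               // assumed season id feature
};

// Single prediction: the most likely label (a Weather id) and its score.
var best = MLForecastModel.Predict(input);

// All labels with their scores, already sorted by probability.
foreach (var kvp in MLForecastModel.PredictAllLabels(input))
{
    Console.WriteLine($"Weather {kvp.Key}: {kvp.Value:P1}");
}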

The PredictAsync method (Image 15) creates the input object, starting from the user input (ID, days, cloudy): it computes the projected date by adding the days and then finds the season ID based on the month (GetSeason method). Once the input object is complete, the chosen method is PredictAllLabels. In this case, the label is a weather ID, so the description had to be fetched from the database for every returned label.

Image15

Image 15: PredictAsync Implementation
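
The actual implementation is shown in Image 15; as a rough reconstruction of the flow described above (the lookup delegates, the date encoding, and the method signature are placeholders, not the author's code):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class ForecastPredictor
{
    // Rough reconstruction of the described PredictAsync flow, not the code from Image 15.
    public static async Task<IReadOnlyList<PredictedForecast>> PredictAsync(
        int days,
        bool isCloudy,
        Func<int, Task<Weather>> getWeatherById,  // stands in for the data-layer lookup
        Func<int, float> getSeasonId)             // stands in for the month-to-season lookup
    {
        var projectedDate = DateTime.Today.AddDays(days);       // today plus the requested days

        var input = new MLForecastModel.ModelInput
        {
            Date = (float)projectedDate.ToOADate(),             // assumed numeric date encoding
            IsCloudy = isCloudy,
            Season = getSeasonId(projectedDate.Month)           // season id derived from the month
        };

        var results = new List<PredictedForecast>();
        foreach (var kvp in MLForecastModel.PredictAllLabels(input))
        {
            var weather = await getWeatherById(int.Parse(kvp.Key));  // label is a Weather id
            results.Add(new PredictedForecast
            {
                Date = projectedDate,
                IsCloudy = isCloudy,
                Score = kvp.Value,
                WeatherDescription = weather.Description
            });
        }

        return results.OrderByDescending(r => r.Score).ToList();
    }
}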

Summary

  • You can use ML NET to create your own Machine Learning models and use them as part of your API solution.
  • There are multiple options (scenarios) to choose from according to your needs.
  • Models can be created using diverse sources, such as Database objects, or files.

References

  1. https://learn.microsoft.com/en-us/dotnet/machine-learning/how-does-mldotnet-work
  2. https://content.meteoblue.com/en/research-education/specifications/weather-variables
  3. https://visualstudio.microsoft.com/vs/features/ssdt/
  4. https://medium.com/@pooja0403keshri/clean-architecture-a-comprehensive-guide-with-c-676beed5bdbb
  5. https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-8.0

 

 

Computational Complexity Theory
https://blogs.perficient.com/2024/09/10/computational-complexity-theory/
Tue, 10 Sep 2024 14:40:48 +0000

Computational complexity studies the efficiency of algorithms. It helps classify algorithms in terms of time and space to identify the amount of computing resources needed to solve a problem. The Big O, Big Ω, and Big Θ notations are used to describe the asymptotic behavior of an algorithm as a function of the input size. In computer science, computational complexity theory is fundamental to understanding the limits of how efficiently an algorithm can be computed.

This paper seeks to determine when an algorithm provides solutions in a short computational time, and to identify those that require long computational times and can be categorized as intractable or unsolvable, using polynomial functions as a classical representation of computational complexity. It explains some mathematical notations used to represent computational complexity, its mathematical definition from the perspective of function theory and predicate calculus, and the complexity classes and their main characteristics. Mathematical expressions can describe the time behavior of a function and show its computational complexity. In a nutshell, we can compare the behavior of an algorithm over time with a mathematical function such as f(n), f(n^2), etc.

In logic and algorithms, there has always been a search for how to measure execution time, calculate the computational time to store data, determine whether an algorithm generates a cost or a benefit in solving a problem, or design algorithms that generate a viable solution.

Asymptotic notations

What is it?

Asymptotic notation describes how an algorithm behaves over time, when its arguments tend to a specific limit, usually when they grow very large (tend to infinity). It is mainly used in the analysis of algorithms to show their efficiency and performance, especially in terms of execution time or memory usage as the size of the input data increases.

Asymptotic notation represents the behavior of an algorithm over time by comparing it with mathematical functions. If an algorithm has a loop that repeats different actions until a condition is fulfilled, it can be said to behave similarly to a linear function; if it has another loop nested within that one, it can be compared to a quadratic function.

How is an asymptotic notation represented?

Asymptotic notations can be expressed in 3 ways:

  • O(n): The term ‘Big O’ (or BigO) refers to an upper limit on the execution time of an algorithm and is used to describe the worst-case scenario. For example, if an algorithm is O(n^2) in the worst case, its execution time will increase proportionally to n^2, where n is the input size.
  • Ω(n): ‘Big Ω’ (or BigΩ) describes a lower limit on the execution time of an algorithm and is used to describe the best-case scenario. If an algorithm behaves as Ω(n), in the best case its execution time will grow at least proportionally to n.
  • Θ(n): ‘Big Θ’ (or BigΘ) refers to both an upper and a lower bound on the time behavior of an algorithm. It indicates that, regardless of the case, the execution time of the algorithm grows proportionally to the specified value. For example, if an algorithm is Θ(n log n), its execution time will grow proportionally to n log n in both bounds. Formal definitions of these three notations are given below.
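
For completeness, the standard formal definitions of these notations (for functions f and g defined over the positive integers) can be written as:

f(n) \in O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 > 0 : 0 \le f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0

f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 > 0 : 0 \le c \cdot g(n) \le f(n) \ \text{for all } n \ge n_0

f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \ \text{and} \ f(n) \in \Omega(g(n))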

In a nutshell, asymptotic notation is a mathematical representation of computational complexity. If we express an asymptotic notation in polynomial terms, it lets us see how the computational cost increases as a reference variable increases. For example, evaluating the polynomial function f(n) = n + 7, we conclude that this function has linear growth. Comparing this linear function with a second one given by g(n) = n^3 − 2, the function g(n) will have cubic growth as n gets larger.

Computational Complexity 1

Figure 1: f(n) = n + 7 vs g(n) = n^3 − 2

From a mathematical point of view, it can be stated that:

The function f(n) = O(n) and the function g(n) = O(n^3)

 

Computational complexity types

Finding an algorithm that solves a problem efficiently is crucial in the analysis of algorithms. To achieve this, we must be able to express the algorithm's behavior as a function; for example, if we can express the algorithm as a polynomial function f(n), a polynomial time bound can be established to determine the algorithm's efficiency. In general, a good algorithm design depends on whether it runs in polynomial time or less.

Frequency counter and arithmetic sum and bounding rules

To express an algorithm as a mathematical function and know its execution time, it is necessary to find an algebraic expression that represents the number of executed instructions of the algorithm. The frequency counter is a polynomial representation that is used throughout the study of computational complexity. Below are some simple examples in C# showing how to calculate the computational complexity of algorithms, using Big O because it expresses computational complexity in the worst-case scenario.

Constant computational complexity

Let’s analyze a function that adds two numbers and returns the result of the sum:

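The original code screenshot is not reproduced here; a minimal C# sketch of the kind of function described (names are illustrative, not the author’s original code) might look like this:

```csharp
// Adds two numbers and returns the result of the sum.
// Every line executes exactly once, so the cost of each line is O(1).
static int Sum(int a, int b)
{
    int result = a + b; // O(1)
    return result;      // O(1)
}
```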

Using Big O notation for each instruction in the above algorithm, we can determine how many times each line of code is executed. In this case, each line is executed only once. To determine the computational complexity, or Big O, of the algorithm, the complexities of the individual instructions are summed:

O(1) + O(1) = O(2)

Since the result is the constant 2, the running time of the algorithm is constant, i.e. O(1).

Linear computational complexity

Now let’s look at another example with a slightly more complex algorithm. We need to traverse an array containing the numbers from 1 to 100 and compute the total sum of all its elements:

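Again, the screenshot is not reproduced; a C# sketch of the kind of loop described (illustrative names, with the array assumed to hold the values 1 to 100) could be:

```csharp
// Sums all elements of an array (here, the numbers 1 to 100, so n = 100).
static int SumArray(int[] numbers)
{
    int total = 0;                              // executed once: O(1)
    for (int i = 0; i < numbers.Length; i++)    // condition and body repeat n times
    {
        total += numbers[i];                    // executed n times: O(n)
    }
    return total;                               // executed once: O(1)
}
```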

In this algorithm, the initialization and the return statement are executed only once, while the loop condition and the loop body are repeated n times, until 100 iterations are reached (n = 100, the size of the array). To calculate the computational cost of the algorithm, we proceed as follows:

O(1) + O(n) + O(n) + O(1) = O(2n + 2)

From this result, it can be stated that the algorithm runs in linear time, given that O(2n + 2) ≈ O(n). Let’s analyze another, similar algorithm, but with two loops one after the other. These algorithms are those whose execution time depends linearly on two variables, n and m: the running time is proportional to the sum of the sizes of two independent inputs. The computational complexity for this type of algorithm is O(n + m).

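A hedged C# sketch of an algorithm with two consecutive, independent while loops (the summing task and the names are assumptions, not the original listing):

```csharp
// Processes two independent inputs of sizes n and m with consecutive loops.
static int SumTwoArrays(int[] first, int[] second)
{
    int total = 0;
    int i = 0;
    while (i < first.Length)    // condition evaluated n + 1 times
    {
        total += first[i];      // executed n times
        i++;
    }
    int j = 0;
    while (j < second.Length)   // condition evaluated m + 1 times
    {
        total += second[j];     // executed m times
        j++;
    }
    return total;               // overall cost ≈ O(n + m)
}
```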

In this algorithm, the two loops are independent: the first while condition is evaluated n + 1 times and the second m + 1 times, with n ≠ m. Therefore, the computational cost is given by:

O(7) + O(2n) + O(2m) ≈ O(n + m)

Quadratic computational complexity

For the third example, the computational cost for an algorithm containing nested cycles is analyzed:

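A minimal C# sketch of the nested-loop pattern being analyzed (the counting task and the names are assumptions):

```csharp
// Nested while loops: the inner loop runs n times for each of the n outer iterations.
static int CountPairs(int[] values)
{
    int count = 0;
    int i = 0;
    while (i < values.Length)           // outer condition: n + 1 evaluations
    {
        int j = 0;
        while (j < values.Length)       // inner condition: n(n + 1) evaluations in total
        {
            count++;                    // executed n * n times
            j++;
        }
        i++;
    }
    return count;                       // overall cost ≈ O(n²)
}
```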

The conditions of while and do-while loops are evaluated n + 1 times; compared with a foreach loop, these loops perform one extra step: validating the condition that terminates the loop. The inner loop repeats n times and performs its corresponding validation for each of the n outer iterations, so the computational complexity at that point is n(n + 1). In the end, the computational complexity of this algorithm works out as follows:

O(6) + O(4n) + O(2n²) = O(2n² + 4n + 6) ≈ O(n²)

Logarithmic computational complexity

  • Logarithmic complexity in base 2 (log₂(n)): Algorithms with logarithmic complexity O(log n) grow very slowly compared to other complexity classes such as O(n) or O(n²). Even for large inputs, the number of operations barely increases. Let us analyze the following algorithm:

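The screenshot is replaced here by a C# sketch that matches the halving behavior analyzed below (variable names are assumed):

```csharp
using System;

// Halves n until it reaches 1, counting the iterations with k.
static void HalvingLoop()
{
    int n = 64;                // O(1)
    int k = 0;                 // O(1); k holds 0..5 at the start of successive iterations
    while (n > 1)              // condition evaluated log2(n) + 1 times
    {
        n = n / 2;             // executed log2(n) times
        k = k + 1;             // executed log2(n) times
    }
    Console.WriteLine(k);      // O(1): prints 6, since log2(64) = 6
}
```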

Using a table, let us analyze the step-by-step execution of the algorithm proposed above:

 


Table 1: Logarithmic loop algorithm execution

If you examine the sequence in Table 1, you can see that its behavior follows a logarithmic pattern. A logarithm is the exponent to which a base must be raised to obtain another number. For example, log₁₀(100) = 2 because 10² = 100. For the proposed algorithm, it is clear that base 2 must be used:

64/2 = 32

32/2 = 16

16/2 = 8

8/2 = 4

4/2 = 2

2/2 = 1

It can be calculated that log₂(64) = 6, which means the loop has been executed six (6) times (i.e., when k takes the values {0, 1, 2, 3, 4, 5}). This confirms that the while loop of this algorithm runs log₂(n) times, and the computational cost is:

 

O(1) + O(1) + O(log₂(n) + 1) + O(log₂(n)) + O(log₂(n)) + O(1)

= O(4) + O(3·log₂(n)) ≈ O(log₂(n))

  • Linearithmic complexity (n log(n)): O(n log(n)) algorithms have an execution time that grows in proportion to the product of the input size n and the logarithm of n. When the input size doubles, the execution time slightly more than doubles, because the logarithmic factor grows very slowly. This type of complexity is less efficient than O(n) but considerably more efficient than O(n²).

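The original listing is not available; the following C# sketch illustrates the kind of recursive division-counting the text describes, assuming a merge-sort-style split with O(n) copying work per level:

```csharp
// Recursively splits the array in half and counts the divisions,
// much like merge sort without the sorting step.
static int CountDivisions(int[] data)
{
    if (data.Length <= 1)
        return 0;                       // base case: nothing left to split

    int middle = data.Length / 2;

    // Copying the halves touches every element once per recursion level: O(n) per level.
    int[] left = data[..middle];
    int[] right = data[middle..];

    // Two recursive calls on halves -> roughly log2(n) levels of recursion.
    return 1 + CountDivisions(left) + CountDivisions(right);
}
```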

 

O(log(n)) recursion levels × O(n) operations per level ≈ O(n log(n))

The algorithm proposed above is analogous to merge sort: it performs a similar division, but instead of sorting the elements it counts the possible divisions into subgroups. The complexity of this algorithm is O(n log(n)) because the recursion produces about log(n) levels and n operations are performed at each level until the base case is reached.

Finally, in a summary graph (Figure 2), you can see the behavior of the number of operations performed by functions of different computational complexities as the number of elements grows.

Example

An integration service is periodically executed to retrieve customer IDs associated with four or more companies registered with a parent company. The process performs individual queries for each company, accessing various databases that use different persistence technologies. As a result, an array of data containing the customer IDs is generated without checking or removing possible duplicates.

In this case, the initial approach would involve comparing each customer ID with all the other elements in the array, resulting in a quadratic number of comparisons, i.e., O(n²):

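A hedged C# sketch of this naive quadratic approach (method and variable names are illustrative, not the original code):

```csharp
using System.Collections.Generic;

// Naive de-duplication: compare each customer ID against every ID kept so far.
static List<int> RemoveDuplicatesNaive(int[] customerIds)
{
    var unique = new List<int>();
    for (int i = 0; i < customerIds.Length; i++)     // outer loop: n iterations
    {
        bool alreadySeen = false;
        for (int j = 0; j < unique.Count; j++)       // inner loop: up to n iterations
        {
            if (unique[j] == customerIds[i])
            {
                alreadySeen = true;
                break;
            }
        }
        if (!alreadySeen)
            unique.Add(customerIds[i]);
    }
    return unique;                                   // overall cost ≈ O(n²)
}
```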

In a code review, the author of this algorithm would be advised to optimize the current approach because of its inefficiency. To avoid the problems caused by the nested loops, a more efficient approach is to use a HashSet. Here is how this object can be used to improve performance, reducing the complexity from O(n²) to O(n):

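A sketch of the HashSet-based version (again, the names are assumptions rather than the original listing):

```csharp
using System.Collections.Generic;

// De-duplication with a HashSet: each Add/lookup is O(1) on average.
static List<int> RemoveDuplicatesWithHashSet(int[] customerIds)
{
    var seen = new HashSet<int>();
    var unique = new List<int>();
    foreach (int id in customerIds)   // single pass: n iterations
    {
        if (seen.Add(id))             // Add returns false when the ID is already present
            unique.Add(id);
    }
    return unique;                    // overall cost ≈ O(n)
}
```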

Currently, in C# you can go one step further and use LINQ over an IEnumerable, which allows you to perform the same task in a single line of code. However, a few clarifications must be made about this approach:

  • Previously, it was noted that a single line of code can be interpreted as having O(1) complexity. This case is different: the Distinct function traverses the original collection and returns a new sequence containing only the unique elements, removing duplicates internally with a HashSet, which, as mentioned earlier, results in O(n) complexity.
  • The HashSet also has a drawback: in the worst case, when hash collisions are frequent, the overall complexity can degrade to O(n²). However, this is extremely rare and typically depends on the quality of the hash function and the characteristics of the data in the collection.

The correct approach should be:

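With LINQ, the same result can be expressed in a single line; a sketch of that approach:

```csharp
using System.Collections.Generic;
using System.Linq;

// Distinct traverses the sequence once, using a set internally, so this is still O(n).
static List<int> RemoveDuplicates(IEnumerable<int> customerIds)
{
    return customerIds.Distinct().ToList();
}
```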

Conclusions

In general, we can reach three important conclusions about computational complexity.

  • Computational complexity is essential for evaluating and comparing the efficiency of different algorithms. It helps us understand how the execution time or resource usage (such as memory) of an algorithm grows with the input size. This analysis is essential for choosing the most appropriate algorithm for a particular problem, especially when working with large amounts of data.
  • Algorithms with lower computational complexity can improve system performance significantly. For example, choosing an O(n log n) algorithm instead of an O(n²) one can have a major impact on the time required to process large amounts of data. Efficient algorithms are essential to ensure that a system is fast and scalable in real-world applications such as search engines, image processing, and big data analytics.

Figure 2: Operations vs. Elements

 

  • Understanding computational complexity helps developers and data scientists design and optimize algorithms, making it possible to find bottlenecks and performance improvements. Computational complexity analysis also enables informed trade-offs between execution time and the use of other resources, such as memory, by adapting the algorithm design to the specific needs of the problem and the constraints of the execution environment.

Increasing Threat of Cyberattacks is Causing Energy Companies to Bolster Security https://blogs.perficient.com/2024/08/30/increasing-threat-of-cyberattacks-is-causing-energy-companies-to-bolster-security/ https://blogs.perficient.com/2024/08/30/increasing-threat-of-cyberattacks-is-causing-energy-companies-to-bolster-security/#respond Fri, 30 Aug 2024 15:50:08 +0000 https://blogs.perficient.com/?p=368464

A major energy and utilities supplier has become the latest victim in a growing list of organizations targeted by cyberattacks. Without a quick response to an attack like this, energy companies can risk exposing customer data, cutting off energy supply, slowing or completely stopping operations, and more. 

According to the Department of Energy, the recent incident was responded to quickly, and had minimal lasting impact. However, these attacks are becoming increasingly frequent across industries, and the risks continue to grow. Let’s focus on one of the most common types of cybercrime: ransomware. 

Are Your Systems Susceptible to Malware? 

Ransomware attacks are pervasive across sectors, affecting organizations like Colonial Pipeline, JBS Foods, and Kaseya. The most frequently targeted industries range from energy and finance to healthcare and entertainment. Malicious software, better known as malware, compromises network integrity by gaining access through phishing, stolen passwords, and other vulnerabilities. 

Ransomware-as-a-Service is a cybercrime business model with a modular structure and low barriers to entry, creating a wide market of perpetrators. These individuals are divided into developers who create the malware and affiliates who initiate the attacks, with profits split between them. 

It is crucial to be vigilant, with the most common defense being routine basic cybersecurity hygiene, such as implementing multi-factor authentication. Other tactics include adopting Zero Trust principles and preparing for potential attacks to minimize impact. While a good defense is wise, it is still essential to have a strong relationship between the government and private sector, with collaboration being of utmost importance. Companies must share information about breaches and their efforts to disrupt infrastructure with the support of law enforcement. 

Three Simple Ways to Prevent Cyberattacks 

Now that we have identified what makes malware like ransomware possible, let us address the best ways to avoid becoming a victim. We have broken the solution down into a few simple steps: 

  1. Be prepared with a recovery plan – Make it incredibly challenging to access and disrupt your system. If you make an attack economically unfeasible, you have already avoided the threat. The goal is to avoid paying the ransom for privileges that might not be returned or using keys provided by attackers to regain access. While restoring corrupted systems can be burdensome, it is better than the alternative. 
  2. Limit the scope of damage – By limiting privileged access roles, you reduce the number of entry points for attackers to acquire access to critical components of your business. If they can only gain access to pieces rather than the entire system, it will deter attackers from pursuing an escalated attack. 
  3. Challenge cybercriminals as much as possible – This step should not interfere with steps 1 or 2, but it is essential to create as much friction as possible for potential attacks. Make it an uphill battle for intruders attempting to gain remote access, emails, endpoints, or accounts. If they do manage to get in, ensure they cannot escalate their privileges by implementing robust detection and response capabilities. 

Perficient’s team of experts is well-versed in these incidents and what can be done to prevent them. If you would like to begin mounting more serious defenses, explore our energy industry expertise and browse the many technology partners with which we work to give companies confidence in their security, like Microsoft. 

Maximize Your PPCC24 Experience with Perficient: Insights, Innovation, and Impact https://blogs.perficient.com/2024/08/26/maximize-your-ppcc24-experience-with-perficient-insights-innovation-and-impact/ https://blogs.perficient.com/2024/08/26/maximize-your-ppcc24-experience-with-perficient-insights-innovation-and-impact/#respond Mon, 26 Aug 2024 17:12:43 +0000 https://blogs.perficient.com/?p=368082

The Power Platform Community Conference 2024 in Las Vegas is fast approaching, and it’s shaping up to be one of the most impactful events of the year for anyone involved in digital transformation. Whether you’re a seasoned professional or just getting started with Microsoft’s Power Platform, this conference offers unparalleled opportunities to learn, connect, and grow. At Perficient, we’re excited to share our expertise, showcase our success stories, and connect with you to explore how we can help you maximize your Power Platform investment. Here’s everything you need to know to make the most of this conference, from what to expect to why you should engage with Perficient.

What is the Power Platform Community Conference?

The Power Platform Community Conference (PPCC) is the premier event for professionals who use or are interested in Microsoft’s Power Platform. This annual gathering brings together thousands of developers, business leaders, and technology enthusiasts from around the world to explore the latest trends, tools, and best practices in Power Platform. PPCC 2024 is set to showcase cutting-edge AI innovations, building on the success of previous years. It offers more than 150 sessions and keynotes, along with 20 hands-on workshops, and opportunities to connect with and gain insights from Microsoft thought leaders, product experts and developers, MVPs, and peers.

Key Takeaways from Last Year’s Conference

The 2nd annual Power Platform Community Conference in 2023 was a major success, highlighting the growing momentum behind low-code development. Some key takeaways include:

  • Low-Code Momentum: The 2023 conference underscored the rapid expansion of the low-code market, with Power Platform playing a central role in enabling organizations to innovate quickly and efficiently.
  • AI-Powered Solutions: There was a significant focus on integrating AI with Power Platform, particularly through tools like AI Builder and Power Automate. These advancements are helping organizations automate more complex tasks, driving efficiency, and reducing manual work.
  • Community and Collaboration: The strength of the Power Platform community was a key theme, with thousands of professionals collaborating to share insights, solutions, and best practices.

What’s New for the 2024 Conference?

The 2024 conference will build on these themes, with an even stronger focus on AI-driven innovation. Microsoft plans to unveil several new AI features designed to help users automate more complex tasks and gain deeper insights from their data. The conference will highlight how generative AI advancements can be integrated seamlessly with existing Power Platform solutions to enhance productivity and efficiency.

This year, you can expect:

  • Showcasing AI Innovations: New AI capabilities in Copilot Studio, Power Automate, Power BI, and AI Builder that simplify the implementation of intelligent automation and analytics solutions.
  • Hands-On Labs and Networking: Continued opportunities to engage directly with the technology through hands-on labs and to connect with other professionals and experts in the field.
  • Expert-Led Sessions: Sessions led by industry experts focused on how AI is transforming the approach to digital transformation.

For more details on what to expect from this year’s conference, check out Microsoft’s announcement here.

Getting Registered

To register for the Power Platform Community Conference, visit the official conference registration page. Full conference passes start at $1,849 and will be raised to $1,899 after August 27th. You can add on one, two, or three full-day workshops for additional costs.

Once registered, take some time to plan your conference experience by reviewing the agenda and identifying which sessions align with your current projects or areas of interest.

Why Perficient Leads in Power Platform Solutions

At Perficient, our passion for Power Platform stems from its transformative impact across various industries. We’ve developed a proven track record, backed by 30+ certified experts and over 50 successful enterprise projects, delivering tangible results for our clients. Whether it’s implementing a Center of Excellence (COE) for a global auto manufacturer or building an automation program for a healthcare provider, our diverse industry experience allows us to craft tailored solutions that address unique business challenges.

We understand that every organization is at a different stage of its Power Platform journey. Whether you’re just starting or looking to optimize, our solutions and workshops are designed to align with your organization’s maturity level, ensuring you maximize your Power Platform investment.

Why Talk to Us at PPCC24

  1. Custom Solutions for Unique Challenges: We tailor our Power Platform solutions to meet your specific business needs, from app development to automation and data analytics.
  2. Deep Industry Insights: Our extensive experience across industries equips us with the insights needed to leverage Power Platform for addressing sector-specific challenges.
  3. Commitment to Long-Term Success: Beyond implementation, we offer ongoing support, maintenance, and optimization to ensure your Power Platform environment continues to deliver value as your business grows.

By connecting with Perficient at PPCC24, you’re not just getting a solution; you’re gaining a partner committed to your success.

We’re looking forward to the Power Platform Community Conference and hope to see you there. Be sure to visit us at booth #134, where you can learn more about our success stories, discuss your specific challenges, and discover how Perficient can help you harness the full potential of Power Platform. Let’s work together to turn your vision into reality.

For more information about our Power Platform capabilities, visit Perficient’s Power Platform page.

How to Navigate the VMware License Cost Increase https://blogs.perficient.com/2024/08/13/how-to-navigate-the-vmware-license-cost-increase/ https://blogs.perficient.com/2024/08/13/how-to-navigate-the-vmware-license-cost-increase/#respond Tue, 13 Aug 2024 12:28:41 +0000 https://blogs.perficient.com/?p=367246

VMware (Broadcom) has discontinued its VMware partner resell program. This announcement forces customers to move forward with one of three options:

  1. Buy directly from VMware,
  2. Migrate workloads to another hypervisor, or
  3. Make a platform change.

For many VMware customers, the price changes were abrupt, while others have the luxury of taking a little more time to explore their options.

 

 

The Cloud Advantage

As organizations reassess their IT strategies, the shift toward cloud architectures is becoming increasingly attractive. Cloud solutions, built specifically for the cloud environment, offer unparalleled flexibility, scalability, and cost efficiency. They allow businesses to take full advantage of modern infrastructure capabilities without being locked into the escalating costs of traditional on-premises solutions.

Making the Transition

At Perficient, we understand the complexities and challenges associated with such a significant transition. Our expertise in cloud consulting and implementation positions us as the ideal partner to help you navigate this critical shift. Our consultants have developed a comprehensive and flexible plan to assist you in maximizing the efficiency of your platform change.

Comprehensive Assessment and Strategy Development

Our team begins with a thorough assessment of your current IT infrastructure, evaluating the specific impact of the VMware cost increase on your operations. We then develop a tailored strategy that aligns with your business goals, ensuring a smooth and cost-effective transition to cloud solutions.

Migration Services

Moving from a VMware-based infrastructure to a cloud environment can be complex. Our migration services ensure a seamless transition with minimal disruption to your business operations. We employ best practices and proven methodologies to migrate your workloads efficiently and securely.

Ongoing Support and Operational Efficiency

Post migration, we provide ongoing support to ensure your cloud environment operates at peak efficiency. Our team continuously monitors and optimizes your infrastructure, helping you to maximize the return on your cloud investment.

Cost Management and Optimization

One of the key advantages of cloud migration is the potential for significant cost savings and licensing cost avoidance. Our cost management services help you to leverage cloud features to reduce expenses, such as auto-scaling, serverless computing, and efficient resource allocation.

Embracing the Cloud

Perficient stands ready to guide you through this transition, providing the expertise, tools, and support necessary to successfully navigate this change. Together, we can turn this challenge into a transformative opportunity for your business.

To learn more about how these changes might impact your organization and explore our detailed strategy for a smooth transition, visit our cloud page for further insights. Our team is here to help you every step of the way.

https://blogs.perficient.com/2024/08/13/how-to-navigate-the-vmware-license-cost-increase/feed/ 0 367246