Perficient Expert Digital Insights – Blogs (blogs.perficient.com)

API’d Like to Talk to You: A Dive into the OpenAI Assistant API
By Greg Jeffers (https://http418.dev) – June 6, 2024
Recently, I’ve had time to sit down and wade into my own little corner of the digital Wild West: AI integration. With the explosion of AI, I wanted to give my apps the ability to tap into that vast potential. 
 
While it seems like every tech giant, startup, and their mother is putting out an AI these days, I had to settle on one to develop against. I wanted something that let me create a custom assistant, had a robust REST API, and had a proven track record (at least as long a record as you can have in such a young field). Given the criteria I set out, OpenAI was the best vendor for the purpose at the time: specifically, a custom assistant built via platform.openai.com.
 
Background aside, here’s what you’ll need to follow along:
• A funded account on platform.openai.com (I’ve gotten by on ~$2US for the past 8 months)
• A ready assistant inside something other than the default project (We’re only doing text, so you don’t need the latest and greatest GPT as the base model. A cheaper model such as GPT-3.5 will do the job just as well and save you some coin in the process)
• A Project API key
 
While the API documentation (API Reference – OpenAI API) gives examples in Python, Node.js, and Curl commands, I’m a Microsoft stack sort of person, so I want to be able to converse with my AI via C# (like a reasonable person). I began the process by translating the curl commands to HttpClient calls. What follows are the calls needed to converse with your assistant. I will be using snippets along the way but will post the full file at the end of the article. So, let’s get into it!
 

Create the Conversation Thread

Threads are the conversation. A thread contains all the messages (see the Adding Messages section) that are run (see the Run section) in order to generate a new response. In addition to your AI’s directives, threads can be seeded with messages expected from the AI in order to provide greater context. We’ll touch on that more when we get to messages. 
 
In order to begin our conversation with our digital pal, we’ll first need to let it know we want to talk. We do this by creating a thread and getting a thread ID back from the AI. This ID is important! It will be used in all subsequent calls, so store it! The documentation for creating a thread can be found here: https://platform.openai.com/docs/api-reference/threads/createThread 
 

There are a couple of headers to configure before loading up the URI and firing off the request. The first is the Authorization header, a simple Bearer token scheme with your project API key as the token. The second is a custom header indicating we’re connecting to v2 of the Assistants beta API. 

_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);

_client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
Next thing to do is load up the URI and fire off a POST request. This request doesn’t require anything in the body, so we’ll send along null: 
response = _client.PostAsync("https://api.openai.com/v1/threads", null).Result;

So far, so good. Nothing most developers haven’t done dozens of times over. The tricky part comes with the response. Unfortunately, each endpoint returns a different JSON model. I’ve created a series of models in my project to deserialize each response into a POCO, which, at this point, I feel was overkill. I could have done this via JObjects and saved myself a few dozen lines of code. 

var threadIdResponse = response.Content.ReadAsStringAsync().Result;

if (!response.IsSuccessStatusCode)
{
     var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
     throw new AiClientException(errorResponse?.Error.Message);
}
var threadIdObj = JsonConvert.DeserializeObject<ThreadResponse>(threadIdResponse);
_threadId = threadIdObj?.Id ?? string.Empty;
return _threadId;

Here we’ve got the response, and it’s time to check and parse what we got back. In my error trap, I’ve got an exception called AiClientException. This is a new exception I created in the project that simply wraps Exception for better delineation on the client. If we’ve got a successful response, we deserialize it into a ThreadResponse object: 

public class ThreadResponse
{
   public string Id { get; set; }
   public string Object { get; set; }
   public long CreatedAt { get; set; }
   public object AssistantId { get; set; }
   public string ThreadId { get; set; }
   public object RunId { get; set; }
   public string Role { get; set; }
   public List<AiContent> Content { get; set; }
   public List<object> FileIds { get; set; }
   public Metadata Metadata { get; set; }
}
As you can see, quite a bit is returned that we won’t use. Of interest to us at this point is the Id field; this is the all-important thread ID. 
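For comparison, if you’d rather skip the POCOs, the thread ID can be pulled straight out of the JSON with JObject. This is just a minimal sketch of that alternative (using the same Newtonsoft.Json package), not the approach used in the rest of the article:

// Alternative: parse just the thread ID with JObject instead of a dedicated POCO.
using Newtonsoft.Json.Linq;

var threadJson = JObject.Parse(threadIdResponse);
var threadId = threadJson["id"]?.Value<string>() ?? string.Empty;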
 
Now we’ve created an empty thread with our assistant. Next, we need to load up a message for the AI to read; this is also where we can insert some seeded prompts. 
 

Adding Messages

Messages are, of course, the driver of this whole shebang. Without them, we’re just staring across the table at our AI assistant in silence. Normal conversation flow goes prompt -> response, like we see when using a vendor’s chat interface. Here we aren’t limited to such an immediate back and forth: we can load up multiple user prompts, or seed a conversation with user prompts and assistant responses, before sending them to the AI for a generated response. 
 
The first thing to do is validation. By this point in the process, there are a number of pieces that need to already be in place in order to add messages:       
if (string.IsNullOrEmpty(_apiKey)) throw new AiClientException("OpenAI ApiKey is not set");
if (string.IsNullOrEmpty(_threadId)) CreateThread(); 
if (string.IsNullOrEmpty(message)) throw new AiClientException("Message is empty");
Here, we’re checking that we have an API key (for the Authorization header), that we have a thread to put messages on (if not, we make one), and that we have an actual message to add. Next we load up our headers just as we did before, but this time we also need a message object to serialize into the request body.
_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
_client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
var messageRequest = new AiRequestMessage { Role = "user", Content = message };
Here AiRequestMessage is another POCO class I created, simply to help serialize the request. Not much to this one:
public class AiRequestMessage
{
   [JsonProperty("role")]
   public string Role { get; set; }
   [JsonProperty("content")]
   public string Content { get; set; }
}

Once the message object is created, we just need to serialize it, load it into our request, and send it off. There is not much useful information returned from the request; an HTTP 200 response indicates that the message was successfully added: 

var json = JsonConvert.SerializeObject(messageRequest);
var content = new StringContent(json, Encoding.UTF8,   "application/json");
response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/messages", content);
            
var threadIdResponse = response.Content.ReadAsStringAsync().Result;

if (!response.IsSuccessStatusCode)
{
   var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
   throw new AiClientException(errorResponse?.Error.Message);
}
As you can see, this is a rather simple call to the API. Once we get the response from the server, we check whether it was successful: if so, we do nothing; if not, we throw an exception. 
 
Now that we know how to add user messages… Wait, “How do we know they’re from the user?” you might ask. It’s right here:
var messageRequest = new AiRequestMessage { Role = "user", Content = message };
In the role property, the API recognizes two values, “user” and “assistant”, and it doesn’t care who adds them to the list. So it becomes a simple matter to add an argument or a new function for assistant messages that modifies the above line so that a message is created like so (or a functional equivalent):
var messageRequest = new AiRequestMessage { Role = "assistant", Content = message };
Using this ability, we’re able to assemble (or even recall) a conversation before even going to the AI. 
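As a concrete example, a small helper that takes the role as a parameter makes it easy to seed a thread with both sides of a prior exchange before the first run. The AddSeededMessage helper below is hypothetical (it is not part of the service shown later), and it assumes the headers have already been configured as shown above:

// Hypothetical helper: post a message to the thread with an explicit role.
// The endpoint accepts "user" and "assistant" as roles, as described above.
private async Task AddSeededMessage(string role, string message)
{
    var messageRequest = new AiRequestMessage { Role = role, Content = message };
    var json = JsonConvert.SerializeObject(messageRequest);
    var content = new StringContent(json, Encoding.UTF8, "application/json");
    var response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/messages", content);
    if (!response.IsSuccessStatusCode) throw new AiClientException("Failed to seed message");
}

// Seeding a prior exchange before the first run:
// await AddSeededMessage("user", "What timezone is the office in?");
// await AddSeededMessage("assistant", "The office is on Eastern Time (US).");
// await AddSeededMessage("user", "So what time is the 9 AM standup for London?");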
 
So far, we’ve created the container for the conversation (the thread) and our side of the conversation (the messages) on the OpenAI server. Now we would like to hear back from the AI. This is where the Run phase of the process comes in. 
 

Run

So we’re ready to start conversing with our assistant. Awesome! If you’ve ever worked with an AI before, you know the response times can be quite lengthy. How do we monitor this from our library? Personally, I went with short polling, as you’ll see below. I did this for ease of implementation; other methods are available, including opening a stream with OpenAI’s server, but that’s outside the scope of this post. 
 
As with the other requests, we’ll need to load up our headers:
_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
_client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
Next we need a request body that consists of just the assistant ID. Here again, I’ve created a POCO to aid in serialization, which is probably overkill for a single property:
var custAsst = new Assistant { assistant_id = _assistantId };
var json = JsonConvert.SerializeObject(custAsst);
var content = new StringContent(json, Encoding.UTF8, "application/json");
After loading that into our request body and sending off the request, it’s time to wait. Depending on the prompt, an AI response can take quite a while, which could cause a single request to time out. My solution was short polling, seen here:
response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/runs", content);
var responseContent = await response.Content.ReadAsStringAsync();
var responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
var runId = responseObj?.Id;
var runStatus = responseObj?.Status;
//if not completed, poll again
if (runId != null)
{
    while (runStatus != null && !FinalStatuses.Contains(runStatus))
    {
        await Task.Delay(1000);
        response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/runs/{runId}");
        responseContent = response.Content.ReadAsStringAsync().Result;
        responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
        runStatus = responseObj?.Status;
    }
}

await GetResponse();
Here, I’ve got the terminal states for the Run process (https://platform.openai.com/docs/api-reference/runs/object) in a FinalStatuses list, which is checked on every poll until the job has finished. Once we’ve received a terminal status, we know it’s safe to retrieve the updated messages, which should now include the response from the assistant. 
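The RunResponse and Assistant POCOs referenced in these snippets aren’t shown anywhere in the article, so here are minimal versions, assuming we only care about the run’s ID and status (the API returns many more fields that can simply be ignored):

// Minimal request body for creating a run; the property name matches the API's snake_case field.
public class Assistant
{
    public string assistant_id { get; set; }
}

// Minimal response model for a run; only the fields the polling loop uses.
public class RunResponse
{
    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("status")]
    public string Status { get; set; }
}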
 

Get AI Response

In order to grab the latest message from the server, we need to call back to the messages endpoint for the thread. This will return all the messages we sent to the server, along with the latest response. First we load up our headers and fire off a GET request to the messages endpoint with our thread ID in the URI:
HttpResponseMessage response;
using (_client = new HttpClient())
{
   _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
    _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
    response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/messages");
}

The response that is returned from this request is more complex than anything we've seen up to this point and requires a bit more handling in order to extract the messages:

var responseContent = response.Content.ReadAsStringAsync().Result;
try
{
  var data = JsonConvert.DeserializeObject<ChatResponse>(responseContent);
  _messages.Clear();
  _messages = data?.Data.Select(x => new AiContent() { Type = x.Role, Text = x.Content[0].Text }).ToList() ?? new List<AiContent>();
}
catch (Exception ex)
{
  throw new AiClientException("Error retrieving messages");
}
I parse the response into a ChatResponse object that contains the messages as well as metadata. The messages are nested in a class within the ChatResponse class. To simplify the code for a blog post, I’m just replacing the entire list of messages within the service on every response. Here is the ChatResponse class with its nested Data class for messages:
public class ChatResponse
{
   public List<Data> Data { get; set; }
   public string FirstId { get; set; }
   public string LastId { get; set; }
   public bool HasMore { get; set; }
}

public class Data
{
   public string Id { get; set; }
   public string Object { get; set; }
   public long CreatedAt { get; set; }
   public string AssistantId { get; set; }
   public string ThreadId { get; set; }
   public string RunId { get; set; }
   public string Role { get; set; }
   public List<AiContent> Content { get; set; }
   public List<object> FileIds { get; set; }
   public Metadata Metadata { get; set; }
}
In the ChatResponse class, you can see that the top-level fields supply the list of message records, Data, as well as the IDs of the first and last messages. (You could use the latest ID to grab just the assistant response if that is a better fit for your use case.) While the Data class contains the metadata for each message, the message content is stored in Data’s Content property. Even that isn’t the end: the JSON breaks down further into an object holding the role and another class for the response text, which I’ve called AiContent. 
public class AiContent
{
  public string Type { get; set; }
  public Text Text { get; set; }
}
public class Text
{
  public string Value { get; set; }
  public List<object> Annotations { get; set; }
}
Once you’ve fished the messages out of the response, you’re free to do with them as you will. My simple MVC client just dumps the new list of messages to the user. 
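For illustration, a consuming MVC controller could look something like the sketch below. The controller, action, and view names are hypothetical, and it assumes the service exposes its message list through the IAiService interface (the article’s snippets keep that list private):

// Hypothetical MVC controller that forwards a prompt to the assistant service
// and dumps the resulting message list to a view.
using AiClients.Interfaces;
using Microsoft.AspNetCore.Mvc;

public class ChatController : Controller
{
    private readonly IAiService _aiService;

    public ChatController(IAiService aiService) => _aiService = aiService;

    [HttpPost]
    public async Task<IActionResult> Send(string prompt)
    {
        await _aiService.AddMessage(prompt); // AddMessage kicks off the run and waits for the reply
        return View("Conversation", _aiService.Messages); // assumes a Messages property on the interface
    }
}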
 

Furthering the Project

Besides the points I mentioned above, there is definitely room for improvement in this code. I created these snippets from a POC I’ve been working on, so they very likely aren’t production-ready as-is. A few areas I feel could be improved:
 
• Streaming between the caller and OpenAI – OpenAI offers a streaming response rather than plain request/response HTTP. Going this route would remove the polling code from the project and provide a closer-to-real-time response to the library (see the sketch after this list)
 
• SignalR instead of HttpClient – Used in conjunction with OpenAI’s streaming, this would provide partial responses to the caller as the assistant generates them
 
• Add file upload – As AIs get more complex, simple prompts may no longer be enough. Providing a file has the potential to give the assistant more comprehensive context

• Add photo generation – Who doesn’t like playing with the photo generator provided by most AIs?
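On the streaming point, the general shape of consuming a server-sent-events response with HttpClient is sketched below. This assumes the runs endpoint accepts a stream flag and emits SSE "data:" lines; the exact event payload format should be checked against OpenAI’s streaming documentation before relying on it:

// Rough sketch: request a streamed run and read the server-sent events line by line.
// The "stream" flag and the payload format are assumptions to verify against the docs.
var request = new HttpRequestMessage(HttpMethod.Post, $"https://api.openai.com/v1/threads/{_threadId}/runs")
{
    Content = new StringContent(
        JsonConvert.SerializeObject(new { assistant_id = _assistantId, stream = true }),
        Encoding.UTF8, "application/json")
};

using var response = await _client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
using var stream = await response.Content.ReadAsStreamAsync();
using var reader = new StreamReader(stream);

while (!reader.EndOfStream)
{
    var line = await reader.ReadLineAsync();
    if (string.IsNullOrWhiteSpace(line) || !line.StartsWith("data:")) continue;
    var payload = line.Substring("data:".Length).Trim();
    if (payload == "[DONE]") break;
    // TODO: deserialize the event payload and surface the partial text to the caller.
}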

 

Full File

using AiClients.Exceptions;
using AiClients.Interfaces;
using AiClients.Models;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;
using System.Net.Http.Headers;
using System.Text;
namespace CustomGptClient.Services
{
    public class AssistantService : IAiService
    {
        private string _threadId;
        private IConfiguration _config;
        private string _apiKey;
        private string _assistantId;
        private List<AiContent> _messages;
        private string _assistantName;
        private HttpClient _client;
        private List<string> FinalStatuses = new List<string> { "completed", "failed", "cancelled", "expired" };
        public AssistantService(IConfiguration configuration)
        {
            _config = configuration;
            _apiKey = _config.GetSection("OpenAI:ApiKey")?.Value ?? string.Empty;
            _assistantId = _config.GetSection("OpenAI:AssistantId")?.Value ?? string.Empty;
            _messages = new List<AiContent>();
        }

        private string CreateThread()
        {
            if (string.IsNullOrEmpty(_apiKey)) throw new AiClientException("OpenAI ApiKey is not set");
            HttpResponseMessage response;
            using (var _client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                response = _client.PostAsync("https://api.openai.com/v1/threads", null).Result;
            }
            var threadIdResponse = response.Content.ReadAsStringAsync().Result;
            if (!response.IsSuccessStatusCode)
            {
                var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
                throw new AiClientException(errorResponse?.Error.Message);
            }
            var threadIdObj = JsonConvert.DeserializeObject<ThreadResponse>(threadIdResponse);
            _threadId = threadIdObj?.Id ?? string.Empty;
            return _threadId;
        }
        public async Task AddMessage(string message)
        {
            if (string.IsNullOrEmpty(_apiKey)) throw new AiClientException("OpenAI ApiKey is not set");
            if (string.IsNullOrEmpty(_threadId)) CreateThread(); 
            if (string.IsNullOrEmpty(message)) throw new AiClientException("Message is empty");
            HttpResponseMessage response;
            using (_client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                var messageRequest = new AiRequestMessage { Role = "user", Content = message };
                var json = JsonConvert.SerializeObject(messageRequest);
                var content = new StringContent(json, Encoding.UTF8, "application/json");
                response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/messages", content);
            }
            var threadIdResponse = response.Content.ReadAsStringAsync().Result;
            if (!response.IsSuccessStatusCode)
            {
                var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
                throw new AiClientException(errorResponse?.Error.Message);
            }
            var threadIdObj = JsonConvert.DeserializeObject<ThreadResponse>(threadIdResponse);
            await CreateRun();
        }
        public async Task CreateRun()
        {
            HttpResponseMessage response;
            using (_client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                var custAsst = new Assistant { assistant_id = _assistantId };
                var json = JsonConvert.SerializeObject(custAsst);
                var content = new StringContent(json, Encoding.UTF8, "application/json");
                response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/runs", content);
                var responseContent = await response.Content.ReadAsStringAsync();
                var responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
                var runId = responseObj?.Id;
                var runStatus = responseObj?.Status;
                //if not completed, poll again
                if (runId != null)
                {
                    while (runStatus != null && !FinalStatuses.Contains(runStatus))
                    {
                        await Task.Delay(1000);
                        response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/runs/{runId}");
                        responseContent = response.Content.ReadAsStringAsync().Result;
                        responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
                        runStatus = responseObj?.Status;
                    }
                }
            }
            await GetResponse();
        }
        public async Task GetResponse()
        {
            HttpResponseMessage response;
            using (_client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/messages");
            }
            var responseContent = response.Content.ReadAsStringAsync().Result;
            try
            {
                var data = JsonConvert.DeserializeObject<ChatResponse>(responseContent);
                _messages.Clear();
                _messages = data?.Data.Select(x => new AiContent() { Type = x.Role, Text = x.Content[0].Text }).ToList() ?? new List<AiContent>();
            }
            catch (Exception ex)
            {
                throw new AiClientException("Error retrieving messages");
            }
        }
    }
}

 

Crafting a Secure User Login Page with Next.js and Optimizely Commerce API: A Step-by-Step Guide
By Shashikant Bhoyar – May 21, 2024

In this blog, let’s dive into creating a secure user login page with Next.js and integrating it with Optimizely’s Commerce APIs. Here are the steps you can follow:

Setting Up Optimizely Configured Commerce Project:

  • First, ensure you have an Optimizely Configured Commerce project running on your local machine with the necessary APIs.
  • The hosted Optimizely Configured Commerce website URL is: http://b2b.local.com

Optimizely Configured Commerce Login APIs:

  1. API Generate Token (POST): http://b2b.local.com/identity/connect/token
    1. Learn How to Set Up a Single Sign-On (SSO) Client in the Optimizely Configured Admin Site:
    • Go to the Admin Console > Administration > Permissions > Single Sign On.
    • Click + Client to add a new client.
    • Enter any client ID in the Client ID field.
    • Enter any client name in the Client Name field.
    • Choose “Resource Owner” under the Flow option.
    • Set Enabled to “Yes”.
    • Set Require Consent to “Yes”. This prompts an intermediary page for users to grant permission to the application.
    • Set Access Website API to “Yes”.
    • Set Allow Refresh Tokens to “Yes”.
    • Enter http://b2b.local.com/home/codecallback for Redirect URLs.
    • Set all Token Lifetime fields to 7200 (2 hours).
    • Click the three-dot (…) menu in the top right corner and select Set Client Secret. Note the secret, as it will be used to request an access token for the Website API.
    • Copy the Client Secret shown in the popup; it will be used when logging in as the user.
    • Click Save.
    • Remember to restart the application after configuring SSO.

    We need two values from this setup for the Next.js login implementation below:

    • Client Name: b2blogin
    • Client Secret: 1cc03abc-2799-5a55-1558-753a882d8981
  2. API Login (POST): http://b2b.local.com/api/v1/sessions

Create Next.js project in VS Code:

Here are the steps to create a Next.js project using Visual Studio Code (VS Code):

  1. Install Node.js and npm: Ensure you have Node.js installed on your machine. You can download it from the official Node.js website and follow the installation instructions.
  2. Install Visual Studio Code (VS Code): Download and install Visual Studio Code from the official website. https://code.visualstudio.com/docs/setup/windows
  3. Open VS Code: Launch Visual Studio Code on your computer.
  4. Open the Terminal in VS Code: Open the integrated terminal in VS Code. You can do this by clicking on the Terminal menu and selecting “New Terminal.”
  5. Create a new Next.js project: In the terminal, navigate to the directory where you want to create your Next.js project using the cd command. For example:
    cd path/to/your/projects/folder
    Then, run the following command to create a new Next.js project:
    npx create-next-app project-name
    Replace project-name with the desired name for your Next.js project. Project Name: b2bCommerce
  6. Navigate to the project directory: After the project is created, navigate to the project directory using the cd command:
    cd project-name
  7. Open the project in VS Code: Once you are inside the project directory, open it in Visual Studio Code.
  8. Start the development server: In VS Code’s integrated terminal, start the development server for your Next.js project by running:
    npm run dev
    This command will start the Next.js development server and open your project in the default web browser.
  9. Verify the setup: After the development server starts, verify that your Next.js project is running correctly by checking the output in the terminal and accessing the project in your web browser at http://localhost:3000. The default home page should be displayed.
  10. Begin coding: You’re now ready to start coding your Next.js project in Visual Studio Code.

You can make changes to your project files, create new pages, components, styles, etc., and see the changes reflected in real-time in your browser while the development server is running.

To create a new login page in your Next.js project, follow these steps:

  1. Create a New Component for the Login Page: In your Next.js project, create a new component for the login page. You can create a folder named components if it doesn’t already exist, and then create a file for your login page component, for example LoginPage.tsx if you’re using TypeScript.
  2. Design the Login Form: Inside your LoginPage.tsx component, design the login form using HTML elements and React components. Here’s a basic example:
    // components/LoginPage.tsx
    import React, { useState } from 'react';

    const LoginPage = () => {
      // formData, handleInputChange and handleSubmit are wired up in the next step
      return (
        <div className="min-h-screen flex items-center justify-center bg-gray-100">
          <div className="bg-white p-8 shadow-md rounded-lg w-96">
            <h1 className="text-2xl font-semibold mb-4">Login</h1>
            <form onSubmit={handleSubmit}>
              <label htmlFor="username" className="block mb-2">Username:</label>
              <input
                type="text"
                id="username"
                name="username"
                value={formData.username}
                onChange={handleInputChange}
                className="w-full border border-gray-300 rounded-md px-3 py-2 mb-4"
              />
              <label htmlFor="password" className="block mb-2">Password:</label>
              <input
                type="password"
                id="password"
                name="password"
                value={formData.password}
                onChange={handleInputChange}
                className="w-full border border-gray-300 rounded-md px-3 py-2 mb-4"
              />
              <button type="submit" className="w-full bg-blue-500 text-white py-2 rounded-md hover:bg-blue-600">
                Login
              </button>
            </form>
          </div>
        </div>
      );
    };

    export default LoginPage;
  • Handle Form Submission: Add state and event handling to your login form component to handle form submission. You can use React’s state and event handling hooks like useState and useEffect. Here’s an example of handling form submission and logging the form data to the console:
    // components/LoginPage.tsx
    "use client"
    import React, { useState } from 'react';

    const LoginPage = () => {
      const [formData, setFormData] = useState({
        username: '',
        password: '',
      });

      const handleInputChange = (e: any) => {
        const { name, value } = e.target;
        setFormData((prevData) => ({
          ...prevData,
          [name]: value,
        }));
      };

      const handleSubmit = (e: any) => {
        e.preventDefault();
        console.log(formData); // You can handle API calls or authentication logic here
      };

      return (
        <div className="min-h-screen flex items-center justify-center bg-gray-100">
          <div className="bg-white p-8 shadow-md rounded-lg w-96">
            <h1 className="text-2xl font-semibold mb-4">Login</h1>
            <form onSubmit={handleSubmit}>
              <label htmlFor="username" className="block mb-2">Username:</label>
              <input
                type="text"
                id="username"
                name="username"
                value={formData.username}
                onChange={handleInputChange}
                className="w-full border border-gray-300 rounded-md px-3 py-2 mb-4"
              />
              <label htmlFor="password" className="block mb-2">Password:</label>
              <input
                type="password"
                id="password"
                name="password"
                value={formData.password}
                onChange={handleInputChange}
                className="w-full border border-gray-300 rounded-md px-3 py-2 mb-4"
              />
              <button type="submit" className="w-full bg-blue-500 text-white py-2 rounded-md hover:bg-blue-600">
                Login
              </button>
            </form>
          </div>
        </div>
      );
    };

    export default LoginPage;

  • Include the Login Page in Your Application: You can include the LoginPage component in your application by importing and rendering it in your desired page or layout component. Create a new folder “login” under the “app” folder, then create a new “page.tsx” file under the “login” folder. For example, in your app/login/page.tsx:

    // app/login/page.tsx
    import LoginPage from "@/components/Login";

    const Login = () => {
      return (
        <div>
          <LoginPage />
        </div>
      );
    };

    export default Login;
  • Test Your Login Page: Run your Next.js development server (npm run dev) and navigate to the login page in your browser (http://localhost:3000/login or the appropriate route you’ve defined).

Integrate Optimizely Configured Commerce Login API’s:

To integrate the APIs http://b2b.local.com/identity/connect/token and http://b2b.local.com/api/v1/sessions into your LoginPage component’s handleSubmit method, follow these steps:

  • Import React Hooks: Ensure you have imported the necessary React Hooks at the top of your file:
    "use client"
    import React, { useState } from 'react';
  • Update handleSubmit Method: Modify the handleSubmit method to include API calls to obtain an access token and handle the login request.
  • In the code below:
    • Previously, the form was submitted without any validation or authentication logic; it simply logged the form data to the console.
    • The updated code adds validation checks before submitting the form. It checks whether the username and password are provided and sets an error message accordingly. It then authenticates the user by sending a request to the authentication API (http://b2b.local.com/identity/connect/token).
    • It constructs a params object with the necessary credentials and sends a POST request to the authentication endpoint.
    • It handles different scenarios such as successful authentication, failed token generation, and failed login requests. Depending on the response from the server, it updates the loginError state to display an appropriate error message or proceeds to call the sessions API.
    • It sends a POST request to the login endpoint (http://b2b.local.com/api/v1/sessions) with the loginData in JSON format and includes the access token in the Authorization header.
    • If the login request is successful, it processes the success response, for example displaying an alert for a successful login, or it sets an error if authentication fails.
      Here’s the updated handleSubmit method:
const handleSubmit = async (e: any) => {
  e.preventDefault();
  console.log(formData); // You can handle API calls or authentication logic here
  if (!formData.username && !formData.password) {
    setLoginError('Username and password are required.');
    return;
  }
  if (!formData.username) {
    setLoginError('Username is required.');
    return;
  }
  if (!formData.password) {
    setLoginError('Password is required.');
    return;
  }
  const params = new URLSearchParams();
  params.append('grant_type', 'password');
  params.append('scope', 'iscapi');
  params.append('client_id', 'b2blogin');
  params.append('client_secret', '1cc03abc-2799-5a55-1558-753a882d8981');
  params.append('username', formData.username || '');
  params.append('password', formData.password || '');
  try {
    const tokenGen = await fetch("http://b2b.local.com/identity/connect/token", {
      method: 'post',
      headers: {
        'content-type': 'application/x-www-form-urlencoded',
      },
      body: params,
    });
    if (!tokenGen.ok) {
      const errorDataLogin = await tokenGen.json();
      if (errorDataLogin && errorDataLogin.error_description) {
        setLoginError(errorDataLogin.error_description);
      } else {
        setLoginError('Failed to obtain access token.');
      }
    } else {
      const tokenData = await tokenGen.json();
      if (tokenData != null && tokenData.access_token !== '') {
        setLoginError('');
        const loginData = { userName: formData.username, password: formData.password };
        const loginRes = await fetch("http://b2b.local.com/api/v1/sessions", {
          method: 'post',
          headers: {
            'content-type': 'application/json',
            'authorization': 'Bearer ' + tokenData.access_token,
          },
          body: JSON.stringify(loginData),
        });
        if (!loginRes.ok) {
          localStorage.removeItem('accessToken');
          const errorDataLogin = await loginRes.json();
          if (errorDataLogin && errorDataLogin.message) {
            setLoginError(errorDataLogin.message);
          } else {
            setLoginError('Login request failed.');
          }
        } else {
          const successDataLogin = await loginRes.json();
          if (successDataLogin && successDataLogin.isAuthenticated) {
            alert('Login successful');
          } else {
            setLoginError('Login Failed');
          }
        }
      } else {
        setLoginError('Access token not received.');
      }
    }
  } catch (error) {
    console.error('Error during login:', error);
    setLoginError('An error occurred during login.');
  }
};
  • Add Error Handling: Ensure you handle errors appropriately, displaying error messages to the user if any API calls fail.
  • Result of the Login Page: After entering the username and password and submitting the form, the request fails with a CORS error.

    The CORS errors look like this:

    Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://b2b.local.com/identity/connect/token. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 400.

    Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://b2b.local.com/api/v1/sessions. (Reason: CORS request did not succeed). Status code: (null).

    Solution for the CORS Issue:

    1. Go to Optimizely Admin – http://b2b.local.com/admin
    2. Go to Administration > Settings.
    3. Search for “CORS Origin”.
    4. Enter http://localhost:3000 and click the Save button.
    5. Restart the website/application pool.

    Final Result of Login Page: Successful login.


Conclusion:

In conclusion, this blog detailed the process of creating a user login page using Next.js and integrating it with Optimizely’s Commerce APIs. The steps included setting up an Optimizely Configured Commerce project locally with running APIs, creating a Next.js project in Visual Studio Code, designing the login form component, handling form submission, and integrating Optimizely’s APIs for authentication.

The blog also addressed common issues such as CORS (Cross-Origin Resource Sharing) errors and provided solutions, including configuring CORS settings in Optimizely’s admin panel. After resolving the CORS issue, the final result was a successful login page implementation.

Overall, the blog serves as a comprehensive guide for developers looking to build a secure and functional user login page in a Next.js application integrated with Optimizely’s Commerce APIs. If you run into any issues, please reach out.

How to Convert a CSV File to an Excel File
By Christon Ramesh Jason – May 17, 2024

Converting CSV to Excel streamlines data manipulation and analysis, bridging the simplicity of CSV’s plain text structure with Excel’s powerful spreadsheet functionalities. This conversion ensures a seamless transition from comma-separated records to organized rows and columns, enhancing data accessibility and interpretation. Whether for data analysis, visualization, or collaboration, Excel’s versatile format accommodates diverse needs, offering features like formulas, charts, and conditional formatting.

  • Leading zeros in the CSV data remain unchanged when the file is converted to an Excel file.
  • Multiple CSV files are converted to Excel files automatically, using code that loops over the source folder.

The following VBA code converts multiple CSV files to Excel files.

VBA CODE:

Sub Csv_to_Excel()
'
' Csv_to_Excel
Dim Pathname As String
Dim Filename As String
Dim WOextn As String
Dim Nam As String

Pathname = "<Specify the Source Path>"
Filename = Dir(Pathname)

Do While Filename <> ""
    WOextn = Left(Filename, InStr(1, Filename, ".") - 1)
    Nam = Pathname & "\" & Filename
    Debug.Print Nam
    Workbooks.Add
    ActiveWorkbook.Queries.Add Name:=WOextn, Formula:= _
        "let" & Chr(13) & "" & Chr(10) & "    Source = Csv.Document(File.Contents(" & Chr(34) & Nam & Chr(34) & "),[Delimiter="","", Columns=25, Encoding=1252, QuoteStyle=QuoteStyle.None])," & Chr(13) & "" & Chr(10) & "    #""Promoted Headers"" = Table.PromoteHeaders(Source, [PromoteAllScalars=true])" & Chr(13) & "" & Chr(10) & "in" & Chr(13) & "" & Chr(10) & "    #""Promoted Headers"""
    ActiveWorkbook.Worksheets.Add
    With ActiveSheet.ListObjects.Add(SourceType:=0, Source:= _
        "OLEDB;Provider=Microsoft.Mashup.OleDb.1;Data Source=$Workbook$;Location=" & WOextn & ";Extended Properties=""""" _
        , Destination:=Range("$A$1")).QueryTable
        .CommandType = xlCmdSql
        .CommandText = Array("SELECT * FROM [" & WOextn & "]")
        .RowNumbers = False
        .FillAdjacentFormulas = False
        .PreserveFormatting = True
        .RefreshOnFileOpen = False
        .BackgroundQuery = True
        .RefreshStyle = xlInsertDeleteCells
        .SavePassword = False
        .SaveData = True
        .AdjustColumnWidth = True
        .RefreshPeriod = 0
        .PreserveColumnInfo = True
        Debug.Print WOextn
        .ListObject.DisplayName = WOextn
        .Refresh BackgroundQuery:=False
    End With
    Application.CommandBars("Queries and Connections").Visible = False
    Range("C8").Select
    ActiveSheet.Name = WOextn
    ActiveWorkbook.SaveAs Filename:="<Specify the target path>" & WOextn & ".xlsx"
    ActiveWorkbook.Close
    Filename = Dir()
Loop
End Sub

Step By Step Procedure to Run the VBA code:

Step 1: Open a new Excel sheet.


Step 2: Go to the Developer tab ribbon option.


Step 3: Select the Visual Basic option in the Developer Tab.


Step 4: Selecting the Visual Basic option opens a new window.


Step 5: On the Project tab, right-click the VBA project. Click Module after choosing the Insert option.


Step 6: The module option will show in the Project tab under the VBA Project, and the right-side code space will open.


Step 7: Paste the VBA code in the code space.


Step 8: Select Run or press F5 to run the code from here manually.


Once the VBA code has been executed, all of the CSV files in the designated source folder are converted into Excel files and saved to the target folder.

 

 

 

ELT IS DEAD. LONG LIVE ZERO COPY.
By Eric Walk (http://www.ericwalk.com) – April 29, 2024

Imagine a world where we can skip Extract and Load and just do our data Transformations, connecting directly to sources no matter what data platform you use.

Salesforce has taken significant steps over the last two years with Data Cloud to streamline how you get data in and out of their platform, and we’re excited to see other vendors follow their lead. They’ve gone to the next level today by announcing their more comprehensive Zero Copy Partner Network.

Using industry standards, like Apache Iceberg, as the base layer means it’s easy for ALL data ecosystems to interoperate with Salesforce. We can finally make progress in achieving the dream of every master data manager: a world where the golden record can be constructed from the actual source of truth directly, without needing to rely on copies.

This is also a massive step forward for our clients as they mature into real DataOps and continue beyond to full site reliability engineering operational patterns for their data estates. Fewer copies of data mean increased pipeline reliability, data trustability, and data velocity.

This new model is especially important for our clients who choose a heterogeneous ecosystem combining tools from many partners (maybe using Adobe for DXP and marketing automation, and Salesforce for sales and service); they struggle to build consistent predictive models that can power them all, and their customers end up getting different personalization from different channels. When we can bring all the data together in the Lakehouse faster and more simply, it becomes possible to build one model that can be consumed by all platforms. This efficiency is critical to the practicality of adopting AI at scale.

Perficient is unique in our depth and history with Data + Intelligence, and our diversity of partners. Salesforce’s “better together” approach is aligned precisely with our normal way of working. If you use Snowflake, RedShift, Synapse, Databricks, or Big Query, we have the right experience to help you make better decisions faster with Salesforce Data Cloud.

Azure SQL Server Performance Check Automation
By Rajesh Ranga Rao – April 11, 2024

On operational projects that involve heavy daily data processing, there is a need to monitor database performance. Over time, the workload grows, causing potential issues. While there are best practices to handle the processing by adopting DBA strategies (indexing, partitioning, collecting statistics, reorganizing tables/indexes, purging data, allocating bandwidth separately for ETL/DWH users, peak-time optimization, effective query rewrites, etc.), it is necessary to be aware of DB performance and monitor it consistently so further action can be taken. 

If admin access is not available to validate performance in the Azure portal, building automations can help monitor resource usage and prompt the necessary steps before the DB runs into performance issues or failures. 

For DB performance monitoring, an Informatica IICS job can be created with a Data Task that runs a query against SQL Server (Azure SQL) metadata tables to check performance, and emails can be triggered once available capacity drops below the threshold percentage (e.g., 20%, i.e., utilization above 80%). 

The IICS mapping design is shown below (scheduled to run hourly). Email alerts contain the metric percentage values. 

                        Iics Mapping Design Sql Server Performance Check Automation 1

Note: Email alerts are triggered only if the threshold limit is exceeded. 

                                             

IICS ETL Design : 

                                                     

                     Iics Etl Design Sql Server Performance Check Automation 1

IICS ETL Code Details : 

 

  1. A Data Task is used to get the SQL Server performance metrics (CPU and IO percentages).

                                          Sql Server Performance Check Query1a

The query checks whether utilization exceeds 80%. If utilization exceeds the threshold limit (the user can set this to a specific value, such as 80%), an email alert is sent. 
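The exact query from the screenshots is not reproduced here, but the same kind of check can also be run outside of IICS. Below is a minimal C# sketch (not part of the original automation) that reads recent CPU and data IO percentages from Azure SQL's sys.dm_db_resource_stats view; the connection string and threshold are placeholders:

// Minimal sketch: flag when recent CPU or data IO utilization exceeds a threshold.
// Requires the Microsoft.Data.SqlClient package; the connection string is a placeholder.
using Microsoft.Data.SqlClient;

const double threshold = 80.0;
var connectionString = "<your Azure SQL connection string>";

var sql = @"
    SELECT MAX(avg_cpu_percent)     AS MaxCpuPercent,
           MAX(avg_data_io_percent) AS MaxDataIoPercent
    FROM sys.dm_db_resource_stats
    WHERE end_time > DATEADD(MINUTE, -60, SYSUTCDATETIME());";

using var connection = new SqlConnection(connectionString);
connection.Open();
using var command = new SqlCommand(sql, connection);
using var reader = command.ExecuteReader();
if (reader.Read())
{
    var cpu = reader.IsDBNull(0) ? 0.0 : Convert.ToDouble(reader.GetValue(0));
    var io = reader.IsDBNull(1) ? 0.0 : Convert.ToDouble(reader.GetValue(1));
    if (cpu > threshold || io > threshold)
    {
        Console.WriteLine($"ALERT: CPU {cpu}% / IO {io}% exceeded {threshold}% in the last hour.");
    }
}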

                                                            

                                         Sql Server Performance Check Query2

If Azure_SQL_Server_Performance_Info.dat has data (the file is populated when CPU/IO utilization exceeds 80%), the Decision task is activated and the email alert is triggered. 

                                          Sql Server Performance Result Output 1                                          

Email Alert :  

                                            Sql Server Performance Email Alert

Create and Retrieve Secrets from the Azure Key Vault Using an ASP.NET Core Application
By Parag Balapure – March 27, 2024

As everyone knows, maintaining application-level security for passwords, certificates, API keys, and other sensitive data is critical. In my project, it was necessary to safeguard the SMTP password.

I wanted a way to protect my SMTP password, discovered Azure Key Vault, and began putting it into practice. Azure Key Vault is a cloud service for storing and accessing secrets, which can be anything sensitive, like passwords, connection strings, etc.

Key Vault Details can be found at the following link: What is Azure Key Vault? | Microsoft Learn

A few steps must be followed to create an Azure Key Vault, grant access permissions to a registered application within Azure, and retrieve the secret from an ASP.NET Core application.

First and foremost, an Azure subscription is required. A free trial or a paid subscription works, as needed for practice purposes.

Create Azure Key Vault Secrets and Access Them Using Asp.net Core Web API

  1. Access Azure services by visiting portal.azure.com.
  2. Choose the Key Vaults service from the list.
  3. Provide the access settings for the Azure Key Vault policy and create the key vault by filling out the necessary fields. Either choose an existing resource group or create a new one.
  4. Within the SMTP-Cred key vault, create secrets. Numerous secrets can be created within a single key vault for various purposes.
  5. Return to Azure services and choose “App Registration.”
  6. Register a new app.
  7. Provide the application’s name.
  8. Copy the tenantId, clientId, and secret value. These are required for access to Key Vault secrets and will be used in the ASP.NET Core application.
  9. Return to the key vault you created in step 3 and establish an access policy there. The procedure is the same regardless of the key vault name.
  10. Search for and choose the Secret Management template.
  11. Find your previously registered app by searching for it, then select Create.
  12. Now create an ASP.NET Core Web API application using the template and install the Azure.Identity and Azure.Security.KeyVault.Secrets packages from Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution. The snippet below shows how we can obtain the Key Vault secrets.

 

We can achieve this using the Key Vault access policy with the registered app’s credentials; alternatively, the default credential approach (DefaultAzureCredential) is available for configuration.
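As a reference, here is a minimal sketch of reading a secret with the two packages mentioned above (Azure.Identity and Azure.Security.KeyVault.Secrets). The vault URI, secret name, and credential values are placeholders; in a real application they would come from configuration rather than being hard-coded:

// Minimal sketch: fetch a secret (for example, the SMTP password) from Azure Key Vault.
// The vault URI, secret name, tenant/client IDs, and client secret are placeholders.
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var vaultUri = new Uri("https://smtp-cred.vault.azure.net/");

// Option 1: authenticate with the registered app's tenantId/clientId/clientSecret.
var credential = new ClientSecretCredential("<tenantId>", "<clientId>", "<clientSecretValue>");
// Option 2: var credential = new DefaultAzureCredential(); // environment/managed identity

var client = new SecretClient(vaultUri, credential);
KeyVaultSecret secret = client.GetSecret("SmtpPassword");

Console.WriteLine($"Retrieved secret '{secret.Name}' (value length: {secret.Value.Length}).");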

 

All Aboard! Visualize Business Impact with the Enterprise Cloud Transit Map
By Steve Holstad – March 22, 2024

Cloud modernization is the primary driver of digital transformation and impactful business value. Cloud platforms have evolved from core technology to disruptive ecosystems of strategic advantage.  Migration and modernization are vital to reach new markets, deliver innovative products, improve resiliency, reduce costs, and improve customer experiences.  But it’s easy (and common) to lose sight of your business mission as you navigate complicated technological challenges.

Sometimes, visualization can help.  We’ve created our Enterprise Cloud Transit Map as a simple blueprint for navigating complexities while staying focused on the win themes that really matter: creating competitive advantages, adding customer value, building a strong operational core, and growing your business:

Perficient Cloud Transit Map Large

The Tracks of Transformation: I Think I Can

Our map consists of five distinct tracks, each with its own set of ‘stations’ or focus areas, designed to guide your voyage towards the real value prop outcomes of cloud modernization:

  • Competitive Advantage
  • Business Growth
  • Innovation & Acceleration
  • Profitability
  • Operational Maturity
  • Foundational Core
  • Customer Value

Strategy & Architecture: This is where your journey begins. Aligning organizational goals with industry insights and crafting fit-for-purpose architecture lays the foundation. Business Alignment, Product Strategy, Scalability, and Interoperability are key stops for this line, as well as Centers of Excellence ensuring adoption success and leading to innovation and differentiated products.

Platform Foundation: The next path is getting your infrastructure, platform, connectivity, and governance set up for success across multicloud and hybrid cloud architectures, modernizing legacy IT, solving for resiliency issues, and setting the table for sustainability and cost optimization wins down the line.

Migrate & Modernize: The heart of transformation lies in scalability, cost optimization, and interoperability. This track is a deep dive into streamlining deployments, embracing cloud-native capabilities, and modernizing applications to deliver differentiated products more efficiently.

Data Insight: Data is the engine driving intelligent decision-making. This track emphasizes modernizing your data platform, ensuring regulatory compliance, and unlocking (and trusting) the potential of AI, setting the stage for insightful, data-driven decisions and truly impactful applications of emerging tech.

Cloud Operating Model: The maturity of your journey, focusing on developing team skills, optimizing costs, enabling new business objectives, and establishing a modern operational model. Success here aligns your cloud model to your existing organization while embracing transformative tools, technologies, and processes, with effective resource management and sustainable policies & governance.

On the Path to Business Impact

For IT and business leaders committed to modernizing their organizations, our Enterprise Cloud Transit Map is more than just a navigational guide; it’s a sanity check on delivering real-world business impacts and outcomes.  Understanding the key themes of the map helps you set a course to resiliency, performance, profitable growth, and competitive innovation.

All Aboard!

AI: Legal Aspects of Using It in Consulting Companies
By Edelberto Reyes – March 20, 2024

Introduction 

Nowadays, everyone speaks of AI; it is not a subject for IT people only. People in many other fields, such as vendors, taxi drivers, journalists, scientists, teachers, students, and even politicians, mention it in their speeches. Many of them use AI to generate content, for example “news articles, academic papers, social media posts, photos, and even chatbot chats” (Cambridge University Press, 2023). 

However, many of them do not know how AI really works or where the data comes from, and they are often not aware of the regulations that affect their daily lives, especially around data privacy and intellectual property. 

Based on that, the following topics are going to be covered in this article: 

  • Provide a general view of the legal aspects of AI use 
  • Give an overview of data privacy and intellectual property concerns 

Overview of AI Legal Concerns 

It is common to hear about ChatGPT these days, which “attracted millions of users quickly after its launch” (Cambridge University Press, 2023), and how professional associations have applied it to improve operations and decision-making processes. But some companies are not aware of how its short-term use can impact their business; there are legal issues that should be evaluated immediately. These problematic issues are the following (Tenenbaum, 2023): 

  • Data privacy: Companies must know that AI tools “depend on vast amounts of data to train and improve their algorithms” (Tenenbaum, 2023) and that “they must ensure collected data is used by local, state or international privacy laws and regulations” (Tenenbaum, 2023). 
  • Intellectual property: AI tools can produce new works of authorship, for instance software, artistic works, articles, or white papers (Tenenbaum, 2023). At this point, companies must consider whether they have sufficient rights and licenses to use and release these works. 
  • Discrimination: AI systems can cause discrimination if the training data is biased with respect to race, ethnicity, national origin, gender, and so on. Companies must identify and classify training data to avoid any biases in their algorithms (Tenenbaum, 2023). 
  • Tort liability: If the AI system or tool “produces inaccurate, negligent, or biased results that harm members or other end users”, the company could be held responsible for any resulting damage (Tenenbaum, 2023), and this can be a huge problem. Companies must be aware of the consequences and prevent them with reliable and accurate AI systems. 

Data privacy and intellectual property concerns 

If you, as an employee, use AI for your daily tasks, this implies some responsibilities that we must talk about, because “the impact of AI is therefore truly enormous, and it has given rise to numerous legal and ethical issues that need to be explored, especially from a copyright perspective” (Cambridge University Press, 2023).   

Some software engineers have started using AI tools such as ChatGPT (the most popular) and GitHub Copilot, which are helpful because you can “spend less time creating boilerplate and repetitive code patterns, and more time on what matters: building great software” (GitHub, Inc, 2023). But the project and its code are the property of the client and are considered confidential information, so you must be aware that these copilots need code to be trained on and that Copilot “shares recommendations based on the project’s context and style conventions” (GitHub, Inc, 2023). If you use these AI tools, you should be careful to comply with the contract clauses you signed.  

On the other hand, how do companies and employees know that the training data used by the AI tool is free from any copyright violations (Lucchi, 2023)? Looking only at the individual situation, you can implement functionality in less time than a person without the tools, but the company must “ensure that they do not infringe any third-party copyright, patent, or trademark rights” (Tenenbaum, 2023) when the product is released. 

In conclusion, several legal issues can arise from AI tools. Although AI regulations and laws are still being defined, using AI tools irresponsibly can cause more problems than it solves for your company, and you as an employee could be fired for breaching your contract’s confidentiality clauses. Companies are starting to implement AI tools, but they must define policies to use them responsibly and comply with the law, and employees must check with the legal and IT departments to ensure that the AI tools being used do not violate company compliance rules. 

Finally, I would like to acknowledge the valuable work done by the Microsoft .NET Hub in the Capabilities Development dimension within our company, which has been instrumental in addressing key issues such as the use of AI in product development. We invite everyone to explore more about this fascinating topic and discover practical tips on our “AI Product Powered People” blog. Join us on this journey into the future of AI-powered technology and product development!  

References 
  • Cambridge University Press. (2023, 08 23). Retrieved from Cambridge University Press: https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/chatgpt-a-case-study-on-copyright-challenges-for-generative-artificial-intelligence-systems/CEDCE34DED599CC4EB201289BB161965 
  • European Parliament. (2023, 09 23). News European Parliament. Retrieved from News European Parliament: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence?&at_campaign=20226-Digital&at_medium=Google_Ads&at_platform=Search&at_creation=RSA&at_goal=TR_G&at_advertiser=Webcomm&at_audien 
  • GitHub, Inc. (2023, 09 30). Retrieved from https://github.com/features/copilot: https://github.com/features/copilot 
  • Lucchi, N. (2023). ChatGPT: A Case Study on Copyright Challenges for Generative Artificial Intelligence Systems. Cambridge University Press. 
  • Petit, N. (2017). Law and Regulation of Artificial Intelligence and Robots – Conceptual Framework and Normative Implications. Elsevier Inc, 31. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2931339 
  • Springer Nature. (2023, 05 19). Springer link. Retrieved from https://link.springer.com/article/10.1007/s00146-022-01471-6 
  • Tenenbaum, J. (2023, 04 19). American Bar Association. Retrieved from American Bar Association: https://businesslawtoday.org/2023/04/ai-associations-five-key-legal-issues/ 

 

 

]]>
0
Ankit Jangle <![CDATA[Power Apps Solution Deployment Using Pipeline App: A Step-by-Step Guide]]> https://blogs.perficient.com/?p=352289 2024-03-07T10:50:07Z 2024-03-07T10:50:07Z

Microsoft’s Power Apps is a flexible platform that empowers users to effortlessly construct custom applications. As applications become more complex, the need for a well-organized and efficient deployment process becomes paramount. In this blog post, we’ll delve into the step-by-step process of implementing deployment pipelines in Power Apps, leveraging the Power Apps Deployment Pipeline App—an indispensable tool designed to streamline and optimize application deployment within the framework of Application Lifecycle Management (ALM).

The Power Apps Deployment Pipeline acts as a systematic conduit, ensuring that applications transition smoothly across stages, from development to testing and production. By adhering to ALM principles, the pipeline enables version control, automated testing, collaboration, and efficient environment management. This strategic and controlled workflow enhances your applications’ reliability and promotes collaboration and agility across your development and IT teams, orchestrating a well-coordinated and efficient deployment strategy for Power Apps applications throughout the application lifecycle.

Prerequisites for Deploying the Power Apps Solution

Before we dive into the process, make sure you have the following prerequisites:

  1. Power Apps Deployment Pipeline App: Install the Power Apps Deployment Pipeline App from Microsoft AppSource.
  2. Power Platform Environment: Ensure you can access a Power Platform environment where your apps are developed and deployed.

Steps for Power Apps Deployment 

1: Install Power Apps Deployment Pipeline App

  • Open Microsoft AppSource and locate the Power Apps Deployment Pipeline App.
  • Install the app in your Power Platform environment.

2: Configure the Power Platform Environment

  • Launch the Power Apps Deployment Pipeline App within your Power Platform environment.
  • Configure the app settings, specifying your deployment pipeline’s source and target environments.

3: Create a Deployment Pipeline

  • In the Power Apps Deployment Pipeline App, navigate to the “Pipelines” section.
  • Create a new deployment pipeline, naming it according to your project or environment.

4: Add Deployment Stages

  • Define deployment stages corresponding to your environments (e.g., Development, Test, Production).
  • Configure each stage with the necessary settings for solution import and deployment.

5: Test the Deployment Steps

  • Steps for deploying solutions:
    • Go to the Power Platform environment.
    • Navigate to the “Solutions” area.
    • Locate the solution to be deployed and click on the pipeline icon.
    • Start the deployment process by selecting the appropriate pipeline.
  • Run the deployment steps to ensure each stage and manual deployment task executes as expected.
  • Monitor the deployment logs for any errors or warnings during the manual deployment.

6: Monitor and Optimize

  • Regularly monitor your deployment pipeline for successful and unsuccessful runs.
  • Optimize the pipeline based on feedback and evolving deployment requirements.

Implementing manual deployment pipelines for Power Apps using the Power Apps Deployment Pipeline App streamlines the deployment process and ensures consistency across environments. By following these step-by-step instructions, you’ll establish an efficient pipeline that reduces errors and accelerates the delivery of your Power Apps solutions. Embrace the power of deployment within Power Apps and take your application deployment to the next level. Explore more about pipelines.

]]>
0
Mike Campbell <![CDATA[Ready for Microsoft Copilot for Microsoft 365?]]> https://blogs.perficient.com/?p=358104 2024-03-05T17:43:26Z 2024-03-05T14:22:36Z

Organizations want to leverage the productivity enhancements Microsoft Copilot for Microsoft 365 may enable, but want to avoid unintentional over-exposure of organizational information while users are accessing these Copilot experiences.  Our Microsoft team is fielding many questions from customers about how to secure and govern Microsoft Copilot for Microsoft 365.  These organizations want to ensure maximum productivity benefit while minimizing their risk.   This article will describe the key considerations an organization should address.

Microsoft Copilot for Microsoft 365 context

First, a quick point of clarification. Microsoft has released several instances of Copilot for use in different contexts. At this writing, Copilot instances include Microsoft Copilot (integrated into Bing and the Edge browser), Microsoft Security Copilot, GitHub Copilot, and more. In this article I address Microsoft Copilot for Microsoft 365, an instance of the Copilot technologies integrated with Microsoft 365 tenants and applications via Microsoft Graph. Microsoft Copilot for Microsoft 365 requires add-on licensing on top of other Microsoft 365 licensing.

Microsoft Copilot for Microsoft 365 is also extensible to non-Microsoft 365 sources of data.  Out of the box, “web grounding” is enabled at the tenant level and disabled at the user level (user can enable).  Web grounding allows Copilot to include web-based searches and the resulting information to be included in responses.  Additionally, via Copilot Studio, organizations can customize Microsoft 365 based experiences and can extend Copilot responses to include non-Microsoft 365 sources of information.

Microsoft 365 Security and Governance control the Microsoft Copilot experience

Here is the primary consideration your organization must understand and act upon in order to minimize unintentional over-exposure of your information via Microsoft Copilot for Microsoft 365: by design, Microsoft Copilot for Microsoft 365 accesses and includes the information your users already have access to in your tenant, within the bounds of existing Microsoft commitments. Microsoft Copilot for Microsoft 365 provides an additional interface for exposing this information and does some of the heavy lifting for your users in finding, compiling, and contextualizing it. But, ultimately, it is exposing information that a user could have accessed on their own, given their existing permissions and sufficient skill in using Microsoft 365 tools and applications for searching, querying, or accessing that information. In this article I am not addressing any potential failure of Copilot to follow its published design parameters; monitoring and reporting on usage are advised to address this (unlikely) possibility.

Microsoft Copilot for Microsoft 365 is the latest, and possibly most advanced, tool for surfacing Microsoft 365 data to users. In a sense, Microsoft Copilot is the next iteration of Microsoft Search and Microsoft Delve. Each of these tools has some administrative controls that allow administrators to limit the information returned to a casual user. However, using them in this way is somewhat like patching over a structural problem with a layer of drywall mud. Your primary approach should be securing the underlying access controls and membership of your Microsoft 365 assets: SharePoint Online sites, OneDrive for Business sites, Microsoft 365 Groups and Teams, Exchange Online mailboxes, and other Microsoft 365 assets. Microsoft Purview sensitivity labels and their access controls can also be part of your solution for securing information and restricting access to the appropriate users.

The bottom line here is that there is no Microsoft Copilot for Microsoft 365 quick fix for information over-exposure.  Organizations who find that their existing Microsoft 365 usage and architecture has made information too widely available need to do the heavy lifting of properly adjusting permissions and memberships of the underlying assets, adjusting various Microsoft 365 workload settings and policies, and considering a well-planned crawl/walk/run approach for deployment of Microsoft Purview controls such as Sensitivity Labels (and others) to address additional scenarios.  Your organization should address information access controls at the foundational level first.  Once the foundation is secure then optimize controls around the specific access methods such as Microsoft Copilot.

Key Considerations for Microsoft 365 Readiness for Microsoft Copilot

The key considerations your organization should address when considering a Microsoft Copilot for Microsoft 365 deployment include:

Tier 1 Considerations

  • Microsoft 365 Groups (and Teams)
    • Are you overusing public groups and Teams?  Unless permissions are customized, all of the file content in public groups and teams is available to anyone in the organization.  Anyone in the organization can access this information via direct navigation to the underlying SharePoint Online site, or via Microsoft Search, Microsoft Delve, or Microsoft Copilot for Microsoft 365.  (This does not apply to Private Channels and Shared Channels.)
  • SharePoint Online Sites
    • Are your site permissions too broad?
    • Is the “Everyone” or “Everyone except External Users” group over-used in any sites?
  • SharePoint and OneDrive Sharing Permissions
    • Have you set the restrictions and defaults for sharing links appropriately at the global level?
    • Have you configured per-site sharing link controls appropriately for the site?
  • Web Grounding
  • Microsoft Purview
    • Consider using Container labels (sensitivity labels specifically for applying policy at the Group/Team/Site level) to enforce organizational standards at the Group/Team/Site level.
    • Do you have an Enterprise information taxonomy and classification system?  Have you implemented your taxonomy as Sensitivity Labels? There are legitimate tactical use cases for Sensitivity Labels but most organizations need a strategic “crawl, walk, run” multi-month or year rollout to achieve long-term effectiveness and user adoption.

Tier 2 Considerations

  • Purview Data Lifecycle
    • Have you implemented or are you implementing information lifecycle policies in Microsoft Purview?  To make quality output from Microsoft Copilot for Microsoft 365 more likely you should address information ROT (Redundant, Outdated, or Trivial) in your tenant while also preserving important and relevant records that may contribute to quality output.  Retention and deletion policies and labels will likely be part of the solution to the problem of ROT.  Appropriate deletion actions can also reduce the likelihood of over-exposure of older but still sensitive information.
  • User activity monitoring
    • Are you capturing and reviewing user activity in your tenant?  Specifically for Copilot, are you monitoring the Microsoft Copilot usage reports?  Have you enabled the telemetry capture that provides more detailed usage information?

The list above does not include all considerations for securing your Microsoft 365 tenant. For example, we did not address conditional access or multi-factor authentication scenarios.  However, the above considerations are most directly related to Microsoft Copilot consumption.

Conclusion

Microsoft 365 may add additional Copilot-specific configuration and governance controls in the future. However, the best approach is to ensure that your underlying Microsoft 365 assets are properly permissioned and configured. As new Microsoft 365 features and controls are released, these actions will continue to pay dividends.

Our Perficient Microsoft team has extensive experience helping organizations like yours analyze their current state, identify gaps, and take action to secure and govern their Microsoft tenant. This work directly impacts the Microsoft Copilot for Microsoft 365 experience. Our engagements range from road-mapping to foundational security and governance implementations to extended migration, enablement, and support offerings, and we can customize these engagements to your particular areas of concern. We love partnering with customers to help them achieve the best possible Microsoft 365 service adoption and governance outcomes.

]]>
0
Elijah Weber <![CDATA[Building Re-Usable Pipeline Templates in GitHub Actions Workflows]]> https://blogs.perficient.com/?p=351131 2024-02-26T23:36:59Z 2024-02-26T12:31:24Z

Introduction To Pipeline Templates

In today’s agile software development landscape, teams rely heavily on robust automated workflows called “pipelines” to automate tasks and enhance productivity. For DevOps teams historically familiar with Microsoft’s Azure DevOps CI/CD automation platform, one of the most powerful features the platform rolled out to drastically speed up pipeline development was YAML templates.

Templates in Azure DevOps are reusable configuration files written in YAML, allowing us to enforce best practices and accelerate build and release automation for large groups of teams.  These templates facilitate faster onboarding, ease of maintenance through centralized updates and version control, and enforcement of built-in security measures and compliance standards.

For a lot of my clients that are not building in the Azure ecosystem, however,  there is a popular question – how do we accomplish this same template functionality in other toolchains?  One platform which has been growing rapidly in popularity in the DevOps automation space is GitHub Actions.   GitHub Actions distinguishes itself with seamless integration into the GitHub ecosystem, providing an intuitive CI/CD solution via YAML configurations within repositories. Its strength lies in a user-friendly approach, leveraging a rich marketplace for prebuilt actions and multiple built-in code security features.

In today’s blog, I am going to dive in and show how we can implement the same templating functionality using GitHub Actions. This lets us share common code, best practices, and enforced security across multiple teams, providing a structured, versionable approach to defining pipeline configurations and fostering reusability, consistency, and collaborative development practices for build and release automation across any stack.


Setting Up GitHub

GitHub offers a structured environment for collaboration within a company. The top-level structure in GitHub is an “Enterprise Account.” Inside an account, a company can create multiple Organizations: group constructs used to arrange users and teams so they can collaborate across many projects at once. Organizations offer sophisticated security and administrative features that allow companies to govern their teams and business units. Leveraging this structure, administrators can manage access controls to repositories, including regulating access to Starter Workflows. By configuring repository settings and access permissions, companies can ensure that these predefined workflows are accessible only to members within their Organization.

Typically, enterprises will have a single business unit or department that is responsible for their cloud environments (whether this team actually builds all cloud environments or is simply in charge of broader cloud foundations, security, and governance varies depending on the size and complexity of the company and their infrastructure requirements).   Setting up a single Organization for this business unit makes sense as it allows all the teams in that unit to share code and control the cloud best practices from a single location that other business units or organizations can reference.

To simulate this, I am going to set up a new organization for a “Cloud OPs” department:

Setuporg

 

Once I have my organization set up, I can customize policies, security settings, and other settings to protect my department’s resources. I won’t dive into this here, but below are a couple of good GitHub documentation articles that walk through common settings, roles, and other features your company should configure after creating a new GitHub org.

https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization

https://docs.github.com/en/organizations/managing-user-access-to-your-organizations-repositories

The next step after creating the org is to create a repository for hosting our templates:

Initialize Github Repo

Now that we have a repository, we are ready to start writing our YAML code!

Let’s Compare Azure DevOps Templates to GitHub Actions Starter Workflows:

With the GitHub setup complete, let’s dive in and start converting an Azure DevOps template into a GitHub Starter Workflow so that we can store it in our repo.
As a technical director on a cloud consulting team, one of my most common use cases for pipeline templates is sharing common Terraform automation.
By using Azure DevOps pipeline templates to bundle common, repeatable Terraform automation steps, I am able to provide a standardized, efficient, and flexible approach for creating reusable infrastructure deployment pipelines across all the teams within my company.

Below is an example of a couple of common Terraform templates my team uses:

Starter Ado Templates

Any team automating infrastructure with Terraform is going to write automation to run these three processes.  So it is logical to write them once in a template fashion so that future teams can extend their pipelines from these base templates to expedite their development process.  Here are some very basic versions of what these templates can look like written for Azure DevOps pipelines:

###VALIDATE TEMPLATE###

parameters:
  - name: terraformPath
    type: string

stages:
  - stage: validate
    displayName: "Terraform validate"
    jobs:
      - job: validate
        displayName: "Terraform validate"

        variables:
          - name: terraformWorkingDirectory
            value: $(System.DefaultWorkingDirectory)/${{ parameters.terraformPath }}

        steps:
          - checkout: self

          - task: PowerShell@2
            displayName: "Terraform init"
            inputs:
              targetType: 'inline'
              pwsh: true
              workingDirectory: $(terraformWorkingDirectory)
              script: |
                terraform init -backend=false

          - task: PowerShell@2
            displayName: "Terraform fmt"
            inputs:
              targetType: 'inline'
              pwsh: true
              workingDirectory: $(terraformWorkingDirectory)
              script: |
                terraform fmt -check -write=false -recursive

          - task: PowerShell@2
            displayName: "Terraform validate"
            inputs:
              targetType: 'inline'
              pwsh: true
              workingDirectory: $(terraformWorkingDirectory)
              script: |
                terraform validate       
###PLAN TEMPLATE###

parameters:
  - name: condition
    type: string
  - name: dependsOnStage
    type: string
  - name: environment
    type: string
  - name: serviceConnection
    type: string
  - name: terraformPath
    type: string
  - name: terraformVarFile
    type: string

stages:
  - stage: plan
    displayName: "Terraform plan: ${{ parameters.environment }}"
    condition: and(succeeded(), ${{ parameters.condition }})
    dependsOn: ${{ parameters.dependsOnStage }}
    jobs:
      - job: plan
        displayName: "Terraform plan"

        steps:

          - checkout: self

          - task: AzureCLI@2
            displayName: "Terraform init"
            inputs:
              scriptType: bash
              scriptLocation: inlineScript
              azureSubscription: "${{ parameters.serviceConnection }}"
              addSpnToEnvironment: true
              workingDirectory: ${{ parameters.terraformPath }}
              inlineScript: |
                export ARM_CLIENT_ID=$servicePrincipalId
                export ARM_CLIENT_SECRET=$servicePrincipalKey                
                export ARM_SUBSCRIPTION_ID=$DEPLOYMENT_SUBSCRIPTION_ID
                export ARM_TENANT_ID=$tenantId

                terraform init \
                  -backend-config="subscription_id=$TFSTATE_SUBSCRIPTION_ID" \
                  -backend-config="resource_group_name=$TFSTATE_RESOURCE_GROUP_NAME" \
                  -backend-config="storage_account_name=$TFSTATE_STORAGE_ACCOUNT_NAME" \
                  -backend-config="container_name=$TFSTATE_CONTAINER_NAME" \
                  -backend-config="key=$TFSTATE_KEY"                  
            env:
              DEPLOYMENT_SUBSCRIPTION_ID: $(${{ parameters.environment }}DeploymentSubscriptionID)
              TFSTATE_CONTAINER_NAME: $(TfstateContainerName)
              TFSTATE_KEY: $(TfstateKey)
              TFSTATE_RESOURCE_GROUP_NAME: $(TfstateResourceGroupName)
              TFSTATE_STORAGE_ACCOUNT_NAME: $(TfstateStorageAccountName)
              TFSTATE_SUBSCRIPTION_ID: $(TfstateSubscriptionID)

          - task: AzureCLI@2
            displayName: "Terraform plan"
            inputs:
              scriptType: bash
              scriptLocation: inlineScript
              azureSubscription: "${{ parameters.serviceConnection }}"
              addSpnToEnvironment: true
              workingDirectory: ${{ parameters.terraformPath }}
              inlineScript: |
                export ARM_CLIENT_ID=$servicePrincipalId
                export ARM_CLIENT_SECRET=$servicePrincipalKey                
                export ARM_SUBSCRIPTION_ID=$DEPLOYMENT_SUBSCRIPTION_ID
                export ARM_TENANT_ID=$tenantId

                terraform workspace select $(${{ parameters.environment }}TfWorkspaceName)

                terraform plan -var-file '${{ parameters.terraformVarFile }}'
            env:
              DEPLOYMENT_SUBSCRIPTION_ID: $(${{ parameters.environment }}DeploymentSubscriptionID)
###APPLY TEMPLATE###


parameters:
  - name: condition
    type: string
  - name: dependsOnStage
    type: string
  - name: environment
    type: string
  - name: serviceConnection
    type: string
  - name: terraformPath
    type: string
  - name: terraformVarFile
    type: string

stages:
  - stage: apply
    displayName: "Terraform apply: ${{ parameters.environment }}"
    condition: and(succeeded(), ${{ parameters.condition }})
    dependsOn: ${{ parameters.dependsOnStage }}
    jobs:
      - deployment: apply
        displayName: "Terraform apply"
        environment: "fusion-terraform-${{ parameters.environment }}"
        strategy:
          runOnce:
            deploy:
              steps:
      
                - checkout: self
      
                - task: AzureCLI@2
                  displayName: "Terraform init"
                  inputs:
                    scriptType: bash
                    scriptLocation: inlineScript
                    azureSubscription: "${{ parameters.serviceConnection }}"
                    addSpnToEnvironment: true
                    workingDirectory: ${{ parameters.terraformPath }}
                    inlineScript: |
                      export ARM_CLIENT_ID=$servicePrincipalId
                      export ARM_CLIENT_SECRET=$servicePrincipalKey                
                      export ARM_SUBSCRIPTION_ID=$DEPLOYMENT_SUBSCRIPTION_ID
                      export ARM_TENANT_ID=$tenantId

                      terraform init \
                        -backend-config="subscription_id=$TFSTATE_SUBSCRIPTION_ID" \
                        -backend-config="resource_group_name=$TFSTATE_RESOURCE_GROUP_NAME" \
                        -backend-config="storage_account_name=$TFSTATE_STORAGE_ACCOUNT_NAME" \
                        -backend-config="container_name=$TFSTATE_CONTAINER_NAME" \
                        -backend-config="key=$TFSTATE_KEY"                  
                  env:
                    DEPLOYMENT_SUBSCRIPTION_ID: $(${{ parameters.environment }}DeploymentSubscriptionID)
                    TFSTATE_CONTAINER_NAME: $(TfstateContainerName)
                    TFSTATE_KEY: $(TfstateKey)
                    TFSTATE_RESOURCE_GROUP_NAME: $(TfstateResourceGroupName)
                    TFSTATE_STORAGE_ACCOUNT_NAME: $(TfstateStorageAccountName)
                    TFSTATE_SUBSCRIPTION_ID: $(TfstateSubscriptionID)
      
                - task: AzureCLI@2
                  displayName: "Terraform apply"
                  inputs:
                    scriptType: pscore
                    scriptLocation: inlineScript
                    azureSubscription: "${{ parameters.serviceConnection }}"
                    addSpnToEnvironment: true
                    workingDirectory: ${{ parameters.terraformPath }}
                    inlineScript: |
                      $env:ARM_CLIENT_ID=$env:servicePrincipalId
                      $env:ARM_CLIENT_SECRET=$env:servicePrincipalKey                
                      $env:ARM_SUBSCRIPTION_ID=$env:DEPLOYMENT_SUBSCRIPTION_ID
                      $env:ARM_TENANT_ID=$env:tenantId
      
                      terraform workspace select $(${{ parameters.environment }}TfWorkspaceName)
                    
                      terraform apply -var-file '${{ parameters.terraformVarFile }}' -auto-approve
                  env:
                    DEPLOYMENT_SUBSCRIPTION_ID: $(${{ parameters.environment }}DeploymentSubscriptionID)

The three stages of Terraform automation are bundled into sets of repeatable steps. Then, when any other team wants to use these templates in their own pipelines, they can simply reference them with a resources block in their code, as in the sketch below:

Yaml Templates Example
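
To make this concrete, here is a minimal, hypothetical sketch of what a consuming pipeline could look like. The repository alias, project and repository names, file names, and parameter values below are placeholders and would need to match your actual template repository:

# Hypothetical consumer pipeline in another team's repository
trigger:
  - main

# Declare the shared template repository as a resource so its files can be referenced
resources:
  repositories:
    - repository: templates                # local alias used in the template references below
      type: git
      name: CloudOps/pipeline-templates    # <Azure DevOps project>/<repository> - placeholder
      ref: refs/heads/main

stages:
  # Pull in the shared stage templates and supply the parameters they expect
  - template: terraform-validate.yml@templates
    parameters:
      terraformPath: infrastructure

  - template: terraform-plan.yml@templates
    parameters:
      condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
      dependsOnStage: validate
      environment: dev
      serviceConnection: my-azure-service-connection
      terraformPath: infrastructure
      terraformVarFile: environments/dev.tfvars

Because these templates define whole stages, they are referenced under stages: using the file@alias syntax; job- or step-level templates are referenced under jobs: or steps: in the same way. The plan and apply templates also expect the tfstate variables shown earlier (for example, from a Library variable group linked to the consuming pipeline).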

 

How do we do this same concept in GitHub Actions?

GitHub Actions offers “Starter Workflows” as its version of the “Pipeline Templates” offered by Azure DevOps. Starter Workflows are preconfigured templates designed to expedite the setup of automated workflows within repositories, providing a foundation of default configurations and predefined actions to streamline the creation of CI/CD pipelines in GitHub Actions.

The good news is that Azure DevOps YAML and GitHub Actions YAML use very similar syntax, so converting an Azure DevOps pipeline template to a Starter Workflow is not a major effort. There are some features that are not yet supported in GitHub Actions, so I have to take that into account as I start my conversion. See this document for a breakdown of the feature differences between the two platforms:

https://learn.microsoft.com/en-us/dotnet/architecture/devops-for-aspnet-developers/actions-vs-pipelines

To demonstrate the concept,  I am going to convert the “Validate” template that I showed above into a GitHub Actions Starter Workflow and we can compare it to the original Azure DevOps version:

Comparison

Here is a list of what we had to change:

  1. Inputs:  What Azure DevOps refers to as “Parameters”,  GitHub Actions calls “Inputs”. Inputs allow us to pass in settings, variables, or feature flags into the template so that we can control the behavior.
  2. Stages: Azure DevOps offers a feature called “Stages,” which are logical boundaries in a pipeline. We use stages to mark separation of concerns (for example, Build, QA, and Production). Each stage acts as a parent for one or more jobs. GitHub Actions, however, does not support stages, so we have to remove the stage syntax and use declarative logic and clear comments in our scripts to logically divide the workflow.
  3. Variables: Azure DevOps lets you define string key-value pairs that you can use later in your pipeline. As can be seen in our templates above, we use variables to define the file path where our main.tf file is located in our Terraform code repository. Since that information is likely to be needed by multiple steps in the automation, it makes sense to define the value once rather than hard-coding it everywhere it is used. GitHub Actions uses a different syntax for defining variables: https://docs.github.com/en/actions/learn-github-actions/variables#defining-environment-variables-for-a-single-workflow.
    However, the concept is very similar in execution to how Azure DevOps defines its variables. As can be seen in the comparison above, we specify that we are creating environment variables and then pass in the key-value pair to create.
  4. Shell Type: Azure DevOps has multiple pre-built task types. These are analogous to GitHub’s pre-built actions. But in Azure DevOps, the shell type is generally determined by the type of task you select. In GitHub Actions, however, we need to specify the type of shell we should target for this workflow. There are various shell types to choose from:
    | Platforms | Shell | Description |
    | --- | --- | --- |
    | All (Windows + Linux) | python | Executes the python command |
    | All (Windows + Linux) | pwsh | Default shell used on Windows, must be specified on other runner environment types |
    | All (Windows + Linux) | bash | Default shell used on non-Windows platforms, must be specified on other runner environment types |
    | Linux / macOS | sh | The fallback behavior for non-Windows platforms if no shell is provided and bash is not found in the path |
    | Windows | cmd | GitHub appends the extension .cmd to your script |
    | Windows | PowerShell | The PowerShell Desktop. GitHub appends the extension .ps1 to your script name. |

    When we pass in the shell parameter, we are telling the runner that will execute our YAML scripts which command-line tool to use, so that we don’t run into any unexpected behavior.

  5. Secrets: This last one is less obvious than the others because secrets in a pipeline are not actually included in the YAML code itself (putting secrets in your source code is an anti-pattern you should always avoid in order to prevent credential leakage). In Azure DevOps, there are a couple of options for providing secrets or credentials to your pipeline, including Service Connections and Library Groups.
    We add the secrets into the library group and then choose the option to set the variable as a Secret:

Secrets

This encrypts the variable so that its value will never be visible again. These Library Groups can then be added to your pipeline as environment variables using the ADO variable syntax:

Variable Reference

In GitHub, the portal offers a similar secrets option: https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#about-secrets. Go to the Settings menu for the repo where your top-level pipeline will live (note: this is usually the “consumer” of the pipeline templates, not the repo where the templates live). From there, you should see a sub-menu where secrets can be set!

Github Secrets

There are several types of secrets to choose from. Typically, environment secrets are the easiest to work with because they are exposed to your workflow run through the secrets context and can be mapped into environment variables that you reference from your YAML code like so:

steps:
  - shell: bash
    env:
      SUPER_SECRET: ${{ secrets.SuperSecret }}
    run: |
      example-command "$SUPER_SECRET"
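
Looking ahead to shared templates that need credentials (like the plan and apply examples above), the GitHub Actions version would typically declare the secrets it expects under workflow_call so that callers can pass them in explicitly or forward them with secrets: inherit. The snippet below is a minimal, hypothetical sketch of that pattern; the secret name and steps are illustrative, not part of the templates above:

# Hypothetical excerpt from a shared workflow that needs a credential
on:
  workflow_call:
    inputs:
      terraformPath:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_SECRET:   # callers must supply this, or forward it via "secrets: inherit"
        required: true

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: "Terraform init"
        shell: bash
        working-directory: ${{ inputs.terraformPath }}
        env:
          ARM_CLIENT_SECRET: ${{ secrets.AZURE_CLIENT_SECRET }}  # mapped into the step just like the example above
        run: |
          terraform init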

 

With secrets handled, our conversion is largely complete! Let’s take a look at the final result below.

name: terraform-validate-base-template

# The GitHub Actions version of parameters are called "inputs".
# Declaring them under "workflow_call" makes this template a reusable workflow
# that other pipelines can call.
on:
  workflow_call:
    inputs:
      terraformPath: # This is the name of the input parameter - other pipelines planning to
                     # use this template must pass in a value for this parameter as an input
        description: "The path to the terraform directory where our first Terraform code file (usually main.tf) is stored."
        required: true
        type: string # workflow_call inputs support the following types:
                     # boolean, number, and string

jobs:
  Terraform_Validate: # This is the name of the job - this will show up in the UI of GitHub
                      # when we instantiate a workflow using this template
    runs-on: ubuntu-latest # We have to specify the runtime for the actions workflow to ensure
                           # that the scripts we plan to use below will work as expected.

    # GitHub Actions uses "Environment Variables" instead of "Variables" like Azure DevOps.
    # The syntax is slightly different but the concept is the same...
    env:
      terraformWorkingDirectory: ${{ github.workspace }}/${{ inputs.terraformPath }} # In this
      # line we are combining a default context value with one of our input parameters.
      # The list of GitHub defined default ENV variables available is located in this doc:
      # https://docs.github.com/en/github-ae@latest/actions/learn-github-actions/variables#default-environment-variables

    steps:
      - uses: actions/checkout@v4 # This is a pre-built action provided by GitHub.
                                  # Source is located here: https://github.com/actions/checkout

      - name: "Terraform init"
        shell: pwsh
        working-directory: ${{ env.terraformWorkingDirectory }}
        run: |
          terraform init -backend=false

      - name: "Terraform fmt"
        shell: pwsh
        working-directory: ${{ env.terraformWorkingDirectory }}
        run: |
          terraform fmt -check -write=false -recursive

      - name: "Terraform validate"
        shell: pwsh
        working-directory: ${{ env.terraformWorkingDirectory }}
        run: |
          terraform validate

 

 

How do we use the starter workflows in other pipelines?

The next step is to start sharing these starter workflows with other teams and pipelines.   In my next blog,  I will show how we can accomplish sharing the templates for different organizations to consume!
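
As a quick preview, once the template above is published as a reusable workflow, a caller in another repository can reference it with a job-level uses: line. The organization, repository, path, and branch names below are placeholders:

# Hypothetical consumer workflow living in another team's repository
name: terraform-ci

on:
  pull_request:
    branches: [main]

jobs:
  validate:
    # <org>/<repo>/<path-to-workflow-file>@<ref> - all placeholder values
    uses: cloud-ops-department/pipeline-templates/.github/workflows/terraform-validate-base-template.yml@main
    with:
      terraformPath: infrastructure

Depending on how the template repository is configured, its Actions access settings may also need to allow workflows in other repositories or organizations to reference it.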

Stay tuned and keep-on “Dev-Ops-ing”!

]]>
0
Bhuvaneswari Kuduva Premkumar <![CDATA[Generative AI Revolution: A Comparative Analysis]]> https://blogs.perficient.com/?p=353976 2024-01-19T16:26:12Z 2024-01-19T16:21:50Z

In the world of Generative Artificial Intelligence (AI), a new era of large language models has emerged with remarkable capabilities. ChatGPT, Gemini, Bard, and Copilot have changed the way we interact with mobile and web technologies. We will perform a comparative analysis to highlight the capabilities of each tool.

 

| | ChatGPT | Gemini | Bard | Copilot |
| --- | --- | --- | --- | --- |
| Training Data | Web | Web | Web | Web |
| Accuracy | 85% | 85% | 70% | 80% |
| Recall | 85% | 95% | 75% | 82% |
| Precision | 89% | 90% | 75% | 90% |
| F1 Score | 91% | 92% | 75% | 84% |
| Multilingual | Yes | Yes | Yes | Yes |
| Inputs | GPT-3.5: Text Only; GPT-4.0: Text and Images | Text, Images and Google Drive | Text and Images | Text and Images |
| Real Time Data | GPT-3.5: No; GPT-4.0: Yes | Yes | Yes | Yes |
| Mobile SDK | https://github.com/skydoves/chatgpt-android | API Only | https://www.gemini.com/mobile | API Only |
| Cost | GPT-3.5 / GPT-4.0 | Gemini Pro / Gemini Pro Vision | Undisclosed | Undisclosed |

Calculation Metrics:

TP – True Positive

FP – False Positive

TN – True Negative

FN – False Negative

Accuracy = (TP +TN) / (TP + FP + TN + FN)

Recall = TP / (TP + FN)

Precision = TP / (TP + FP)

F1 Score = 2 * (Precision * Recall) / (Precision + Recall)

 

Our sample data set consists of 100 queries against Gemini AI. Applying the formulas above yields the following scores:

Accuracy: (85 + 0) /100 = 85%

Recall: 85/ (85 + 5) = 94.44%

Precision: 85/ (85 + 10) = 89.47%

F1-Score: 2 * (0.894 * 0.944) / (0.894 + 0.944) = 91.8%

 

Recommended AI Tool:

I recommend Gemini based on its accuracy and consistency. Its ease of integration with the iOS and Android platforms and its performance stand out among its competitors. We will illustrate how easy it is to integrate Gemini in 10 easy steps.

Let’s Integrate Gemini into an Android Application!

  1. Download the Android Studio preview release Canary build (Jellyfish | 2023.3.1).
  2. Create a new project: File -> New -> New Project
  3. Select the Phone and Tablet
    1. Under New Project -> Generate API Starter
    2. Click Next to Proceed
  4. Fill all the necessary details
    1. Enter the Project Name: My Application (or whatever you want to name your project)
    2. Enter the Package Name: (com.companyname.myapplication).
    3. Select the Location to save the project
    4. Select the Minimum SDK version as API 26 (“Oreo”; Android 8.0)
    5. Select the Build Configuration Language as Kotlin DSL (build.gradle.kts)
    6. Click Finish to proceed 
  5. Create a starter app using the Gemini API
  6. To generate the API key, go to Google AI Studio.
  7. Click Get API Key -> click Create API Key in New Project or Create API Key in Existing Project in Google AI Studio.
  8. Copy the API key from the prompt and paste it into Android Studio.
  9. Click Finish to proceed.
  10. Click the Run option in the Android Studio.

And you’re up and running with Generative AI in your Android app!

I typed in “Write a hello world code in java” and Gemini responded with a code snippet. You can try out various queries to personalize your newly integrated Generative AI application to your needs.

Screenshot 2024 01 17 At 10.05.06 Pm

Alternatively, you can download the sample app from GitHub and add the API key to local.properties to run the app.

It’s essential to recognize the remarkable capabilities of Generative AI tools on the market. Comparison of various AI metrics and architecture can give insight into performance, limitations and suitability for desired tasks. As the AI landscape continues to grow and evolve, we can anticipate even more groundbreaking innovations from AI tools. These innovations will disrupt and transform industries even further as time goes on.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

]]>
0