There are a couple of headers to configure before loading up the URI and firing off the request. The first is the Authorization header, which uses a simple Bearer token scheme with your project API key as the token. The second is a new header indicating that we’re connecting to v2 of the Assistants Beta API.
_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
_client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
response = _client.PostAsync("https://api.openai.com/v1/threads", null).Result;
So far, so good. Nothing most developers haven’t done dozens of times over. The tricky part comes with the response. Unfortunately, each endpoint returns a different JSON model. I’ve created a series of models in my project to deserialize each response into a POCO which, at this point, I feel was overkill. I could have done this via JObjects and saved myself a few dozen lines of code.
var threadIdResponse = response.Content.ReadAsStringAsync().Result;
if (!response.IsSuccessStatusCode)
{
    var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
    throw new AiClientException(errorResponse?.Error.Message);
}
var threadIdObj = JsonConvert.DeserializeObject<ThreadResponse>(threadIdResponse);
_threadId = threadIdObj?.Id ?? string.Empty;
return _threadId;
Here we’ve got the response, and it’s time to check and parse what we got back. In my error trap, I throw an exception called AiClientException. This is a new exception I created in the project that simply wraps Exception for better delineation on the client. If we’ve got a successful response, we deserialize it into a ThreadResponse object:
public class ThreadResponse
{
    public string Id { get; set; }
    public string Object { get; set; }
    public long CreatedAt { get; set; }
    public object AssistantId { get; set; }
    public string ThreadId { get; set; }
    public object RunId { get; set; }
    public string Role { get; set; }
    public List<AiContent> Content { get; set; }
    public List<object> FileIds { get; set; }
    public Metadata Metadata { get; set; }
}
if (string.IsNullOrEmpty(_apiKey)) throw new AiClientException("OpenAI ApiKey is not set");
if (string.IsNullOrEmpty(_threadId)) CreateThread();
if (string.IsNullOrEmpty(message)) throw new AiClientException("Message is empty");
_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
_client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
var messageRequest = new AiRequestMessage { Role = "user", Content = message };
public class AiRequestMessage
{
    [JsonProperty("role")]
    public string Role { get; set; }

    [JsonProperty("content")]
    public string Content { get; set; }
}
Once the message object is created, we just need to stringify it, load it into our request, and send it off. There is not much useful information returned from this request; an HTTP 200 response indicates that the message was successfully added:
var json = JsonConvert.SerializeObject(messageRequest);
var content = new StringContent(json, Encoding.UTF8, "application/json");
response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/messages", content);
var threadIdResponse = response.Content.ReadAsStringAsync().Result;
if (!response.IsSuccessStatusCode)
{
    var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
    throw new AiClientException(errorResponse?.Error.Message);
}
var messageRequest = new AiRequestMessage { Role = "user", Content = message };
var messageRequest = new AiRequestMessage { Role = "assistant", Content = message };
_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
_client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
var custAsst = new Assistant { assistant_id = _assistantId };
var json = JsonConvert.SerializeObject(custAsst);
var content = new StringContent(json, Encoding.UTF8, "application/json");
response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/runs", content);
var responseContent = await response.Content.ReadAsStringAsync();
var responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
var runId = responseObj?.Id;
var runStatus = responseObj?.Status;
// If not completed, poll again
if (runId != null)
{
    while (runStatus != null && !FinalStatuses.Contains(runStatus))
    {
        await Task.Delay(1000);
        response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/runs/{runId}");
        responseContent = response.Content.ReadAsStringAsync().Result;
        responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
        runStatus = responseObj?.Status;
    }
}
} // end of the enclosing using block
await GetResponse();
HttpResponseMessage response;
using (_client = new HttpClient())
{
    _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
    _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
    response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/messages");
}

The response that is returned from this request is more complex than we've seen up to this point and requires a bit more handling in order to extract the messages:

var responseContent = response.Content.ReadAsStringAsync().Result;
try
{
    var data = JsonConvert.DeserializeObject<ChatResponse>(responseContent);
    _messages.Clear();
    _messages = data?.Data.Select(x => new AiContent() { Type = x.Role, Text = x.Content[0].Text }).ToList() ?? new List<AiContent>();
}
catch (Exception ex)
{
    throw new AiClientException("Error retrieving messages");
}
public class ChatResponse
{
    public List<Data> Data { get; set; }
    public string FirstId { get; set; }
    public string LastId { get; set; }
    public bool HasMore { get; set; }
}

public class Data
{
    public string Id { get; set; }
    public string Object { get; set; }
    public long CreatedAt { get; set; }
    public string AssistantId { get; set; }
    public string ThreadId { get; set; }
    public string RunId { get; set; }
    public string Role { get; set; }
    public List<AiContent> Content { get; set; }
    public List<object> FileIds { get; set; }
    public Metadata Metadata { get; set; }
}
public class AiContent
{
    public string Type { get; set; }
    public Text Text { get; set; }
}

public class Text
{
    public string Value { get; set; }
    public List<object> Annotations { get; set; }
}
• Add photo generation – Who doesn’t like playing with the photo generator provided by most AIs?
using AiClients.Exceptions;
using AiClients.Interfaces;
using AiClients.Models;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;
using System.Net.Http.Headers;
using System.Text;

namespace CustomGptClient.Services
{
    public class AssistantService : IAiService
    {
        private string _threadId;
        private IConfiguration _config;
        private string _apiKey;
        private string _assistantId;
        private List<AiContent> _messages;
        private string _assistantName;
        private HttpClient _client;
        private List<string> FinalStatuses = new List<string> { "completed", "failed", "cancelled", "expired" };

        public AssistantService(IConfiguration configuration)
        {
            _config = configuration;
            _apiKey = _config.GetSection("OpenAI:ApiKey")?.Value ?? string.Empty;
            _assistantId = _config.GetSection("OpenAI:AssistantId")?.Value ?? string.Empty;
            _messages = new List<AiContent>();
        }

        private string CreateThread()
        {
            if (string.IsNullOrEmpty(_apiKey)) throw new AiClientException("OpenAI ApiKey is not set");
            HttpResponseMessage response;
            using (var _client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                response = _client.PostAsync("https://api.openai.com/v1/threads", null).Result;
            }
            var threadIdResponse = response.Content.ReadAsStringAsync().Result;
            if (!response.IsSuccessStatusCode)
            {
                var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
                throw new AiClientException(errorResponse?.Error.Message);
            }
            var threadIdObj = JsonConvert.DeserializeObject<ThreadResponse>(threadIdResponse);
            _threadId = threadIdObj?.Id ?? string.Empty;
            return _threadId;
        }

        public async Task AddMessage(string message)
        {
            if (string.IsNullOrEmpty(_apiKey)) throw new AiClientException("OpenAI ApiKey is not set");
            if (string.IsNullOrEmpty(_threadId)) CreateThread();
            if (string.IsNullOrEmpty(message)) throw new AiClientException("Message is empty");
            HttpResponseMessage response;
            using (_client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                var messageRequest = new AiRequestMessage { Role = "user", Content = message };
                var json = JsonConvert.SerializeObject(messageRequest);
                var content = new StringContent(json, Encoding.UTF8, "application/json");
                response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/messages", content);
            }
            var threadIdResponse = response.Content.ReadAsStringAsync().Result;
            if (!response.IsSuccessStatusCode)
            {
                var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
                throw new AiClientException(errorResponse?.Error.Message);
            }
            var threadIdObj = JsonConvert.DeserializeObject<ThreadResponse>(threadIdResponse);
            await CreateRun();
        }

        public async Task CreateRun()
        {
            HttpResponseMessage response;
            using (_client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                var custAsst = new Assistant { assistant_id = _assistantId };
                var json = JsonConvert.SerializeObject(custAsst);
                var content = new StringContent(json, Encoding.UTF8, "application/json");
                response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/runs", content);
                var responseContent = await response.Content.ReadAsStringAsync();
                var responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
                var runId = responseObj?.Id;
                var runStatus = responseObj?.Status;
                // If not completed, poll again
                if (runId != null)
                {
                    while (runStatus != null && !FinalStatuses.Contains(runStatus))
                    {
                        await Task.Delay(1000);
                        response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/runs/{runId}");
                        responseContent = response.Content.ReadAsStringAsync().Result;
                        responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
                        runStatus = responseObj?.Status;
                    }
                }
            }
            await GetResponse();
        }

        public async Task GetResponse()
        {
            HttpResponseMessage response;
            using (_client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/messages");
            }
            var responseContent = response.Content.ReadAsStringAsync().Result;
            try
            {
                var data = JsonConvert.DeserializeObject<ChatResponse>(responseContent);
                _messages.Clear();
                _messages = data?.Data.Select(x => new AiContent() { Type = x.Role, Text = x.Content[0].Text }).ToList() ?? new List<AiContent>();
            }
            catch (Exception ex)
            {
                throw new AiClientException("Error retrieving messages");
            }
        }
    }
}
In this blog, let’s dive into creating a secure user login page with Next.js and integrating it with Optimizely’s Commerce APIs. Here are the steps you can follow:
We need two fields for the Next.js login implementation below.
Here are the steps to create a Next.js project using Visual Studio Code (VS Code):
You can make changes to your project files, create new pages, components, styles, etc., and see the changes reflected in real-time in your browser while the development server is running.
            className="w-full border border-gray-300 rounded-md px-3 py-2 mb-4"
          />
          <label htmlFor="password" className="block mb-2">Password:</label>
          <input
            type="password"
            id="password"
            name="password"
            value={formData.password}
            onChange={handleInputChange}
            className="w-full border border-gray-300 rounded-md px-3 py-2 mb-4"
          />
          <button type="submit" className="w-full bg-blue-500 text-white py-2 rounded-md hover:bg-blue-600">
            Login
          </button>
        </form>
      </div>
    </div>
  );
};

export default LoginPage;
To integrate APIs http://sensi.local.com/identity/connect/token and http://sensi.local.com/api/v1/sessions into your LoginPage component’s handleSubmit method, follow these steps:
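A minimal sketch of what that handleSubmit integration could look like is shown here. The endpoint URLs are taken from this post; the password grant_type, the field names, and the session payload are assumptions, not the exact Optimizely contract, so verify them against your Identity Server configuration:

```javascript
// Build the form-encoded body for the token endpoint.
// The grant_type and field names are assumptions; check your Identity Server setup.
function buildTokenRequestBody(username, password) {
  const params = new URLSearchParams();
  params.append("grant_type", "password");
  params.append("username", username);
  params.append("password", password);
  return params.toString();
}

// Sketch of the two-step login flow: fetch a token, then create a session.
async function handleSubmit(formData) {
  const tokenResponse = await fetch("http://sensi.local.com/identity/connect/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: buildTokenRequestBody(formData.username, formData.password),
  });
  if (!tokenResponse.ok) throw new Error("Login failed");
  const { access_token } = await tokenResponse.json();

  // Create the session using the bearer token (payload shape is illustrative).
  return fetch("http://sensi.local.com/api/v1/sessions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${access_token}`,
    },
    body: JSON.stringify({ userName: formData.username, password: formData.password }),
  });
}
```

Wiring this into the form is then just a matter of calling handleSubmit from the form's onSubmit handler with the current formData state.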
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at ,x. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 400.

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://b2b.local.com/api/v1/sessions. (Reason: CORS request did not succeed). Status code: (null).

Solution for CORS Issue:
Final Result of Login Page: Successful login.
In conclusion, this blog detailed the process of creating a user login page using Next.js and integrating it with Optimizely’s Commerce APIs. The steps included setting up an Optimizely Configured Commerce project locally with running APIs, creating a Next.js project in Visual Studio Code, designing the login form component, handling form submission, and integrating Optimizely’s APIs for authentication.
The blog also addressed common issues such as CORS (Cross-Origin Resource Sharing) errors and provided solutions, including configuring CORS settings in Optimizely’s admin panel. After resolving the CORS issue, the final result was a successful login page implementation.
Overall, the blog serves as a comprehensive guide for developers looking to build a secure and functional user login page in a Next.js application integrated with Optimizely’s Commerce APIs. If you run into any issues, please contact me.
Converting CSV to Excel streamlines data manipulation and analysis, bridging the simplicity of CSV’s plain text structure with Excel’s powerful spreadsheet functionalities. This conversion ensures a seamless transition from comma-separated records to organized rows and columns, enhancing data accessibility and interpretation. Whether for data analysis, visualization, or collaboration, Excel’s versatile format accommodates diverse needs, offering features like formulas, charts, and conditional formatting.
We can convert multiple CSV files to Excel files using VBA code.
Sub Csv_to_Excel()
'
' Csv_to_Excel
'
    Dim Pathname As String
    Dim Filename As String
    Dim WOextn As String
    Dim Nam As String

    Pathname = "<Specify the source path>"
    Filename = Dir(Pathname)

    Do While Filename <> ""
        WOextn = Left(Filename, InStr(1, Filename, ".") - 1)
        Nam = Pathname & "" & Filename
        Debug.Print Nam
        Workbooks.Add
        ActiveWorkbook.Queries.Add Name:=WOextn, Formula:= _
            "let" & Chr(13) & "" & Chr(10) & "    Source = Csv.Document(File.Contents(" & Chr(34) & Nam & Chr(34) & "),[Delimiter="","", Columns=25, Encoding=1252, QuoteStyle=QuoteStyle.None])," & Chr(13) & "" & Chr(10) & "    #""Promoted Headers"" = Table.PromoteHeaders(Source, [PromoteAllScalars=true])" & Chr(13) & "" & Chr(10) & "in" & Chr(13) & "" & Chr(10) & "    #""Promoted Headers"""
        ActiveWorkbook.Worksheets.Add
        With ActiveSheet.ListObjects.Add(SourceType:=0, Source:= _
            "OLEDB;Provider=Microsoft.Mashup.OleDb.1;Data Source=$Workbook$;Location=" & WOextn & ";Extended Properties=""""" _
            , Destination:=Range("$A$1")).QueryTable
            .CommandType = xlCmdSql
            .CommandText = Array("SELECT * FROM [" & WOextn & "]")
            .RowNumbers = False
            .FillAdjacentFormulas = False
            .PreserveFormatting = True
            .RefreshOnFileOpen = False
            .BackgroundQuery = True
            .RefreshStyle = xlInsertDeleteCells
            .SavePassword = False
            .SaveData = True
            .AdjustColumnWidth = True
            .RefreshPeriod = 0
            .PreserveColumnInfo = True
            Debug.Print WOextn
            .ListObject.DisplayName = WOextn
            .Refresh BackgroundQuery:=False
        End With
        Application.CommandBars("Queries and Connections").Visible = False
        Range("C8").Select
        ActiveSheet.Name = WOextn
        ActiveWorkbook.SaveAs Filename:="<Specify the target path>" & WOextn & ".xlsx"
        ActiveWorkbook.Close
        Filename = Dir()
    Loop
End Sub
Step 1: Open a new Excel sheet.
Step 2: Go to the Developer tab ribbon option.
Step 3: Select the Visual Basic option in the Developer Tab.
Step 4: Selecting the Visual Basic option opens a new window.
Step 5: In the Project tab, right-click the VBA project, choose the Insert option, and then click Module.
Step 6: The module option will show in the Project tab under the VBA Project, and the right-side code space will open.
Step 7: Paste the VBA code in the code space.
Step 8: Select Run or press F5 to run the code from here manually.
Once the VBA code has been executed, all of the CSV files in the designated folder are converted into Excel files and saved to the target folder.
Imagine a world where we could skip Extract and Load and just do our data Transformations connected directly to sources, no matter what data platform you use.
Salesforce has taken significant steps over the last 2 years with Data Cloud to streamline how you get data in and out of their platform and we’re excited to see other vendors follow their lead. They’ve gone to the next level today by announcing their more comprehensive Zero Copy Partner Network.
By using industry standards, like Apache Iceberg, as the base layer, it means it’s easy for ALL data ecosystems to interoperate with Salesforce. We can finally make progress in achieving the dream of every master data manager, a world where the golden record can be constructed from the actual source of truth directly, without needing to rely on copies.
This is also a massive step forward for our clients as they mature into real DataOps and continue beyond to full site reliability engineering operational patterns for their data estates. Fewer copies of data mean increased pipeline reliability, data trustability, and data velocity.
This new model is especially important for our clients who choose a heterogeneous ecosystem combining tools from many partners (perhaps using Adobe for DXP and marketing automation, and Salesforce for sales and service). These clients struggle to build consistent predictive models that can power them all, and their customers end up getting different personalization from different channels. When we can bring all the data together in the Lakehouse faster and more simply, it becomes possible to build one model that can be consumed by all platforms. This efficiency is critical to the practicality of adopting AI at scale.
Perficient is unique in our depth and history with Data + Intelligence, and our diversity of partners. Salesforce’s “better together” approach is aligned precisely with our normal way of working. If you use Snowflake, RedShift, Synapse, Databricks, or Big Query, we have the right experience to help you make better decisions faster with Salesforce Data Cloud.
On operational projects that involve heavy data processing on a daily basis, there’s a need to monitor DB performance. Over time, the workload grows, causing potential issues. While there are best practices to handle the processing by adopting DBA strategies (indexing, partitioning, collecting STATS, reorganizing tables/indexes, purging data, allocating bandwidth separately for ETL/DWH users, peak-time optimization, effective DEV query re-writes, etc.), it is necessary to be aware of DB performance and consistently monitor it for further actions.
If Admin access is not available to validate performance on Azure, building automations can help monitor the space and take the necessary steps before the DB runs into performance issues or failures.
For DB performance monitoring, an IICS Informatica job can be created with a Data Task that executes a query against the DB (SQL Server) metadata tables to check performance, and emails can be triggered once free space drops below the threshold percentage (e.g., 20%).
The IICS mapping design is shown below (scheduled hourly). Email alerts contain the metric percentage values.
Note: Email alerts will be triggered only if the threshold limit is exceeded.
IICS ETL Design:
IICS ETL Code Details:
Query to check whether used space exceeds 80%. If used space exceeds the threshold limit (the user can set this to a specific value, such as 80%), send an email alert.
If Azure_SQL_Server_Performance_Info.dat has data (populated when CPU/IO processing exceeds 80%), the Decision task is activated and an email alert is triggered.
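The threshold decision itself is simple. Here is a minimal sketch of that logic (the 80% threshold and the sample numbers are illustrative assumptions; in the actual flow, the metrics come from the SQL Server metadata query and the decision is made by the IICS Decision task):

```javascript
// Compute used-space percentage from total size and free space (in MB).
function usedPercent(sizeMb, freeMb) {
  return (100 * (sizeMb - freeMb)) / sizeMb;
}

// Trigger the email alert only when used space exceeds the threshold.
function shouldAlert(sizeMb, freeMb, thresholdPct = 80) {
  return usedPercent(sizeMb, freeMb) > thresholdPct;
}

// Example: a 500 GB database with 80 GB free is 84% used, above the 80% threshold
console.log(shouldAlert(512000, 81920)); // true
```

The same check works for the free-space framing mentioned earlier (free below 20% is equivalent to used above 80%).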
Email Alert:
As everyone knows, maintaining application-level security for passwords, certificates, API keys, and other data is critical. On my project, it was necessary to safeguard the SMTP password.
I wanted to find a way to protect my SMTP password. I then discovered Azure Key Vault and began to put it into practice. Azure Key Vault is a cloud service for storing and accessing secrets, which can be anything sensitive, like passwords, connection strings, etc.
Key Vault Details can be found at the following link: What is Azure Key Vault? | Microsoft Learn
A few steps must be followed to create an Azure Key Vault, grant access permissions to registered applications within Azure, and obtain the secret from an ASP.NET Core application.
First and foremost, an Azure subscription is required. You can use a trial or a paid subscription, as needed, for practice purposes.
We can achieve this using a Key Vault access policy. Otherwise, the default credential policy is available for configuration.
Cloud modernization is the primary driver of digital transformation and impactful business value. Cloud platforms have evolved from core technology to disruptive ecosystems of strategic advantage. Migration and modernization are vital to reach new markets, deliver innovative products, improve resiliency, reduce costs, and improve customer experiences. But it’s easy (and common) to lose sight of your business mission as you navigate complicated technological challenges.
Sometimes, visualization can help. We’ve created our Enterprise Cloud Transit Map as a simple blueprint for navigating complexities while staying focused on the win themes that really matter: creating competitive advantages, adding customer value, building a strong operational core, and growing your business:
The Tracks of Transformation: I Think I Can
Our map consists of five distinct tracks, each with its own set of ‘stations’ or focus areas, designed to guide your voyage towards the real value prop outcomes of cloud modernization:
Strategy & Architecture: This is where your journey begins. Aligning organizational goals with industry insights and crafting fit-for-purpose architecture lays the foundation. Business Alignment, Product Strategy, Scalability, and Interoperability are key stops for this line, as well as Centers of Excellence ensuring adoption success and leading to innovation and differentiated products.
Platform Foundation: The next path is getting your infrastructure, platform, connectivity, and governance set up for success across multicloud and hybrid cloud architectures, modernizing legacy IT, solving for resiliency issues, and setting the table for sustainability and cost optimization wins down the line.
Migrate & Modernize: The heart of transformation lies in scalability, cost optimization, and interoperability. This track is a deep dive into streamlining deployments, embracing cloud-native capabilities, and modernizing applications to deliver differentiated products more efficiently.
Data Insight: Data is the engine driving intelligent decision-making. This track emphasizes modernizing your data platform, ensuring regulatory compliance, and unlocking (and trusting) the potential of AI, setting the stage for insightful, data-driven decisions and truly impactful applications of emerging tech.
Cloud Operating Model: The maturity of your journey, focusing on developing team skills, optimizing costs, enabling new business objectives, and establishing a modern operational model. Success here aligns your cloud model to your existing organization while embracing transformative tools, technologies, and processes, with effective resource management and sustainable policies & governance.
On the Path to Business Impact
For IT and business leaders committed to modernizing their organizations, our Enterprise Cloud Transit Map is more than just a navigational guide; it’s a sanity check on delivering real-world business impacts and outcomes. Understanding the key themes of the map helps you set a course to resiliency, performance, profitable growth, and competitive innovation.
All Aboard!
Nowadays, everyone speaks of AI. It is not a subject for IT people only; those in other fields, such as vendors, taxi drivers, journalists, scientists, teachers, students, and even politicians, mention it in their speeches. Most of them use AI to generate content, for example “news articles, academic papers, social media posts, photos, and even chatbot chats” (Cambridge University Press, 2023).
However, we have to say that they do not know how AI really works or where the data comes from, and they may not even be aware of the regulations that affect their daily lives, especially data privacy, intellectual property, and others.
Based on that, the following topics are going to be covered in this article:
It is common to hear about ChatGPT these days, which “attracted millions of users quickly after its launch” (Cambridge University Press, 2023), and how professional associations have applied it to improve operations and decision-making processes. However, some companies are not aware of how this can impact their business in the short term; there are legal issues that should be evaluated immediately. These problematic issues are the following: (Tenenbaum, 2023)
If you, as an employee, use AI for your daily tasks, this implies responsibilities that we must talk about, because “the impact of AI is therefore truly enormous, and it has given rise to numerous legal and ethical issues that need to be explored, especially from a copyright perspective” (Cambridge University Press, 2023).
You may hear that some software engineers have started using AI tools such as ChatGPT (the most popular) and GitHub Copilot, which are good in a way because you can “spend less time creating boilerplate and repetitive code patterns, and more time on what matters: building great software” (GitHub, Inc, 2023). But the project and its code are the property of the clients and are considered confidential information, so you must be aware that those copilots need code to be trained on, and each “shares recommendations based on the project’s context and style conventions” (GitHub, Inc, 2023). If you use those AI tools, you should be careful with the clauses you signed to comply with the contract.
On the other hand, how do companies and employees know that the training data used by the AI tool is free from any copyright violations? (Lucchi, 2023) If you look at the situation only as an individual, you can implement functionality in less time than a person without the tools, but the company must “ensure that they do not infringe any third-party copyright, patent, or trademark rights” (Tenenbaum, 2023) when the product is released.
In conclusion, several legal issues can be raised because of AI tools. Although AI regulations and laws are still in the process of being defined, using AI tools irresponsibly can cause more problems than solutions for your company, and you as an employee can be fired because of the confidentiality clauses in your contract. Companies are starting to implement AI tools, but they must define policies to use them responsibly and in compliance with the law, and employees must verify with the legal and IT departments that the AI tools being used do not violate company compliance rules.
Finally, I would like to acknowledge the valuable work done by the Microsoft .NET Hub in the Capabilities Development dimension within our company, which has been instrumental in addressing key issues such as the use of AI in product development. We invite everyone to explore more about this fascinating topic and discover practical tips on our “AI Product Powered People” blog. Join us on this journey into the future of AI-powered technology and product development!
References
Microsoft’s Power Apps is a flexible platform that empowers users to effortlessly construct custom applications. As applications become more complex, the need for a well-organized and efficient deployment process becomes paramount. In this blog post, we’ll delve into the step-by-step process of implementing deployment pipelines in Power Apps, leveraging the Power Apps Deployment Pipeline App—an indispensable tool designed to streamline and optimize application deployment within the framework of Application Lifecycle Management (ALM).
The Power Apps Deployment Pipeline serves a pivotal role in this process. Acting as a systematic conduit, it ensures the seamless transition of applications across different stages, from development to testing and production. The pipeline enables version control, automated testing, collaboration facilitation, and efficient environment management by adhering to ALM principles. This strategic and controlled workflow enhances your applications’ reliability and promotes collaboration and agility across your development and IT teams within the broader context of the application’s lifecycle. Therefore, the Power Apps Deployment Pipeline plays a central role in orchestrating a well-coordinated and efficient deployment strategy for Power Apps applications, aligning seamlessly with ALM practices.
Before we dive into the process, make sure you have the following prerequisites:
Implementing manual deployment pipelines for Power Apps using the Power Apps Deployment Pipeline App streamlines the deployment process and ensures consistency across environments. By following these step-by-step instructions, you’ll establish an efficient pipeline that reduces errors and accelerates the delivery of your Power Apps solutions. Embrace the power of deployment within Power Apps and take your application deployment to the next level. Explore more about pipeline.
Organizations want to leverage the productivity enhancements Microsoft Copilot for Microsoft 365 may enable, but want to avoid unintentional over-exposure of organizational information while users are accessing these Copilot experiences. Our Microsoft team is fielding many questions from customers about how to secure and govern Microsoft Copilot for Microsoft 365. These organizations want to ensure maximum productivity benefit while minimizing their risk. This article will describe the key considerations an organization should address.
First, a quick point of clarification. Microsoft has released several instances of Copilot for use in different contexts. At this writing, Copilot instances include Microsoft Copilot (integrated in Bing and the Edge browser), Microsoft Security Copilot, GitHub Copilot, and more. In this article I address Microsoft Copilot for Microsoft 365, an instance of the Copilot technologies integrated with Microsoft 365 tenants and applications via Microsoft Graph. Microsoft Copilot for Microsoft 365 requires add-on licensing on top of other Microsoft 365 licensing.
Microsoft Copilot for Microsoft 365 is also extensible to non-Microsoft 365 sources of data. Out of the box, “web grounding” is enabled at the tenant level and disabled at the user level (user can enable). Web grounding allows Copilot to include web-based searches and the resulting information to be included in responses. Additionally, via Copilot Studio, organizations can customize Microsoft 365 based experiences and can extend Copilot responses to include non-Microsoft 365 sources of information.
Here is the primary consideration your organization must understand and act upon in order to minimize unintentional over-exposure of your information via Microsoft Copilot for Microsoft 365: By design, Microsoft Copilot for Microsoft 365 is accessing and including the information that your users already have access to in your tenant and within the bounds of existing Microsoft commitments. Microsoft Copilot for Microsoft 365 is providing an additional interface for exposing this information, and is doing some of the heavy lifting for your users in finding, compiling, and contextualizing that information. But, ultimately, it is exposing information that a user could have accessed on their own using their existing permissions, and given sufficient skill in using Microsoft 365 tools and applications for searching, querying, or accessing that information. In this article I am not addressing any potential failure of Copilot to follow the published design parameters. Monitoring and reporting on usage are advised to address this (unlikely) possibility.
Microsoft Copilot for Microsoft 365 is the latest, and possibly most advanced, tool for surfacing Microsoft 365 data to users. In a sense, Microsoft Copilot is the next iteration of Microsoft Search and Microsoft Delve. Each of these tools has some administrative controls that allow administrators to limit the information that is returned to a casual user of the tools. However, using them in this way is somewhat like patching over a structural problem with a layer of drywall mud. Your primary approach should be securing the underlying access controls and membership of your Microsoft 365 assets. These assets include SharePoint Online sites, OneDrive for Business sites, Microsoft 365 Groups and Teams, Exchange Online mailboxes, and other Microsoft 365 assets. Microsoft Purview sensitivity labels and their access controls can also be part of your solution to securing information and restricting access to the appropriate users.
The bottom line here is that there is no Microsoft Copilot for Microsoft 365 quick fix for information over-exposure. Organizations who find that their existing Microsoft 365 usage and architecture has made information too widely available need to do the heavy lifting of properly adjusting permissions and memberships of the underlying assets, adjusting various Microsoft 365 workload settings and policies, and considering a well-planned crawl/walk/run approach for deployment of Microsoft Purview controls such as Sensitivity Labels (and others) to address additional scenarios. Your organization should address information access controls at the foundational level first. Once the foundation is secure then optimize controls around the specific access methods such as Microsoft Copilot.
The key considerations your organization should address when considering a Microsoft Copilot for Microsoft 365 deployment include:
The list above does not include all considerations for securing your Microsoft 365 tenant. For example, we did not address conditional access or multi-factor authentication scenarios. However, the above considerations are most directly related to Microsoft Copilot consumption.
Microsoft 365 may add additional Copilot-specific configuration and governance controls in the future. However, the best approach is to ensure that your underlying Microsoft 365 assets are properly permissioned and configured. As new Microsoft 365 features and controls are released, these actions will continue to pay dividends.
Our Perficient Microsoft team has extensive experience helping organizations like yours to analyze current state, identify gaps, and take action to secure and govern their Microsoft tenant. This work directly impacts the Microsoft Copilot for Microsoft 365 experience. Our engagements range from road-mapping, to foundational security and governance implementations, to extended migration and/or enablement and support offerings and we are able to customize these engagements to your particular areas of concern. We love partnering with customers to help them achieve the best possible Microsoft 365 service adoption and governance outcomes.
In today’s agile software development landscape, teams rely heavily on robust workflows called “pipelines” to automate tasks and enhance productivity. For DevOps teams historically familiar with Microsoft’s Azure DevOps CI/CD automation platform, one of the most powerful features the platform rolled out, one that allowed teams to drastically speed up the pipeline development process, was “YAML Templates“.
Templates in Azure DevOps are reusable configuration files written in YAML, allowing us to enforce best practices and accelerate build and release automation for large groups of teams. These templates facilitate faster onboarding, ease of maintenance through centralized updates and version control, and enforcement of built-in security measures and compliance standards.
For many of my clients who are not building in the Azure ecosystem, however, a popular question comes up: how do we accomplish this same template functionality in other toolchains? One platform that has been growing rapidly in popularity in the DevOps automation space is GitHub Actions. GitHub Actions distinguishes itself with seamless integration into the GitHub ecosystem, providing an intuitive CI/CD solution via YAML configurations within repositories. Its strength lies in a user-friendly approach, leveraging a rich marketplace of prebuilt actions and multiple built-in code security features.
In today’s blog, I am going to dive in to show how we can implement the same templating functionality using GitHub Actions so that we can share common code, best practices, and enforced security across multiple teams to provide a structured and versionable approach to define pipeline configurations, fostering reusability, consistency, and collaborative development practices for build and release automation across any stack.
GitHub offers a structured environment for collaboration within a company. The top-level structure for GitHub is an “Enterprise Account”. Inside of an account, a company can create multiple Organizations: a group construct that companies can use to arrange users and teams so that they can collaborate across many projects at once. Organizations offer sophisticated security and administrative features that allow companies to govern their teams and business units. Leveraging this structure, administrators can manage access controls to repositories, including regulating access to Starter Workflows. By configuring repository settings and access permissions, companies can ensure that these predefined workflows are accessible only to members within their Organization.
Typically, enterprises will have a single business unit or department that is responsible for their cloud environments (whether this team actually builds all cloud environments or is simply in charge of broader cloud foundations, security, and governance varies depending on the size and complexity of the company and their infrastructure requirements). Setting up a single Organization for this business unit makes sense as it allows all the teams in that unit to share code and control the cloud best practices from a single location that other business units or organizations can reference.
To simulate this I am going to set up a new organization for a “Cloud OPs” department:
Once I have my organization is set up, I could customize policies, security settings, or other settings to protect my department’s resources. I won’t dive into this, but below are a couple of good GitHub documentation articles that go through some common settings, roles, and other features your company should configure when you have created a new GitHub org.
https://docs.github.com/en/organizations/managing-user-access-to-your-organizations-repositories
The next step after creating the org is to create a repository for hosting our templates:
Now that we have a repository, we are ready to start writing our YAML code!
With the setup of GitHub complete, let’s dive in and start converting an Azure DevOps template into a GitHub Starter Workflow so that we can store it in our repo.
Being a technical director in a cloud consulting team, one of the most common use-cases I have for using pipeline templates is for sharing common Terraform automation.
By using Azure DevOps Pipeline templates to bundle common, repeatable Terraform automation steps, I am able to provide a standardized, efficient, and flexible approach for creating reusable infrastructure deployment pipelines across all the teams within my company.
Below is an example of a couple of common Terraform templates my team uses:
Any team automating infrastructure with Terraform is going to write automation to run these three processes. So it is logical to write them once in a template fashion so that future teams can extend their pipelines from these base templates to expedite their development process. Here are some very basic versions of what these templates can look like written for Azure DevOps pipelines:
```yaml
###VALIDATE TEMPLATE###
parameters:
- name: terraformPath
  type: string

stages:
- stage: validate
  displayName: "Terraform validate"
  jobs:
  - job: validate
    displayName: "Terraform validate"
    variables:
    - name: terraformWorkingDirectory
      value: $(System.DefaultWorkingDirectory)/${{ parameters.terraformPath }}
    steps:
    - checkout: self
    - task: PowerShell@2
      displayName: "Terraform init"
      inputs:
        targetType: 'inline'
        pwsh: true
        workingDirectory: $(terraformWorkingDirectory)
        script: |
          terraform init -backend=false
    - task: PowerShell@2
      displayName: "Terraform fmt"
      inputs:
        targetType: 'inline'
        pwsh: true
        workingDirectory: $(terraformWorkingDirectory)
        script: |
          terraform fmt -check -write=false -recursive
    - task: PowerShell@2
      displayName: "Terraform validate"
      inputs:
        targetType: 'inline'
        pwsh: true
        workingDirectory: $(terraformWorkingDirectory)
        script: |
          terraform validate
```
```yaml
###PLAN TEMPLATE###
parameters:
- name: condition
  type: string
- name: dependsOnStage
  type: string
- name: environment
  type: string
- name: serviceConnection
  type: string
- name: terraformPath
  type: string
- name: terraformVarFile
  type: string

stages:
- stage: plan
  displayName: "Terraform plan: ${{ parameters.environment }}"
  condition: and(succeeded(), ${{ parameters.condition }})
  dependsOn: ${{ parameters.dependsOnStage }}
  jobs:
  - job: plan
    displayName: "Terraform plan"
    steps:
    - checkout: self
    - task: AzureCLI@2
      displayName: "Terraform init"
      inputs:
        scriptType: bash
        scriptLocation: inlineScript
        azureSubscription: "${{ parameters.serviceConnection }}"
        addSpnToEnvironment: true
        workingDirectory: ${{ parameters.terraformPath }}
        inlineScript: |
          export ARM_CLIENT_ID=$servicePrincipalId
          export ARM_CLIENT_SECRET=$servicePrincipalKey
          export ARM_SUBSCRIPTION_ID=$DEPLOYMENT_SUBSCRIPTION_ID
          export ARM_TENANT_ID=$tenantId
          terraform init \
            -backend-config="subscription_id=$TFSTATE_SUBSCRIPTION_ID" \
            -backend-config="resource_group_name=$TFSTATE_RESOURCE_GROUP_NAME" \
            -backend-config="storage_account_name=$TFSTATE_STORAGE_ACCOUNT_NAME" \
            -backend-config="container_name=$TFSTATE_CONTAINER_NAME" \
            -backend-config="key=$TFSTATE_KEY"
      env:
        DEPLOYMENT_SUBSCRIPTION_ID: $(${{ parameters.environment }}DeploymentSubscriptionID)
        TFSTATE_CONTAINER_NAME: $(TfstateContainerName)
        TFSTATE_KEY: $(TfstateKey)
        TFSTATE_RESOURCE_GROUP_NAME: $(TfstateResourceGroupName)
        TFSTATE_STORAGE_ACCOUNT_NAME: $(TfstateStorageAccountName)
        TFSTATE_SUBSCRIPTION_ID: $(TfstateSubscriptionID)
    - task: AzureCLI@2
      displayName: "Terraform plan"
      inputs:
        scriptType: bash
        scriptLocation: inlineScript
        azureSubscription: "${{ parameters.serviceConnection }}"
        addSpnToEnvironment: true
        workingDirectory: ${{ parameters.terraformPath }}
        inlineScript: |
          export ARM_CLIENT_ID=$servicePrincipalId
          export ARM_CLIENT_SECRET=$servicePrincipalKey
          export ARM_SUBSCRIPTION_ID=$DEPLOYMENT_SUBSCRIPTION_ID
          export ARM_TENANT_ID=$tenantId
          terraform workspace select $(${{ parameters.environment }}TfWorkspaceName)
          terraform plan -var-file '${{ parameters.terraformVarFile }}'
      env:
        DEPLOYMENT_SUBSCRIPTION_ID: $(${{ parameters.environment }}DeploymentSubscriptionID)
```
```yaml
###APPLY TEMPLATE###
parameters:
- name: condition
  type: string
- name: dependsOnStage
  type: string
- name: environment
  type: string
- name: serviceConnection
  type: string
- name: terraformPath
  type: string
- name: terraformVarFile
  type: string

stages:
- stage: apply
  displayName: "Terraform apply: ${{ parameters.environment }}"
  condition: and(succeeded(), ${{ parameters.condition }})
  dependsOn: ${{ parameters.dependsOnStage }}
  jobs:
  - deployment: apply
    displayName: "Terraform apply"
    environment: "fusion-terraform-${{ parameters.environment }}"
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self
          - task: AzureCLI@2
            displayName: "Terraform init"
            inputs:
              scriptType: bash
              scriptLocation: inlineScript
              azureSubscription: "${{ parameters.serviceConnection }}"
              addSpnToEnvironment: true
              workingDirectory: ${{ parameters.terraformPath }}
              inlineScript: |
                export ARM_CLIENT_ID=$servicePrincipalId
                export ARM_CLIENT_SECRET=$servicePrincipalKey
                export ARM_SUBSCRIPTION_ID=$DEPLOYMENT_SUBSCRIPTION_ID
                export ARM_TENANT_ID=$tenantId
                terraform init \
                  -backend-config="subscription_id=$TFSTATE_SUBSCRIPTION_ID" \
                  -backend-config="resource_group_name=$TFSTATE_RESOURCE_GROUP_NAME" \
                  -backend-config="storage_account_name=$TFSTATE_STORAGE_ACCOUNT_NAME" \
                  -backend-config="container_name=$TFSTATE_CONTAINER_NAME" \
                  -backend-config="key=$TFSTATE_KEY"
            env:
              DEPLOYMENT_SUBSCRIPTION_ID: $(${{ parameters.environment }}DeploymentSubscriptionID)
              TFSTATE_CONTAINER_NAME: $(TfstateContainerName)
              TFSTATE_KEY: $(TfstateKey)
              TFSTATE_RESOURCE_GROUP_NAME: $(TfstateResourceGroupName)
              TFSTATE_STORAGE_ACCOUNT_NAME: $(TfstateStorageAccountName)
              TFSTATE_SUBSCRIPTION_ID: $(TfstateSubscriptionID)
          - task: AzureCLI@2
            displayName: "Terraform apply"
            inputs:
              scriptType: pscore
              scriptLocation: inlineScript
              azureSubscription: "${{ parameters.serviceConnection }}"
              addSpnToEnvironment: true
              workingDirectory: ${{ parameters.terraformPath }}
              inlineScript: |
                $env:ARM_CLIENT_ID=$env:servicePrincipalId
                $env:ARM_CLIENT_SECRET=$env:servicePrincipalKey
                $env:ARM_SUBSCRIPTION_ID=$env:DEPLOYMENT_SUBSCRIPTION_ID
                $env:ARM_TENANT_ID=$env:tenantId
                terraform workspace select $(${{ parameters.environment }}TfWorkspaceName)
                terraform apply -var-file '${{ parameters.terraformVarFile }}' -auto-approve
            env:
              DEPLOYMENT_SUBSCRIPTION_ID: $(${{ parameters.environment }}DeploymentSubscriptionID)
```
The three stages of Terraform automation are bundled together into sets of repeatable steps. Then, when any other teams want to use these templates in their own pipelines, they can simply call them using a “Resources” link in their code:
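As a quick sketch of what that consumption looks like, a consuming Azure DevOps pipeline can declare the template repository as a repository resource and extend from the shared stages. Note that the project/repo name, tag, file names, and parameter values below are hypothetical placeholders; substitute your own template repository and values:

```yaml
# Hypothetical consumer pipeline referencing shared templates from another repo.
resources:
  repositories:
  - repository: templates               # local alias used in the template references below
    type: git
    name: CloudOps/pipeline-templates   # <project>/<repo> that hosts the shared templates (placeholder)
    ref: refs/tags/v1.0                 # pin to a tag so template updates are versioned

stages:
# Reuse the shared "validate" stage as-is
- template: terraform-validate.yml@templates
  parameters:
    terraformPath: infrastructure

# Reuse the shared "plan" stage, passing in environment-specific parameters
- template: terraform-plan.yml@templates
  parameters:
    condition: "eq(variables['Build.SourceBranch'], 'refs/heads/main')"
    dependsOnStage: validate
    environment: dev
    serviceConnection: my-azure-service-connection
    terraformPath: infrastructure
    terraformVarFile: dev.tfvars
```

The key design point is that the consumer only supplies parameters; the repeatable Terraform steps themselves live, and are updated, in one central place.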
GitHub Actions offers the concept of “Starter Workflows” as its version of the “Pipeline Templates” offered by Azure DevOps. Starter Workflows are preconfigured templates designed to expedite the setup of automated workflows within repositories, providing a foundation with default configurations and predefined actions to streamline the creation of CI/CD pipelines in GitHub Actions.
The good news is that Azure DevOps YAML and GitHub Actions YAML use very similar syntax, so the conversion process from an Azure DevOps Pipeline Template to a Starter Workflow is not a major effort. There are some features that are not yet supported in GitHub Actions, so I have to take that into account as I start my conversion. See this document for a breakdown of the feature differences between the two platforms:
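To make the similarity concrete, here is a minimal, illustrative side-by-side of the same inline script step in both dialects (step names and script contents are arbitrary, not taken from the templates above):

```yaml
# Azure DevOps Pipelines: an inline PowerShell step
steps:
- task: PowerShell@2
  displayName: "Say hello"
  inputs:
    targetType: 'inline'
    pwsh: true
    script: |
      Write-Host "hello"
---
# GitHub Actions: the equivalent step
steps:
- name: "Say hello"
  shell: pwsh
  run: |
    Write-Host "hello"
```

The structural concepts map almost one-to-one: `displayName` becomes `name`, and task inputs collapse into `shell` and `run` keys on the step itself.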
To demonstrate the concept, I am going to convert the “Validate” template that I showed above into a GitHub Actions Starter Workflow and we can compare it to the original Azure DevOps version:
Here is a list of what we had to change:
| Platforms | Shell | Description |
|---|---|---|
| All (Windows + Linux) | `python` | Executes the python command |
| All (Windows + Linux) | `pwsh` | Default shell used on Windows; must be specified on other runner environment types |
| All (Windows + Linux) | `bash` | Default shell used on non-Windows platforms; must be specified on other runner environment types |
| Linux / macOS | `sh` | The fallback behavior for non-Windows platforms if no shell is provided and bash is not found in the path |
| Windows | `cmd` | GitHub appends the extension .cmd to your script name |
| Windows | `PowerShell` | The PowerShell Desktop. GitHub appends the extension .ps1 to your script name |
When we pass in the shell parameter, we are telling the runner that will be executing our YAML scripts which command-line tool it should use, ensuring that we don’t run into any unexpected behavior.
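For example, a minimal workflow fragment that pins the shell per step might look like the following (the job name and script contents are illustrative only):

```yaml
jobs:
  shell-demo:
    runs-on: ubuntu-latest
    steps:
    - name: Run under bash
      shell: bash   # default on Linux runners, but being explicit avoids surprises
      run: echo "running under bash"
    - name: Run under PowerShell Core
      shell: pwsh   # must be specified explicitly on non-Windows runners
      run: Write-Host "running under pwsh"
```

Pinning the shell this way means the same step behaves identically even if the template is later run on a different runner image.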
This encrypts the variable with one-way encryption so that it will never be visible. These Library Groups can then be added into your pipeline as environment variables using the ADO variable syntax:
In GitHub, the portal offers a similar secrets option: https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#about-secrets. Go to the Settings menu for the repo where your top-level pipeline will live (note: this is usually the “consumer” of the pipeline templates, not the repo where the templates live). From there, you should see a sub-menu where secrets can be set!
There are several types of secrets to choose from. Typically Environment secrets are the easiest to work with because these secrets will get injected into your Runner runtime as environment variables that you can reference from your YAML code like so:
```yaml
steps:
- shell: bash
  env:
    SUPER_SECRET: ${{ secrets.SuperSecret }}
  run: |
    example-command "$SUPER_SECRET"
```
With that change, our conversion is largely complete! Let’s take a look at final result below.
```yaml
name: terraform-validate-base-template

# The GitHub Actions version of parameters are called "inputs".
# For a workflow that other pipelines call as a template, inputs are
# declared under the workflow_call trigger.
on:
  workflow_call:
    inputs:
      terraformPath: # This is the name of the input parameter - other pipelines planning
                     # to use this template must pass in a value for this parameter
        description: "The path to the terraform directory where our first Terraform code file (usually main.tf) will be stored."
        required: true
        type: string # workflow_call inputs support the types: string, number, and boolean

jobs:
  Terraform_Validate: # This is the name of the job - this will show up in the UI of GitHub
                      # when we instantiate a workflow using this template
    runs-on: ubuntu-latest # We have to specify the runtime for the actions workflow to
                           # ensure that the scripts we plan to use below will work as expected.

    # GitHub Actions uses "Environment Variables" instead of "Variables" like Azure DevOps.
    # The syntax is slightly different but the concept is the same...
    env:
      # Here we combine a default context value with one of our input parameters.
      # The list of GitHub-defined default environment variables is located in this doc:
      # https://docs.github.com/en/github-ae@latest/actions/learn-github-actions/variables#default-environment-variables
      terraformWorkingDirectory: ${{ github.workspace }}/${{ inputs.terraformPath }}

    steps:
    - uses: actions/checkout@v4 # This is a pre-built action provided by GitHub.
                                # Source is located here: https://github.com/actions/checkout
    - name: "Terraform init"
      shell: pwsh
      working-directory: ${{ env.terraformWorkingDirectory }}
      run: |
        terraform init -backend=false
    - name: "Terraform fmt"
      shell: pwsh
      working-directory: ${{ env.terraformWorkingDirectory }}
      run: |
        terraform fmt -check -write=false -recursive
    - name: "Terraform validate"
      shell: pwsh
      working-directory: ${{ env.terraformWorkingDirectory }}
      run: |
        terraform validate
```
The next step is to start sharing these starter workflows with other teams and pipelines. In my next blog, I will show how we can share these templates for different organizations to consume!
Stay tuned and keep-on “Dev-Ops-ing”!
In the world of Generative Artificial Intelligence (AI), a new era of large language models has emerged with remarkable capabilities. ChatGPT, Gemini, Bard, and Copilot have made an impact on the way we interact with mobile devices and web technologies. We will perform a comparative analysis to highlight the capabilities of each tool.
| | ChatGPT | Gemini | Bard | Copilot |
|---|---|---|---|---|
Training Data | Web | Web | Web | Web |
Accuracy | 85% | 85% | 70% | 80% |
Recall | 85% | 95% | 75% | 82% |
Precision | 89% | 90% | 75% | 90% |
F1 Score | 91% | 92% | 75% | 84% |
Multilingual | Yes | Yes | Yes | Yes |
Inputs | GPT-3.5: Text Only GPT-4.0: Text and Images | Text, Images and Google Drive | Text and Images | Text and Images |
Real Time Data | GPT-3.5: No GPT-4.0: Yes | Yes | Yes | Yes |
Mobile SDK | https://github.com/skydoves/chatgpt-android | API Only | https://www.gemini.com/mobile | API Only |
Cost | GPT-3.5 GPT-4.0 | Gemini Pro Gemini Pro Vision | Undisclosed | Undisclosed |
TP – True Positive
FP – False Positive
TN – True Negative
FN – False Negative
Accuracy = (TP +TN) / (TP + FP + TN + FN)
Recall = TP / (TP + FN)
Precision = TP / (TP + FP)
F1 Score = 2 * (Precision * Recall) / (Precision + Recall)
Our sample data set consists of 100 queries against Gemini AI. Applying the above formulas yields the following scores:
Accuracy: (85 + 0) / 100 = 85%

Recall: 85 / (85 + 5) = 94.44%

Precision: 85 / (85 + 10) = 89.47%

F1-Score: 2 * (0.894 * 0.944) / (0.894 + 0.944) = 91.8%
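The worked example above implies the confusion-matrix counts TP = 85, FP = 10, TN = 0, FN = 5 over the 100 queries. As a quick sketch (not part of the original evaluation), the same metrics can be computed directly from those counts:

```python
# Confusion-matrix counts implied by the worked example above.
tp, fp, tn, fn = 85, 10, 0, 5

# The four metric formulas, exactly as defined earlier in the article.
accuracy = (tp + tn) / (tp + fp + tn + fn)
recall = tp / (tp + fn)
precision = tp / (tp + fp)
f1 = 2 * (precision * recall) / (precision + recall)

print(f"Accuracy:  {accuracy:.2%}")   # 85.00%
print(f"Recall:    {recall:.2%}")     # 94.44%
print(f"Precision: {precision:.2%}")  # 89.47%
print(f"F1 score:  {f1:.2%}")         # 91.89% (91.8% above comes from rounding the inputs first)
```

Computing F1 from the unrounded precision and recall gives 91.89%; the 91.8% figure above results from rounding precision and recall to three decimal places before combining them.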
I recommend Gemini based on its accuracy and consistency. Its ease of integration into the iOS and Android platforms and its performance stand out among its competitors. We will illustrate how easy it is to integrate Gemini in 10 easy steps.
And you’re up and running with Generative AI in your Android app!
I typed in “Write a hello world code in java” and Gemini responded with a code snippet. You can try out various queries to personalize your newly integrated Generative AI application to your needs.
Alternatively, you can simply download the sample app from GitHub and add the API key to local.properties to run the app.
It’s essential to recognize the remarkable capabilities of Generative AI tools on the market. Comparison of various AI metrics and architecture can give insight into performance, limitations and suitability for desired tasks. As the AI landscape continues to grow and evolve, we can anticipate even more groundbreaking innovations from AI tools. These innovations will disrupt and transform industries even further as time goes on.
For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!