Baskar Rao Dandlamudi, Author at Perficient Blogs
https://blogs.perficient.com/author/brdandlamudi/

Understanding Cloud Native and What’s In It for Your Organization
https://blogs.perficient.com/2022/09/21/understanding-cloud-native-and-whats-in-it-for-your-organization/
Wed, 21 Sep 2022

“Cloud Native” is a buzzword we hear everywhere in the major digital transformation projects currently underway. But what does Cloud Native actually mean? Is it worth pursuing for your organization? These questions pop up whenever we think about transforming our workloads in a Cloud Native way. So let us dive in.

What is Cloud Native?

According to Cloud Native Computing Foundation:

“Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”

In a nutshell, ‘Cloud Native’ refers to the processes and techniques for developing applications that are resilient, fault-tolerant, and change-tolerant.

Cornelia Davis sums it up nicely in her book Cloud Native Patterns when she says:

“Cloud is about where we run; cloud-native is more about how we run.”

What is in it for organizations?

There is an increased need for organizations today to be “digital first.” The pandemic has made organizations see the benefits of digital technology, and the investments made in the digital space are here to stay.

The touchpoints that customers use to interact with their data are different from before. Organizations are responsible for providing customers with a seamless experience across a wide variety of devices, whether mobile phones, digital kiosks, desktop websites, tablets, or automated call centers, you name it!

Traditional applications are not built to manage this influx of traffic. This is where the Cloud Native way of building software comes into the picture. To manage this level of traffic, your applications should be refactored so that they can be scaled automatically based on customer traffic.

Embracing this digital-first imperative leads to better customer experience, which in turn improves business outcomes. As the saying goes, “the customer is king”…

Infrastructure as Code (IaC) in Cloud Computing

A large component of those techniques involves managing and provisioning infrastructure through code instead of through manual processes. The code acts as a ‘template’ that makes deploying new applications much faster.

Having infrastructure provisioned in an automated way through code (IaC) lowers cost and effort by reducing the manual intervention required every time a new application is deployed. Because the templates are version-controlled, IaC also allows the same application infrastructure to be reproduced consistently across different regions.
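As a hypothetical illustration (the resource names, SKU, and API versions below are made up for the example, not taken from the post), a Bicep template describes an App Service plan and web app declaratively; the same version-controlled file can be deployed to any region just by changing a parameter:

```bicep
// Hypothetical Bicep template: a Linux App Service plan and a web app.
// Redeploying to another region only requires a different location value.
param location string = resourceGroup().location
param appName string = 'contoso-web-app'   // example name

resource plan 'Microsoft.Web/serverfarms@2022-03-01' = {
  name: '${appName}-plan'
  location: location
  sku: {
    name: 'P1v2'
    capacity: 1
  }
}

resource site 'Microsoft.Web/sites@2022-03-01' = {
  name: appName
  location: location
  properties: {
    serverFarmId: plan.id
  }
}
```

Because the file itself is the source of truth, a pull request against it doubles as a review of the infrastructure change.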

Containerizing Applications – How Kubernetes Changed the Game

Containers package application code together with the operating system dependencies and libraries it needs, so it can run in any environment. Because containers are lightweight, they can be scaled much faster than regular virtual machines.

Kubernetes is an open-source platform that helps developers deploy, manage, and schedule containerized applications. Initially backed by Google, it became the preferred container orchestration platform, and later the preferred platform for cloud-agnostic development, since applications can be deployed to any cloud platform such as Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP).
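As a minimal sketch (the names, image, and replica count are placeholders, not from the post), a Kubernetes Deployment manifest declares how many container replicas should run; any conformant cluster, whether AKS, EKS, or GKE, schedules them and replaces failed pods automatically:

```yaml
# Hypothetical Deployment: keeps 3 replicas of a containerized web app running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api            # example name
spec:
  replicas: 3                 # Kubernetes reschedules pods on node failure to hold this count
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

The same manifest applies unchanged across cloud providers, which is what makes the cloud-agnostic claim concrete.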

This led to the creation of the Cloud Native Computing Foundation, which managed the development of Kubernetes with contributions from member organizations.

But using containers and Kubernetes alone does not make your applications Cloud Native. What matters is building your software to be resilient, fault-tolerant, and change-tolerant, consistently.

The Cloud Native Landscape

The Cloud Native methodology is not limited to application development. Cloud Native principles apply to databases, caches, network software, infrastructure, observability, monitoring, security, and more. It is important to understand that the Cloud Native landscape is vast and always evolving.

For example, when Kubernetes was all the buzz, organizations had to skill up their team members on Kubernetes from both developer and administrator perspectives. Platform teams were created to manage and operate Kubernetes clusters on-premises and even in the cloud. It was a heavy investment.

Now organizations no longer have to make that same investment. Some of the management burden that used to fall on team members can be shifted to platforms like VMware Tanzu or OpenShift, or to managed Kubernetes services such as Azure Kubernetes Service, Amazon EKS, or Google Kubernetes Engine.

Organizations today also have fast-lane access to Cloud Native by utilizing Serverless offerings, channeling their focus to developing applications that solve business problems. Serverless offerings like Azure Functions, AWS Lambda, and Azure Container Apps can help organizations understand load patterns and get an overview of spending. Who wants to spend more than their current state? No one, right?

Choosing between Serverless, Managed Kubernetes Services, or Platform Services depends on the level of control your teams need over their environments.

Is Cloud Native necessary for your organization?

Now that we have a basic understanding of the Cloud Native landscape, the next question that comes to mind is: is it necessary for your organization? It is important to consider the following:

  1. Is there a requirement for hyper-scale applications?
  2. Are applications required to run with zero downtime?
  3. Is there a loss in business to competition if you face downtime?
  4. Is there a large volume of customer traffic to your applications and infrastructure?
  5. Do you need to frequently release application updates and changes to improve customer experience?

If you answered “Yes” to all of these questions, your organization can reap the benefits of going Cloud Native.

As consumers demand more out of their technologies, how will you respond? Download our guide, “A Business Leader’s Guide to Key Trends in Cloud,” to learn more.

Azure Application Insights – Cost Save with this Simple Optimization
https://blogs.perficient.com/2022/08/02/azure-application-insights-cost-save-with-this-simple-optimization/
Tue, 02 Aug 2022

Let’s discuss Azure Application Insights and the simple optimization we can do to avoid surprise bills.

Azure Application Insights can be used to monitor and check our applications’ health, and to analyze logs and other metrics related to our applications and the other resources included in our Azure subscription. Different teams may have different objectives when using Application Insights and its various features. Although we might not use every feature Application Insights offers, at a minimum we use it for logging.

If you ask me, “Logging is an art.”

How much logged information do application teams need to understand issues in Production? There is no single formula or concrete definition, but at a minimum a log should tell us the operation, the action, the details of the client consuming the operation, and other request and response information with sensitive data masked. With this information, application teams should be able to replicate issues in their test environments.

Logs often reveal what happened before an execution failed. There is no point in logging exceptions with huge stack traces without supporting information on what happened before the exception.

In general, it is good to use different logging strategies based on the type of log.

  • Are we logging to capture crucial states of transactions?
  • Are we logging to capture transient errors like connectivity issues and authentication issues?
  • Are we logging information to capture application workflow steps?
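One way to reflect these strategies in code (a sketch, not from the original post; `OrderProcessor` and the message shapes are hypothetical) is to map them onto .NET `ILogger` levels, so ingestion filters can later treat each category differently:

```csharp
// Sketch: the three logging strategies mapped to ILogger levels.
public class OrderProcessor
{
    private readonly ILogger<OrderProcessor> _logger;

    public OrderProcessor(ILogger<OrderProcessor> logger) => _logger = logger;

    public void Process(string orderId)
    {
        // Workflow step: high-volume, low-severity; a candidate for sampling or filtering.
        _logger.LogInformation("Order {OrderId}: validation started", orderId);

        try
        {
            // ... business logic ...

            // Crucial transaction state: keep even under aggressive filtering.
            _logger.LogInformation("Order {OrderId}: payment captured", orderId);
        }
        catch (HttpRequestException ex)
        {
            // Transient error (connectivity, authentication): warn and let retries handle it.
            _logger.LogWarning(ex, "Order {OrderId}: transient failure, will retry", orderId);
            throw;
        }
    }
}
```

Separating the levels this way lets a later ingestion rule drop or sample the workflow noise without losing the transaction milestones.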

Logging is often overlooked when we perform application migrations to the Cloud. The next time we perform an application migration, it is worth thinking about the logging strategy that is in place.

For example, consider the following scenario. Let us say we have a job that processes information when records appear in the database. Typically, the job polls the database every 5 minutes looking for data that satisfies a specific condition. When new records are present, the job pulls the new data and starts processing, emitting information logs that are stored in the filesystem as text files or in the database. When processing completes, the job updates the status of the records in the database so they are not picked up again in the next iteration. If all goes well, this approach works fine.

Now when we shift this application to the Cloud and start using Azure Application Insights, these logs are captured in Application Insights. Consider a scenario where the job fails to process the data; let us assume there was bad data. On every run, the job fails, is unable to mark the records complete, and tries to process the same bad data again, continuously emitting error logs each time.


We know that the job polls for new data every 5 minutes, so these errors eventually end up in our Application Insights. Let us say there are around 10 records with bad data: every 5 minutes, the job emits error logs and other information logs into Application Insights, captured through the App Insights SDK telemetry option.

As per Azure Monitor Pricing, “Log Analytics and Application Insights charge for data they ingest.”

Eventually, our Application Insights instance ingests a large volume of exception messages, and there is a lot of noise. If this goes unnoticed, the logs fill Application Insights, and we end up paying for the noise.
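To spot this kind of noise before the bill arrives, a Log Analytics query along these lines can rank the noisiest exception messages over the last day (a sketch; the `exceptions` table and `outerMessage` column follow the Application Insights schema in Log Analytics):

```kql
// Rank exception messages by ingestion volume over the last 24 hours.
exceptions
| where timestamp > ago(1d)
| summarize occurrences = count() by outerMessage
| top 10 by occurrences
```

A handful of messages dominating this list is a strong hint that one failing job is generating most of the ingested data.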

So how do we avoid situations like these?

To avoid situations like these, it is good to implement a Circuit Breaker pattern along with a retry policy. Let us say we set the retry threshold to 3. When the job tries to process records and fails 3 times in a row, it marks those records as failed so that subsequent runs skip them. This way, we avoid sending repetitive error data to Application Insights, and the small amounts saved add up to significant savings over time.
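A minimal sketch of that idea (the `PendingRecord` type, status strings, and threshold of 3 are illustrative assumptions, not from the post): persist a retry count per record, and once it crosses the threshold, mark the record as failed so later runs skip it and stop emitting the same errors.

```csharp
// Sketch: a retry threshold acting as a simple per-record circuit breaker.
public void ProcessPendingRecords(IEnumerable<PendingRecord> pendingRecords, ILogger logger)
{
    const int MaxRetries = 3;

    foreach (var record in pendingRecords)
    {
        if (record.RetryCount >= MaxRetries)
        {
            record.Status = "Failed";   // future polling runs skip it: no more repeated error logs
            continue;
        }

        try
        {
            Process(record);            // hypothetical processing step
            record.Status = "Completed";
        }
        catch (Exception ex)
        {
            record.RetryCount++;        // persisted so the count survives across runs
            logger.LogError(ex, "Record {Id} failed attempt {Attempt} of {Max}",
                record.Id, record.RetryCount, MaxRetries);
        }
    }
}
```

The key design point is that the retry count must live in the database with the record, not in memory, so the breaker state survives job restarts.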

We will not see the impact immediately by performing small optimizations like these, but overall, we will realize the benefits.

What are a few optimizations you have tried to save costs with your applications in Azure? Feel free to comment.

4 Common Application Modernization Myths
https://blogs.perficient.com/2022/06/27/4-common-application-modernization-myths/
Mon, 27 Jun 2022

In my last post on the Application Modernization Journey, we discussed the three questions organizations should ask before starting App Modernization Programs: the “Why, How, and What”.

App Modernization programs deliver their benefits when they are executed well, keeping the “Why” in focus during every phase of the program.

In this post, we are going to discuss four common application modernization myths:

  1. Lift and Shift is the best approach for App Modernization
  2. Shifting to cloud gives better performance
  3. After moving to the cloud, security is no longer a concern
  4. We will not face outages in Cloud

Myth #1: Lift and Shift is the best approach for App Modernization to Cloud

Many believe that Lift and Shift is the best approach for App Modernization efforts to the Cloud. Though Lift and Shift works well in various scenarios, we are often lifting and shifting technical debt along with the applications. We might not realize the expected benefits with a Lift and Shift approach alone, and may instead be left with more work further down the line.

For example, consider the case of moving from one home to another. What do we do? Do we move everything in our home as-is to the new one? No, right? We do a ‘cleanup’ to avoid carrying old items we no longer use. We shred old documents, donate the clothes and toys at the back of the closet, or get rid of furniture that isn’t suited to the new home. This may take more time up front, but the result is a smoother move with only the most important things.

We can use the same analogy for application modernization. Instead of moving everything as it is, we can take certain actions to ‘cleanup’ before the move. We can inventory the list of applications to prioritize the most important applications as candidates for modernization. Then we may perform a refactoring initiative to change the application architectures in line with Cloud architecture before moving them to cloud.

Read More: Things to Consider with Lift-and-Shift Migrations in Cloud

Myth #2: Shifting to cloud gives better performance

One of the most common myths is that applications will automatically perform better when shifted to the cloud. Although cloud platforms provide the flexibility to scale application resources dynamically, the initial application architecture may not be prepared to handle the load at that level of scaling. This often causes teams to scramble to monitor infrastructure and produce mitigation plans to manage application performance, either by scaling up additional resources or by improving the application architecture.

Though we can get reliable performance by increasing the size of resources, that is not always the right answer overall. We might see surprise bills with heavy costs from the additional resources provisioned to manage performance. This is where the Azure Well-Architected Framework’s Performance pillar comes into the picture.

Learn more: The Azure Well-Architected Framework: Performance Efficiency Pillar

Performance testing is no longer a “nice-to-have” for organizations. It is important to outline the capacity and sizing of the servers we plan to use in the Cloud. When deciding what we want to achieve from modernization, we can collect baseline performance metrics and agree on future-state metrics.

Organizations that prepare ahead are well positioned to embrace the cloud journey while keeping costs controlled. Scalability, Availability, and Reliability are directly proportional to cost. Having a clear target upfront helps plan and utilize resources accordingly and avoid unnecessary costs.

To do so, we need to prioritize an application’s needs for Scalability, Availability, and Reliability. For example, a retail application would prioritize dynamic scaling to meet increased customer demand and avoid losing business to competitors. It would also need to be both highly available and highly reliable, with near-real-time data spread across cloud regions.

In a healthcare setting, by contrast, an application needs to be available to provide a better experience to patients, and data must be handled in a reliable and secure fashion. In this case, Availability and Reliability are given more importance.

Unless an application is truly mission critical, we should adjust the levers of Scalability, Availability, and Reliability to keep costs under control.

Myth #3: After moving to the cloud, security is no longer a concern

Many organizations consider moving to the cloud the solution to improving security for their applications. While cloud providers maintain robust security for their platforms, in the end it is on consumers to implement the security policies that protect their own infrastructure and applications. Cloud providers supply the tools necessary to protect applications and services running on their platforms, but security is always a shared responsibility between providers and consumers.

In fact, organizations must invest more in security when using cloud providers. The entry points for vulnerabilities multiply in a distributed environment, and upfront planning is needed to implement a secure cloud platform. Investing in the right tools for observability and monitoring is crucial, and adopting Infrastructure as Code with the right set of policies, validating infrastructure before provisioning it, helps address security concerns. A Zero Trust mindset helps us implement robustly secure cloud infrastructure.

Read More: Tackle Security Concerns for Application Modernization

Myth #4: We will not face outages in Cloud

The final application modernization myth is that once we move our applications to the cloud, we no longer have to worry about outages. Cloud providers strive to prevent outages, but we still hear of outages at every cloud provider, be it Azure, Amazon Web Services, or Google Cloud.

It is tough to pin responsibility for an outage solely on the cloud provider. While cloud providers work to meet their SLAs and provide uninterrupted service, as consumers it is also our responsibility to plan for potential outages. Though we may not be able to anticipate every type of outage, we can prepare by including good Chaos Testing practices while building up our cloud infrastructure and applications.

Chaos Testing introduces simulated failures into our applications so we can observe their behavior and build them to overcome those failures reliably. This helps build self-healing systems. We might also end up changing application architectures to improve availability; our applications will not be highly available unless we design them to be.

Let us consider an example scenario. We have microservices that use an in-memory cache to store the most requested data and avoid a round trip to the database. Each microservice instance has its own copy of the cache. If an instance goes down, the cache must be rebuilt with fresh data when the instance comes back up. We can avoid this by changing the application architecture to use a distributed cache, so a microservice does not need to rebuild its cache after a failure. The distributed cache itself can be spread across multiple cloud regions.
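In .NET this change can be as small as swapping per-instance memory for the `IDistributedCache` abstraction, backed by a shared store such as Azure Cache for Redis. A sketch (the `CatalogService` class, key names, and the `LoadFromDatabaseAsync` helper are hypothetical):

```csharp
// Sketch: reading through a distributed cache instead of per-instance memory.
// If this instance restarts, the cached entry survives in the shared store.
public class CatalogService
{
    private readonly IDistributedCache _cache;

    public CatalogService(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetProductAsync(string productId)
    {
        var cached = await _cache.GetStringAsync($"product:{productId}");
        if (cached is not null)
            return cached;

        var product = await LoadFromDatabaseAsync(productId);  // hypothetical helper
        await _cache.SetStringAsync($"product:{productId}", product,
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
            });
        return product;
    }
}
```

Because the code depends only on the abstraction, the backing store (Redis, SQL Server, or in-memory for local development) is a configuration decision rather than a code change.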

Decisions like these often add cost to the bottom line, but that cost is acceptable compared to the support hours our personnel spend stabilizing applications during an outage.

Did you know? Perficient Earned Microsoft’s Modernization of Web Applications to Azure Advanced Specialization

Why Perficient?

No modernization journey is the same. We work with enterprises across industries to help establish a secure cloud foundation alongside our programmatic approach to assess, migrate, and modernize applications and data platforms at scale. As a Microsoft Gold Certified Partner, we combine our strong relationship with Microsoft and our years of experience on Azure to deliver you business solutions that help you achieve your goals.

Ready to advance your app innovation and modernization journey? Contact our team.

Are you Ready to Ride the App Modernization Journey?
https://blogs.perficient.com/2022/05/20/are-you-ready-to-ride-the-app-modernization-journey/
Fri, 20 May 2022

App Modernization is one of the most common buzzwords we hear these days, along with Digital Transformation. We have been hearing these words in the IT space for a long time; they cycle in and out of fashion every few years. But one organization’s Application (App) Modernization journey may look very different from another’s. In this post, we will answer three basic questions that every organization should consider.

  • Why should we consider App Modernization?
  • How do we go about with App Modernization?
  • What are the key benefits to be realized from App Modernization?

Why should we consider App Modernization?

Most organizations proceed with their modernization journey without understanding the “Why”. I recommend watching Simon Sinek’s talk “Start with Why”. Though the talk is centered on leadership, we can ask the same questions of any transformation or modernization program.

For a majority of organizations, IT transformation programs are carried out to support business initiatives: gaining market share, staying ahead of competitors, improving go-to-market turnaround time, reducing technical debt, staying relevant with current technologies, gaining access to talent, improving operating margins, or reducing capital expenditures. We could keep adding to this list.

But a common concern in many organizations is that IT transformations do not deliver the expected outcomes at the expected pace. A 2021 McKinsey study shows that organizations often start with an understanding of their “Why” during the initial stages of a transformation, but lose sight of it after day one and fail to carry it through the implementation stages.

What’s more, while much of a transformation’s value loss (55 percent) occurs during and after implementation, a sizeable portion happens as early as day one (Exhibit 2).

It is important to prioritize the most strategic goals that align with the core mission and values of the organization. Once we finalize the strategic goals, we need to measure them at every phase of our modernization journey to make sure we are not deviating from our goals.

How do we go about with App Modernization?

Once we have a clear understanding of the “Why” and our goals are prioritized, it is easier to execute against our targets. The details of a transformation or modernization program should not stay within the executive team: we need to communicate the key goals to every person who is part of the program and make sure everyone knows why the transformation is being undertaken.

Having everyone aligned with the program goals helps the transformation journey. As an organization, it is also important to choose a framework. For example, when performing a Cloud Transformation or Modernization with Microsoft Azure, we can choose the Cloud Adoption Framework: proven guidance and best practices that help us confidently adopt the cloud and achieve business outcomes.

Once we identify or choose a framework, it is important to consider the areas of People, Process, and Technology.

The Cloud Adoption Framework covers all the aspects of People, Process, and Technology that need to be considered as part of a transformation program. We can apply the same steps to any cloud transformation program, whether on Amazon Web Services (AWS) or Google Cloud Platform (GCP), as these steps are technology agnostic.

READ MORE: Perficient Earns Modernization of Web Applications to Azure Advanced Specialization

What are the key benefits to be realized from App Modernization?

What is the point of spending budget on a transformation program if we do not know the benefits at the end of it? It is important to list the benefits in terms of the “Why” by mapping them against the strategic goals chosen at the start of the modernization. As we execute the program, we need to ensure we have the right levers and controls in place to measure the benefits after the program.

Measuring benefits requires managing them through a benefits management system, with a comprehensive approach to benefits management. For example, if one of our strategic goals is to reduce operating expenses, we should be able to measure the cost reduction realized by implementing the program. Without a system in place to measure outcomes, the program cannot demonstrate that it realized its expected benefits.

With a clear understanding of the “Why”, “How”, and “What”, organizations should be prepared to execute successful modernization programs.

Perficient’s Application Modernization Expertise

The world’s leading brands partner with Perficient because we have the resources to scale major custom application development projects. We partner with leading technology companies to help Fortune 1000 clients across all industries, and we’ve been recognized by analysts as a top service provider for application modernization and migration.

Contact us today to get started on your app modernization journey.


References –

Start with why — how great leaders inspire action | Simon Sinek | TEDxPugetSound

https://www.mckinsey.com/business-functions/people-and-organizational-performance/our-insights/successful-transformations

https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/

https://www.forbes.com/sites/forbesagencycouncil/2022/03/21/the-importance-of-aligning-people-processes-and-technology-amid-transformation-initiatives/

https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/how-do-you-measure-success-in-digital-five-metrics-for-ceos

https://www.pmi.org/learning/library/benefits-management-accelerate-value-delivery-5959

 

Enabling OpenAPI Specifications for Azure Function
https://blogs.perficient.com/2022/05/09/enabling-openapi-specifications-for-azure-function/
Mon, 09 May 2022

What is an OpenAPI Document?

According to Swagger, an OpenAPI Document is a document (or set of documents) that defines or describes an API. An OpenAPI definition uses and conforms to the OpenAPI Specification.

“The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic.

An OpenAPI definition can then be used by documentation generation tools to display the API, code generation tools to generate servers and clients in various programming languages, testing tools, and many other use cases.”

In November 2021, Azure Functions added support for OpenAPI documents and definitions on functions triggered by HTTP. This was made possible by the NuGet package Microsoft.Azure.WebJobs.Extensions.OpenApi. The package was initially created as a community-driven project and was later supported and maintained as an official project by the Microsoft team.

Why do we need an OpenAPI Document?

When an API exposes an OpenAPI document, it can be used to learn about the API’s operations, as well as the request and response parameters the API supports. In short, it serves as documentation for the API, allowing third-party integrators and other developers to consume the API methods with ease. This reduces the back-and-forth conversations about how to consume the APIs.

With an OpenAPI Document, the testing team and third-party developers consuming our Function get an overview of the methods the function supports, the request schema, the response schema, the error message format, and sample request and response messages. External teams and the testing team do not have to depend on the development team to start their portion of the work. The OpenAPI Document also serves as self-explanatory documentation of the API’s operations, so the development team does not need to spend additional time writing documents explaining the API’s methods.

How do we add an OpenAPI Document?

Adding an OpenAPI Document specification to an Azure Function is straightforward. We will start by creating an Azure Function that uses HTTP as its trigger, with the Authorization Level set to Function.

In Visual Studio 2022, we can now choose the template Http Trigger with OpenAPI.


Visual Studio 2022 Azure Function Project Template

At the time of this writing, it is no longer necessary to manually add the Microsoft.Azure.WebJobs.Extensions.OpenApi NuGet package to the Functions project; the template installs it by default. Now that we have created our first project using the default template, let us understand some key concepts.

[FunctionName("Function1")]
[OpenApiOperation(operationId: "Run", tags: new[] { "name" })]
[OpenApiSecurity("function_key", SecuritySchemeType.ApiKey, Name = "code", In = OpenApiSecurityLocationType.Query)]
[OpenApiParameter(name: "name", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **Name** parameter")]
[OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "text/plain", bodyType: typeof(string), Description = "The OK response")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req)
{
    _logger.LogInformation("C# HTTP trigger function processed a request.");

    string name = req.Query["name"];

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    string responseMessage = string.IsNullOrEmpty(name)
        ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
        : $"Hello, {name}. This HTTP triggered function executed successfully.";

    return new OkObjectResult(responseMessage);
}

Looking at the code generated by the template, we find new attributes starting with OpenApi above our Function. These attributes control what gets generated as part of the OpenAPI Document specification. For more details on these attributes, refer to the References section.

Below are the key attributes that we need to look at.

  • OpenApiOperation – This maps to “Operation Object” from the OpenAPI Specification.
  • OpenApiResponseWithBody – This maps to “Responses Object” from the OpenAPI Specification.
  • OpenApiParameter – This corresponds to “Parameter Object” from the OpenAPI Specification.
  • OpenApiSecurity – This corresponds to “Security Scheme Object” from the OpenAPI Specification.
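To make the mapping concrete, the attributes on the default template produce an OpenAPI document roughly along these lines. This is an illustrative sketch, not verbatim extension output; the exact title, server entries, and paths depend on your host settings and route prefix.

```json
{
  "openapi": "3.0.1",
  "info": { "title": "OpenAPI Document on Azure Functions", "version": "1.0.0" },
  "paths": {
    "/Function1": {
      "get": {
        "operationId": "Run",
        "tags": [ "name" ],
        "parameters": [
          { "name": "name", "in": "query", "required": true, "schema": { "type": "string" } }
        ],
        "responses": {
          "200": {
            "description": "The OK response",
            "content": { "text/plain": { "schema": { "type": "string" } } }
          }
        }
      }
    }
  },
  "components": {
    "securitySchemes": {
      "function_key": { "type": "apiKey", "name": "code", "in": "query" }
    }
  }
}
```

Each attribute above the Function maps directly to one of these sections: OpenApiOperation to the operation, OpenApiParameter to the parameters array, OpenApiResponseWithBody to the responses, and OpenApiSecurity to the security scheme.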

In practice, we rarely create functions that take parameters only through the query string of the Function URL.

More often, we need to send a request object in JSON format to the Function and expect a response in JSON format. The default template works well for a Function invoked with a query-string parameter, but not for this scenario.

Now let us see how we can update the Function to suit our requirement to accept a request and return a response in JSON Format. Let us go ahead and build our fictional function.

As part of our Function we will go ahead and create two entities – BookRequest and BookResponse.

[OpenApiExample(typeof(BookRequestExample))]
public class BookRequest
{
    /// <summary>The name of the book</summary>
    [OpenApiProperty]
    public string Name { get; set; }

    /// <summary>The description of the book</summary>
    [OpenApiProperty]
    public string Description { get; set; }
}

public class BookRequestExample : OpenApiExample<BookRequest>
{
    public override IOpenApiExample<BookRequest> Build(NamingStrategy namingStrategy = null)
    {
        this.Examples.Add(
            OpenApiExampleResolver.Resolve(
                "BookRequestExample",
                new BookRequest()
                {
                    Name = "Sample Book",
                    Description = "This is a great book on learning Azure Functions"
                },
                namingStrategy
            ));

        return this;
    }
}
[OpenApiExample(typeof(BookResponseExample))]
public class BookResponse
{
    /// <summary>The name of the book</summary>
    [OpenApiProperty]
    public string Name { get; set; }

    /// <summary>The Id of the book in Guid format</summary>
    [OpenApiProperty]
    public Guid BookId { get; set; }

    /// <summary>The description of the book</summary>
    [OpenApiProperty]
    public string Description { get; set; }
}

public class BookResponseExample : OpenApiExample<BookResponse>
{
    public override IOpenApiExample<BookResponse> Build(NamingStrategy namingStrategy = null)
    {
        this.Examples.Add(
            OpenApiExampleResolver.Resolve(
                "BookResponseExample",
                new BookResponse()
                {
                    Name = "Sample Book",
                    Description = "This is a great book on learning Azure Functions",
                    BookId = new Guid()
                },
                namingStrategy
            ));

        return this;
    }
}

In the above code, we use two attributes: OpenApiProperty and OpenApiExample.

  • OpenApiProperty – This corresponds to the “Parameter Object” from the OpenAPI Specification. Each field in our class carries the OpenApiProperty attribute.
  • OpenApiExample – We use the OpenApiExample attribute to map the sample example with the BookRequest and BookResponse.

The example class must override the Build method. Using OpenApiExampleResolver, we associate the example class with a valid example instance.

Our modified Function will look like the code snippet below.

[FunctionName("Function1")]
[OpenApiOperation(operationId: "Run", tags: new[] { "run" })]
[OpenApiSecurity("function_key", SecuritySchemeType.ApiKey, Name = "code", In = OpenApiSecurityLocationType.Query)]        
[OpenApiRequestBody(contentType: "application/json; charset=utf-8", bodyType: typeof(BookRequest), Description = "Sample Book Request", Required = true)]
[OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "application/json; charset=utf-8", bodyType: typeof(BookResponse), Description = "The OK response")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequest req)
{
    _logger.LogInformation("C# HTTP trigger function processed a request.");
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var receivedRequest = JsonConvert.DeserializeObject<BookRequest>(requestBody);

    var receivedBook = new BookResponse();
    receivedBook.Name = receivedRequest.Name;
    receivedBook.Description = receivedRequest.Description;
    receivedBook.BookId = System.Guid.NewGuid();
    var result = JsonConvert.SerializeObject(receivedBook, Formatting.None);
    return new OkObjectResult(result);
}

We introduced a new attribute, OpenApiRequestBody, which corresponds to the “Request Body Object” from the OpenAPI Specification.

Let us build the project and run the Function locally. On a successful build, the Azure Functions console window shows the URL that can be used to invoke the Function.

Azure Functions Console Window

We should be able to access the Swagger UI and the OpenAPI document using the URLs below.

RenderOpenApiDocument – http://localhost:7071/api/openapi/1.0 

RenderSwaggerUI – http://localhost:7071/api/swagger/ui

OpenAPI Document in Swagger UI

This way, we can expose the operations of our Function through the OpenAPI document. We can see the “run” operation with its request and response object fields along with the sample examples. We can expand the “run” operation and perform a test using the Swagger UI.
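The same test can be driven from the command line. The sketch below assumes the Functions host is running locally on the default port 7071 and that the route defaults to the function name, "Function1"; adjust the URL to match your project.

```shell
# JSON payload matching the BookRequest shape from the sample.
REQUEST='{"Name":"Sample Book","Description":"This is a great book on learning Azure Functions"}'

# Sanity-check that the payload is valid JSON before sending it.
echo "$REQUEST" | python3 -m json.tool > /dev/null && echo "payload ok"

# Fetch the generated OpenAPI document, then invoke the operation.
# "|| true" keeps the script from aborting when the local host is not running.
curl -s http://localhost:7071/api/openapi/1.0 || true
curl -s -X POST "http://localhost:7071/api/Function1" \
     -H "Content-Type: application/json" \
     -d "$REQUEST" || true
```

If the host is running, the POST should return a BookResponse JSON object echoing the name and description with a newly generated BookId.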

OpenAPI Document operation “run”

The OpenAPI Document works perfectly fine when the Request and Response Objects are simple.

While writing this blog, I noticed that OpenAPI document generation currently has some issues when request objects use nested classes or arrays of BookRequest. I expect this will be fixed in a future release from the Functions team.

The sample code for this Function can be found in the repository below.

https://github.com/baskarmib/AzureFunctionOpenAPISample

References

https://swagger.io/specification/

 

https://techcommunity.microsoft.com/t5/apps-on-azure-blog/general-availability-of-azure-functions-openapi-extension/ba-p/2931231

 

https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/attributes/

Why Perficient?

We’ve helped clients across industries develop strategic solutions and accelerate innovative cloud projects. As a certified Azure Direct Cloud Solution Provider (CSP), we help you drive innovation by providing support to keep your Microsoft Azure operations running.

Whether your IT team lacks certain skills or simply needs support with serving a large business, we’re here to help. Our expertise and comprehensive global support will help you make the most of Azure’s rich features.

Contact our team to learn more.
