Azure Articles / Blogs / Perficient | https://blogs.perficient.com/tag/azure/

Harnessing the Power of AWS Bedrock through CloudFormation
https://blogs.perficient.com/2024/08/20/harnessing-the-power-of-aws-bedrock-through-cloudformation/ (Tue, 20 Aug 2024)

The rapid advancement of artificial intelligence (AI) has led to the development of foundational models that form the bedrock of numerous AI applications. AWS Bedrock is Amazon Web Services’ comprehensive solution that leverages these models to provide robust AI and machine learning (ML) capabilities. This blog delves into the essentials of AI foundational models in AWS Bedrock, highlighting their significance and applications.

What are AI Foundational Models?

AI foundational models are pre-trained models designed to serve as the basis for various AI applications. These models are trained on extensive datasets and can be fine-tuned for specific tasks, such as natural language processing (NLP), image recognition, and more. The primary advantage of using these models is that they significantly reduce the time and computational resources required to develop AI applications from scratch.

AWS Bedrock: A Comprehensive AI Solution

AWS Bedrock provides a suite of foundational models that are easily accessible and deployable. These models are integrated into the AWS ecosystem, allowing users to leverage the power of AWS’s infrastructure and services. AWS Bedrock offers several key benefits:

  1. Scalability: AWS Bedrock models can scale to meet the demands of large and complex applications. The AWS infrastructure ensures that models can handle high volumes of data and traffic without compromising performance.
  2. Ease of Use: With AWS Bedrock, users can access pre-trained models via simple API calls. This ease of use allows developers to integrate AI capabilities into their applications quickly and efficiently.
  3. Cost-Effectiveness: Utilizing pre-trained models reduces the need for extensive computational resources and time-consuming training processes, leading to cost savings.

Key Components of AWS Bedrock

AWS Bedrock comprises several key components designed to facilitate the development and deployment of AI applications:

  1. Pre-trained Models: These models are the cornerstone of AWS Bedrock. They are trained on vast datasets and optimized for performance. Users can select models tailored to specific tasks, such as text analysis, image classification, and more.
  2. Model Customization: AWS Bedrock allows users to fine-tune pre-trained models to meet their specific needs. This customization ensures that the models can achieve high accuracy for specialized applications.
  3. Integration with AWS Services: Bedrock models seamlessly integrate with other AWS services, such as AWS Lambda, Amazon S3, and Amazon SageMaker. This integration simplifies the deployment and management of AI applications.

 

Amazon Bedrock supports a wide range of foundation models from industry-leading providers, and you can choose the model best suited to your goals.

Here are just a few of the popular ones:

[Image: a selection of popular Bedrock foundation models]

Note: Account users with the correct IAM Permissions must manually enable access to available Bedrock foundation models (FMs) to use Bedrock. Once Model Access is granted for that particular region, we can use it to build and scale our application.

Using AWS Bedrock services requires specific IAM permissions to ensure that users and applications can interact with the service securely and effectively. The permission categories typically needed include basic Bedrock access, model training and deployment, inference and usage, data management, compute resource management, security and identity management, and monitoring and logging.
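
As a quick sanity check, the models visible in your region can also be listed programmatically. The following is a minimal sketch using boto3 (the region name is a placeholder); note that a model appearing in this list does not by itself mean access has been granted in the console.

    import boto3

    # Region is a placeholder; Bedrock model availability varies by region
    bedrock = boto3.client("bedrock", region_name="us-east-1")

    # Lists the foundation models offered in this region
    response = bedrock.list_foundation_models()
    for model in response["modelSummaries"]:
        print(model["modelId"], model.get("modelName", ""))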

The cost parameters for AWS Bedrock include compute, storage, data transfer, and model usage, with model usage priced according to the number of input and output tokens processed per month. Understanding these parameters helps estimate the costs associated with deploying and running AI models on AWS Bedrock. For precise calculations, AWS provides the AWS Pricing Calculator and detailed pricing information on its official website.

Let’s implement one of these foundation models (for example, Titan) using the AWS CloudFormation service.

Amazon Titan in Amazon Bedrock

The Amazon Titan series of models, exclusive to Amazon Bedrock, benefits from Amazon’s 25 years of innovation in AI and machine learning. Through a fully managed API, these Titan foundation models (FMs) offer a range of high-performance options for text, image, and multimodal use cases. Created by AWS and pre-trained on extensive datasets, Titan models are powerful and versatile for a wide range of applications while promoting responsible AI usage. They can be used as they are or customized privately with your own data.

Titan models have three categories: embeddings, text generation, and image generation. Here, we will focus on the Amazon Titan Text generation models, which include Amazon Titan Text G1 – Premier, Amazon Titan Text G1 – Express, and Amazon Titan Text G1 – Lite. We will implement “Titan Text G1 – Premier” from the list above.

Amazon Titan Text G1 – Premier

Amazon Titan Text G1 – Premier is a large language model (LLM) for text generation. It integrates with Amazon Bedrock Knowledge Bases and Amazon Bedrock Agents, is well suited to tasks such as summarization, code generation, and answering open-ended and context-based questions, and also supports custom fine-tuning (in preview).

ID – amazon.titan-text-premier-v1:0

Max tokens – 32,000

Language – English only

Use cases – 32k context window, Context-Based Question Answering, open-ended text generation, Knowledge Base support, Agent’s support, chain of thought, rewrite, brainstorming, summarizations, code generation, table creation, data formatting, paraphrasing, extraction, QnA, chat, Model Customization (preview).

Inference parameters – Temperature, Top P (Default: Temperature = 0.7, Top P = 0.9)

To implement this with CloudFormation, we first need to create a stack. A CloudFormation stack template defines your AWS infrastructure and resources in either JSON or YAML format.
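
As a side note, a stack can also be created programmatically once the template file exists. Below is a minimal sketch using boto3; the template file name, stack name, and region are illustrative placeholders rather than values taken from this article.

    import boto3

    # Hypothetical file holding the YAML template described in this post
    TEMPLATE_FILE = "bedrock_titan_lambda.yaml"
    STACK_NAME = "bedrock-titan-demo"

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    with open(TEMPLATE_FILE) as f:
        template_body = f.read()

    # An IAM capability is required because the template creates an IAM role
    response = cfn.create_stack(
        StackName=STACK_NAME,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    print("Stack creation started:", response["StackId"])

    # Optionally block until the stack finishes creating
    cfn.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)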

Let’s implement a Python AWS Lambda function that uses AWS Bedrock’s Titan model to generate text from an input prompt, deployed through a YAML-based CloudFormation template.

[Image: CloudFormation YAML template defining the IAM role and Lambda function]

The CloudFormation template above defines the resources for an IAM role and a Lambda function that invokes a model from AWS Bedrock, i.e., it provides text generation capabilities through the specified Titan model.

IAM Role

  • Allows Lambda to assume the role and invoke the Bedrock model.
  • Grants permission to invoke the specific Bedrock model (amazon.titan-embed-text-v1) and list available models.

Lambda Function

  • Python function that uses Boto3 to invoke the Bedrock model amazon.titan-text-premier-v1:0.
  • Sends a JSON payload to the model with a specified configuration for text generation. Returns the response from the model as the HTTP response.
  • If we open the Lambda function’s dashboard, the “index.py” file contains:

[Image: contents of the Lambda function’s index.py]
This AWS Lambda function interacts with the AWS Bedrock service to generate text based on an input prompt. It creates a client for the Bedrock runtime, invokes a specific text generation model with given configurations, processes the response to extract the generated text, and returns this text in an HTTP response. This setup allows for the automation of text generation tasks using AWS Bedrock’s capabilities.
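
The screenshot above contains the article’s exact index.py. As a rough reconstruction only, a handler along these lines would invoke the Titan model through the Bedrock runtime; the generation-config values here are assumptions, not values taken from the screenshot.

    import json
    import boto3

    # Bedrock runtime client used to invoke foundation models
    bedrock_runtime = boto3.client("bedrock-runtime")

    def lambda_handler(event, context):
        # The prompt may come from the test event; the default is a placeholder
        prompt = event.get("prompt", "Hello, how are you?")

        payload = {
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 512,   # assumed value
                "temperature": 0.7,     # default per the model spec above
                "topP": 0.9
            }
        }

        response = bedrock_runtime.invoke_model(
            modelId="amazon.titan-text-premier-v1:0",
            contentType="application/json",
            accept="application/json",
            body=json.dumps(payload),
        )

        # The response body is a stream; read and parse it
        result = json.loads(response["body"].read())
        output_text = result["results"][0]["outputText"]

        return {
            "statusCode": 200,
            "body": json.dumps({"generated_text": output_text}),
        }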

Execution Results

[Image: Lambda test execution results]

As seen in the Response window, for the input “Hello, how are you?” the model returned the output text “Hello! I’m doing well, thank you. How can I assist you today?”.

In this way, AWS Bedrock’s Amazon Titan Text G1 – Premier model can handle a wide range of natural language processing (NLP) tasks thanks to its advanced capabilities and large context window.

5 Major Benefits of Azure Integration Services Over MuleSoft
https://blogs.perficient.com/2024/08/12/5-major-benefits-of-azure-integration-services-over-mulesoft/ (Mon, 12 Aug 2024)

In the realm of enterprise integration, choosing the right platform is crucial for ensuring seamless connectivity between diverse applications and systems. Azure Integration Services (AIS) and MuleSoft are two prominent players in this field. Azure Integration Services is a cloud-based integration platform provided by Microsoft, while MuleSoft is an integration platform that allows developers to connect applications, data, and devices. While both offer robust capabilities, Azure Integration Services provides distinct advantages that can be pivotal for businesses looking to optimize their integration strategies. Here are five major benefits of AIS over MuleSoft.

1. Seamless Integration with Microsoft Ecosystem

One of the standout benefits of Azure Integration Services is its seamless integration with the Microsoft ecosystem. AIS is designed to work natively with other Microsoft products and services such as Azure, Office 365, Dynamics 365, and Power Platform. This native compatibility ensures a smoother and more efficient integration process, reducing the need for custom connectors and simplifying the overall integration architecture, and it reassures organizations that their existing Microsoft investments will integrate seamlessly with AIS.

Integration capabilities don’t stop there – AIS can integrate with all kinds of other systems, including SaaS platforms, existing on-premises APIs, commerce platforms, banking platforms, and more. Additionally, AIS supports more than .NET – you can also integrate with Java, Node, Python, and many other technologies.

2. Comprehensive and Unified Offering

Azure Integration Services offers a comprehensive and unified suite of integration tools, including Azure Logic Apps, Azure Service Bus, Azure API Management, and Azure Event Grid. This unified approach allows businesses to address a wide range of integration needs within a single platform, streamlining management and reducing the complexity associated with using multiple tools. The versatility and adaptability of AIS’s tool suite give organizations confidence that the platform can meet their diverse integration needs.

3. Scalability and Performance

Azure Integration Services leverages the global infrastructure of Microsoft Azure, ensuring high scalability and performance for enterprise-grade integrations. AIS can handle large volumes of data and transactions with ease, providing reliable and fast performance across various integration scenarios. MuleSoft, although scalable, may require more effort to achieve the same level of performance, particularly when dealing with complex and high-volume integrations.

4. Cost-Effectiveness

Cost is a critical factor for many organizations when choosing an integration platform. Azure Integration Services offers a more cost-effective solution compared to MuleSoft, primarily due to its consumption-based pricing model. Businesses pay only for the resources they use, allowing for better cost control and budgeting. Additionally, AIS often incurs significantly lower licensing and maintenance costs, making it an attractive option for organizations looking to optimize their IT expenditure.

5. Enhanced Security and Compliance

Security and compliance are top priorities for any integration platform. Azure Integration Services benefits from Azure’s robust security features and compliance certifications. With AIS, businesses can leverage advanced security measures such as encryption, identity and access management, and threat protection, ensuring that their integrations are secure and compliant with industry standards. While MuleSoft also offers strong security features, AIS’s integration with Azure’s comprehensive security framework provides an added layer of protection and peace of mind.

Conclusion

Azure Integration Services stands out as a powerful and cost-effective integration platform, offering seamless integration with the Microsoft ecosystem, a comprehensive suite of tools, high scalability and performance, cost efficiency, and enhanced security and compliance. For businesses looking to streamline their integration processes and leverage the full potential of their existing Microsoft investments, AIS presents a compelling choice over MuleSoft.

 

Contact us to learn more about how we can help you maximize your investment in Azure!

Seamless GitHub Integration with Azure Storage for Enhanced Cloud File Management
https://blogs.perficient.com/2024/08/05/seamless-github-integration-azure-storage-enhanced-cloud-file-management/ (Mon, 05 Aug 2024)

In the modern digital landscape, efficient collaboration and streamlined workflows are proven elements of successful project management. Integrating GitHub repositories with Azure Storage proves to be a robust solution for the management of project files in the cloud. Whether you’re a developer, a project manager, or a technology enthusiast, understanding how to push files from a GitHub repository to an Azure Storage container can significantly enhance your productivity and simplify your development process. In this comprehensive guide, we’ll explore the steps required to achieve this seamless integration.
You might wonder why we would push files from a GitHub repository to an Azure Storage container when the files already exist in the repository. While GitHub repositories are excellent for version control and collaboration, they are not optimized for certain types of file storage and access patterns. Azure Storage, by comparison, provides a scalable, high-performance solution specifically designed for storing various types of data, including large files, binaries, and media assets.

By transferring files from a GitHub repository to an Azure Storage container, you can leverage Azure’s robust infrastructure to enhance scalability and optimize performance, especially in the scenarios below:

  • Large File Storage
  • High Availability and Redundancy
  • Access Control and Security
  • Performance Optimization

Understanding the Solution

Before we dive into the practical steps, let’s gain a clear understanding of the solution we’re implementing:

  1. GitHub Repository: This is where your project’s source code resides. By leveraging version control systems like Git and hosting platforms like GitHub, you can collaborate with team members, track changes, and maintain a centralized repository of your project files.
  2. Azure Storage: Azure Storage provides a scalable, secure, and highly available cloud storage solution. By creating a storage account and defining containers within it, you can store a variety of data types, including documents, images, videos, and more.
  3. Integration: We’ll establish a workflow to automatically push files from your GitHub repository to an Azure Storage container whenever changes are made. This integration automates deployment, ensuring synchronization between your Azure Storage container and GitHub repository. This not only unlocks new possibilities for efficient cloud-based file management but also streamlines the development process.

Prerequisites

  1. Basic Knowledge of Git and GitHub: Understanding the fundamentals of version control systems like Git and how to use GitHub for hosting repositories is essential. Users should be familiar with concepts such as commits, branches, and pull requests.

  2. Azure Account: Readers should have access to an Azure account to create a storage account and containers. If they don’t have an account, they’ll need to sign up for one.

  3. Azure Portal Access: Familiarity with navigating the Azure portal is helpful for creating and managing Azure resources, including storage accounts.

  4. GitHub Repository, Access to GitHub Repository Settings, and GitHub Actions Knowledge: Readers should have a GitHub account with a repository set up for deploying files to Azure Storage. Understanding how to access and modify repository settings, including adding secrets, is crucial for configuring the integration. Additionally, familiarity with GitHub Actions and creating workflows is essential for setting up the deployment pipeline efficiently.

  5. Azure CLI (Command-Line Interface) Installation: Readers should have the Azure CLI installed on their local machine or have access to a terminal where they can run Azure CLI commands. Installation instructions are linked in Step 4 below.

  6. Understanding of Deployment Pipelines: A general understanding of deployment pipelines and continuous integration/continuous deployment (CI/CD) concepts will help readers grasp the purpose and functionality of the integration.

  7. Environment Setup: Depending on the reader’s development environment (Windows, macOS, Linux), they may need to make adjustments to the provided instructions. For example, installing and configuring Azure CLI might differ slightly across different operating systems.

Let’s Start from Scratch and See Step-By-Step Process to Integrate GitHub Repositories with Azure Storage

Step 1: Set Up Azure Storage Account

  1. Sign in to Azure Portal: If you don’t have an Azure account, you’ll need to create one. Once you’re signed in, navigate to the Azure portal – “portal.azure.com/#home”
  2. Create a Storage Account: In the Azure portal, click on “Create a resource” and search for “Storage account”. Click on “Storage account – blob, file, table, queue” from the search results. Then, click “Create”.
    [Image: creating a storage account in the Azure portal]

  3. Configure Storage Account Settings: Provide the required details such as subscription, resource group, storage account name, location, and performance tier. For this guide, choose the appropriate options based on your preferences and requirements.
    [Image: storage account configuration settings]

  4. Retrieve Access Keys: Once the storage account is created, navigate to it in the Azure portal. Go to “Settings” > “Access keys” to retrieve the access keys. You’ll need these keys to authenticate when accessing your storage account programmatically.
    Note: Click the “Show” button to copy the access key.

    [Image: access keys page in the Azure portal]

Step 2: Set Up GitHub Repository

  1. Create a GitHub Account: If you don’t have a GitHub account, sign up for one at “github.com”

  2. Create a New Repository: Once logged in, click on the “+” icon in the top-right corner and select “New repository”. Give your repository a name, description, and choose whether it should be public or private. Click “Create repository”.
    [Image: creating a new GitHub repository]

  3. Clone the Repository: After creating the repository, clone it to your local machine using Git. You can do this by running the following command in your terminal or command prompt:
    Command:

    git clone https://github.com/your-username/your-repository.git

Note: Replace ‘your-username’ with your GitHub username and ‘your-repository’ with the name of your repository.

[Image: output of the git clone command]

Step 3: Push Files to GitHub Repository

  1. Add Files to Your Local Repository: Place the files you want to push to Azure Storage in your machine’s local repository directory.
    [Image: files placed in the local repository directory]

  2. Stage and Commit Changes: In your terminal or command prompt, navigate to the local repository directory and stage the changes by running:
        Command:

    git add .

     Then, commit the changes with a meaningful commit message:
       Command:

    git commit -m "Add files to be pushed to Azure Storage"
  3. Push Changes to GitHub: Finally, push the committed changes to your GitHub repository by running:
         Command: 

    git push origin main

      Note: Replace `main` with the name of your branch if it’s different.

Verify Files in GitHub: Check your GitHub account to confirm that the files have been uploaded.

[Image: uploaded files visible in the GitHub repository]

Step 4: Push Files from GitHub to Azure Storage

  1. Install Azure CLI: If you haven’t already, install the Azure CLI on your local machine.
      Note: Installation instructions are available at https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
  2. Authenticate with Azure CLI: Open your terminal or command prompt and login to your Azure account using the Azure CLI:
     Command:  

    az login

    Follow the prompts to complete the login process.

    [Image: az login command output]

  3. Upload Files to Azure Storage: Use the Azure CLI to upload the files from your GitHub repository to your Azure Storage container:
       Command:

    az storage blob upload-batch --source <local-path> --destination <container-name> --account-name <storage-account-name> --account-key <storage-account-key>

Note: Replace `<storage-account-name>` and `<storage-account-key>` with the name and access key of your Azure Storage account, respectively. Replace `<container-name>` and `<local-path>` with your container name and the local path to your repository directory, respectively.

[Image: az storage blob upload-batch command output]
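
If you would rather perform this upload from code than from the CLI, the following is a minimal sketch using the azure-storage-blob Python SDK; the connection string, container name, and local path are placeholders for your own values.

    import os
    from azure.storage.blob import BlobServiceClient

    # Placeholders: substitute your storage account connection string,
    # container name, and local repository path
    CONNECTION_STRING = "<storage-account-connection-string>"
    CONTAINER_NAME = "<container-name>"
    LOCAL_PATH = "<local-path>"

    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    container = service.get_container_client(CONTAINER_NAME)

    # Walk the local repository directory and upload every file,
    # preserving the relative folder structure as the blob name
    for root, _, files in os.walk(LOCAL_PATH):
        for name in files:
            file_path = os.path.join(root, name)
            blob_name = os.path.relpath(file_path, LOCAL_PATH).replace("\\", "/")
            with open(file_path, "rb") as data:
                container.upload_blob(name=blob_name, data=data, overwrite=True)
            print(f"Uploaded {blob_name}")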

Step 5: Verify Deployment

Once the workflow is complete, navigate to your Azure Storage container. You should see the files from your GitHub repository synchronized to the container. Verify the integrity of the files and ensure that the deployment meets your expectations.
[Image: uploaded files in the Azure Storage container]

Conclusion

By following these steps, you’ve successfully set up a seamless integration between your GitHub repository and Azure Storage container. This integration automates pushing files from your repository to the cloud, enabling efficient collaboration and simplified project management. Embrace the power of automation, leverage the capabilities of GitHub Actions and Azure Storage, and unlock new possibilities for your development workflow. Happy coding!

Unlocking the Power of Azure Integration Services for the Financial Services Industry
https://blogs.perficient.com/2024/08/04/microsoft-azure-integration-services-financial-services-industry/ (Sun, 04 Aug 2024)

In today’s rapidly evolving digital landscape, financial services organizations are increasingly relying on cutting-edge technologies to stay competitive and deliver exceptional services to their clients. Microsoft’s Azure Integration Services, a suite of tools designed to seamlessly connect applications, data, and processes, is emerging as a game-changer for the financial services industry.

This blog post delves into the myriad benefits of Azure Integration Services and highlights high-impact examples that demonstrate its transformative potential for financial services organizations.

The Benefits of Azure Integration Services

Enhanced Connectivity and Interoperability

Azure Integration Services offer a robust framework for connecting disparate systems, enabling financial organizations to integrate on-premises, cloud-based, and third-party applications seamlessly. This connectivity enhances interoperability, allowing for streamlined operations and improved data flow across various platforms.  Additionally, Azure offers best-in-class capabilities to support hybrid scenarios with stringent requirements for private networking & threat detection – all of which are critical in today’s cloud world.

Scalability and Flexibility

Financial organizations often face fluctuating demands and need a flexible infrastructure that can scale accordingly. Azure Integration Services provide the scalability required to handle varying workloads, ensuring businesses adapt quickly to changing market conditions without compromising performance.

Improved Security and Compliance

With stringent regulatory requirements in the financial sector, security and compliance are paramount. Azure Integration Services leverage Azure’s robust security features (including multi-factor authentication, encryption, role-based access control, and private networking) to ensure that data is protected and compliance standards are met.

Cost Efficiency

Financial organizations can reduce IT overhead costs by integrating existing systems and leveraging cloud-based services. Azure Integration Services minimize the need for extensive physical hardware and maintenance, resulting in significant cost savings.

Streamlined Business Processes

Automation is a key benefit of Azure Integration Services. By automating repetitive tasks and processes, financial organizations can increase efficiency, reduce errors, and allow employees to focus on more strategic activities that add value to the business.

 

High-Impact Examples in the Financial Services Industry

Real-Time Fraud Detection and Prevention

Fraud detection is critical in the financial industry. Azure Integration Services can connect various data sources and use machine learning models to analyze transactions in real-time. For example, a bank can integrate its transaction processing system with Azure Machine Learning to instantly identify and flag suspicious activities, reducing fraud risk.

Customer Relationship Management (CRM) Enhancement

Financial organizations can enhance their CRM systems by integrating them with Azure Logic Apps, Azure Functions, and Azure Service Bus. With Azure, organizations can integrate just as seamlessly with Microsoft technology (such as Dynamics) as they can with non-Microsoft technology (such as Salesforce). This integration allows for real-time updates and data synchronization across customer touchpoints, providing a unified view of customer interactions. As a result, financial advisors can offer more personalized services and improve customer satisfaction.

Regulatory Reporting and Compliance Automation

Compliance reporting is often a resource-intensive process. Azure Integration Services can automate data collection and reporting from multiple sources, ensuring accuracy and timeliness. For instance, an investment firm can integrate its trading platforms with Azure Logic Apps to automate the generation and submission of compliance reports to regulatory bodies.  In addition, Azure provides security & compliance dashboards to ensure the environment itself remains secure and minimizes the threat of breaches & unauthorized access.

Seamless Payment Processing

Financial organizations can offer seamless and secure payment processing services by integrating payment gateways with Azure API Management. This integration ensures that payment data is transmitted securely and efficiently, enhancing the customer experience and reducing transaction times. API Management benefits your products and customers as much as it benefits your development teams: implementing it provides full lifecycle support for your APIs, API discovery, and a developer portal to streamline both development and operational needs.

Enhanced Risk Management

Risk management is a critical aspect of financial services. Azure Integration Services can integrate risk assessment tools with core banking systems to provide real-time insights into potential risks. For example, a lending institution can use Azure Functions to analyze borrower data and assess credit risk more accurately, leading to better-informed lending decisions.

 

Conclusion

Azure Integration Services offers a powerful suite of capabilities that enable financial organizations to enhance connectivity, scalability, security, and efficiency. By leveraging these services, organizations can drive innovation, improve customer experiences, and maintain a competitive edge in the market. The high-impact examples highlighted in this post demonstrate the transformative potential of Azure Integration Services in the financial services industry, making it an indispensable asset for forward-thinking organizations.

By embracing Azure Integration Services, financial institutions can navigate the complexities of the digital era with confidence and agility, positioning themselves for sustained success and growth.

Contact us to learn more!

Learn more about our Financial services capabilities with our Financial Services Lookbook

Learn more about our Azure solutions & capabilities here.

Understanding Azure OpenAI Parameters
https://blogs.perficient.com/2024/07/26/understanding-azure-openai-parameters/ (Fri, 26 Jul 2024)

Azure OpenAI Service offers powerful tools for utilizing OpenAI’s advanced generative AI models. These models, capable of producing human-like text, images, and even code, are revolutionizing various industries. By understanding and optimizing various parameters, you can significantly enhance the performance and precision of these models for specific applications. This blog explores the key parameters available in Azure OpenAI, how they influence model behavior, and best practices for tuning them to suit your needs.

What are Parameters in Azure OpenAI?

In Azure OpenAI, parameters are settings that allow you to control and fine-tune the behavior and output of the AI models. By adjusting these parameters, such as temperature, max tokens, and sampling methods, you can influence how deterministic, creative, or diverse the generated responses are. This customization enables the models to better meet specific needs and use cases, enhancing their performance and relevance for various tasks.

Azure OpenAI Parameters

1. Model Selection 

Azure OpenAI offers different models, each with unique capabilities and performance characteristics. Selecting the right model is crucial for achieving the desired results. The primary models include:

  • GPT-3/4: Versatile and powerful, suitable for a wide range of tasks.
  • DALL-E: Specialized in generating images from textual descriptions.
  • TTS (Text-to-Speech): Converts written text into natural-sounding speech, ideal for applications like voice assistants, audiobooks, and accessibility features.
  • Whisper: Advanced speech-to-text model that accurately transcribes spoken language into written text, suitable for tasks like transcription, voice commands, and real-time speech recognition.
  • Embedding Models: Create vector representations of text, capturing the meaning and context of words and phrases to enable tasks such as semantic search, text classification, and recommendation systems.

2. Temperature

The Temperature parameter regulates the randomness of the model’s responses. A higher value leads to more random outputs, while a lower value ensures the output is more deterministic and focused.

  • Low Temperature (0-0.3): This produces more focused and predictable results, making it ideal for tasks requiring precise answers.
  • Medium Temperature (0.4-0.7): Balances creativity and accuracy. Suitable for general-purpose tasks.
  • High Temperature (0.8-1.0): This temperature generates diverse and creative responses. It is useful for brainstorming or creative writing.

3. Max Tokens

Max Tokens defines the maximum length of the generated response. One token generally represents a single word or part of a word.

  • Short Responses (10-50 tokens): Suitable for concise answers or single-sentence responses.
  • Medium Responses (50-150 tokens): Ideal for paragraphs or detailed explanations.
  • Long Responses (150+ tokens): Best for comprehensive articles or in-depth content.

4.  Top-p (Nucleus Sampling)

Top-p (or nucleus sampling) controls the diversity of the output by considering only the most probable tokens whose cumulative probability is above a certain threshold. It ranges from 0 to 1.

  • Low Top-p (0-0.3): Limits the model to the most likely tokens, producing very deterministic responses.
  • Medium Top-p (0.4-0.7): Balances between diversity and probability, providing varied but sensible outputs.
  • High Top-p (0.8-1.0): Allows for more diverse and creative responses, suitable for tasks requiring a wide range of possibilities.

5. Frequency Penalty

Frequency Penalty discourages the model from repeating the same tokens. It ranges from 0 to 1, with higher values reducing repetition.

  • Low Penalty (0-0.3): Minimal impact on repetition, useful for tasks where repeating key phrases is important.
  • Medium Penalty (0.4-0.7): Balances repetition and variety, suitable for most general tasks.
  • High Penalty (0.8-1.0): Strongly discourages repetition, ideal for creative writing or brainstorming.

6. Presence Penalty

Presence Penalty affects the model’s likelihood of introducing new topics or ideas. It ranges from 0 to 1, with higher values encouraging more novel content.

  • Low Penalty (0-0.3): This keeps the content focused on existing topics, which is useful for detailed analysis or follow-up questions.
  • Medium Penalty (0.4-0.7): This penalty encourages a moderate level of new ideas, suitable for balanced content generation.
  • High Penalty (0.8-1.0): Promotes the introduction of new topics, ideal for creative brainstorming or exploratory writing.
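
Putting these parameters together, the snippet below is a minimal sketch using the openai Python package against an Azure OpenAI chat deployment; the endpoint, API key, API version, and deployment name are placeholders, and the parameter values simply illustrate a mid-range configuration.

    from openai import AzureOpenAI

    # Placeholders: use your own Azure OpenAI resource values
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-api-key>",
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="<your-gpt-deployment-name>",  # the deployment name, not the base model
        messages=[{"role": "user", "content": "Summarize the benefits of parameter tuning."}],
        temperature=0.5,        # medium: balances creativity and accuracy
        max_tokens=150,         # medium-length response
        top_p=0.9,              # fairly diverse nucleus sampling
        frequency_penalty=0.3,  # mild discouragement of repetition
        presence_penalty=0.3,   # mild encouragement of new topics
    )

    print(response.choices[0].message.content)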

Best Practices for Azure OpenAI Parameter Tuning

  1. Understand the Task: Clearly define the purpose of your task and select parameters that align with your goals.
  2. Experiment and Iterate: Start with default values and gradually adjust parameters based on the performance and desired output.
  3. Balance Trade-offs: When tuning parameters, consider the trade-offs between creativity, accuracy, and computational cost.
  4. Use Multiple Parameters: Combine different parameters to fine-tune the model’s behavior for specific use cases.
  5. Monitor and Evaluate: Continuously monitor the model’s performance and adjust as needed to maintain optimal results.

Optimizing Azure OpenAI parameters is essential for tailoring the model’s behavior to meet specific needs. By understanding and effectively tuning these parameters, you can harness the full potential of Azure OpenAI and achieve superior results for a wide range of applications. Whether generating content, developing code, or exploring new ideas, the right parameters will help you get the most out of your AI models.

Andrew Hammond Brings Value and Expertise to Perficient’s Microsoft Practice
https://blogs.perficient.com/2024/07/03/andrew-hammond-brings-value-and-expertise-to-perficients-microsoft-practice/ (Wed, 03 Jul 2024)

As the digital world continues to evolve, organizations must pivot how they do business to meet consumer needs. Through Perficient’s People Promise, and our award-winning Growth for Everyone programming, we’re providing resources to further enable the success of our colleagues and the industries they support.  

We recently sat down with Andrew Hammond, senior solutions architect, Microsoft, to learn more about his role working in the Microsoft business unit (BU) and expertise with Microsoft Azure. Continue reading to discover more about Andrew’s experience at Perficient and to catch a glimpse of his family’s spooktacular Halloween traditions! 

What is your role? Describe a typical day in the life.   

I do a little bit of everything, and I actively work on sales pursuits and on my own projects. I also assist others on their projects. I’m in Perficient’s Microsoft BU, and work with the Azure team specifically. The Microsoft BU includes Azure, Modern Work, and Dynamics. 

My responsibilities reflect the projects I’m assigned. That could include attending client meetings, deploying things in Microsoft Azure, and writing code. In addition to working on client projects, I also help my fellow colleagues work through problems and find alternative methods to deploy things. I work with our sales teams and directors to create presentations and resource plans for sales pursuits. I also meet with new or existing clients to discuss opportunities and projects.  

I love to learn new things to stay challenged and engaged. While I think I know a lot about Azure, it seems like every day there’s something new I’m learning and picking up. We do a lot of infrastructure-as-code, and I really enjoy that.  

Read More: Learn About Perficient’s Microsoft Azure Consulting Solutions  

What are your proudest accomplishments, personally and professionally? Any milestone moments at Perficient?   

There was a new service in Azure, the Azure VMware solution, and I was the first one at Perficient to lead a project on this and be successful. I created pursuit artifacts that we can use for Azure VMware projects in the future. 

What has your experience at Perficient taught you?   

One of the things that I’ve learned is that you need to speak up and not be afraid to propose new ideas or alternative solutions. Clients come to the table and want something done a certain way, but there could be a better option. Instead of only doing what is asked, suggest alternatives and better ways to do things. Within the cloud space, whether it’s Azure, AWS, Google, or any of the clouds we work with, there’s always evolving tech and new ways to do things.   

 

 

What motivates you in your daily work?   

I hold myself to a high standard, and I also hold my work to that same standard. Making sure that I’m delivering the best I can for my customers, and ensuring the highest quality, is what drives me forward. I take pride in that. 

What advice would you give to colleagues who are starting their career with Perficient?   

[Image: Andrew and his family exploring Arkansas]

Always be open to learning new things and continuing to grow. Don’t be afraid to go out and get certifications because they help you continue your education. Also, don’t be afraid of new things. A lot of the technologies that I work with were new to me as I had not deployed them before. I went out and learned them, and through this, I was able to deploy them for customers and make the project successful.  

When I came to Perficient, I did not have any certifications. Now, I have six Microsoft certifications and completed SAFe 6 Agilist certification.  

Learn More:  Discover How Perficient Prioritizes Growth for Everyone  

Whether big or small, how do you make a difference for our clients, colleagues, communities, or teams?   

The first project I worked on at Perficient was with a team of relatively new colleagues. Through that experience, we all created a strong bond and became friends. After completing the project, we continued to stay in touch. One of my fellow colleagues ran into a few challenges, and I acted as a pseudo-mentor by providing advice. It was a nice opportunity to share my knowledge and experience, while helping my peer grow and overcome problems. It was a great experience for both of us. I’m always willing to help. 

What are you passionate about outside of work?   

I’m passionate about my family. I’m married and have a daughter. We spend as much time as we can together and are very close.  We like to travel, but always seemed to take beach vacations. We have started going to random places to find something new, including Wisconsin, Michigan, Arizona, and Tennessee. It’s exciting to travel all over the place to see what’s out there.  

My favorite place to visit is Gatlinburg, Tennessee. We loved going hiking in the mountains and seeing the wildlife and bears. It’s very nice and laid back, so we enjoyed it.  

What’s one thing you wish your colleagues knew about you? 

One of the things I like to do is create Halloween displays in my yard. My family does a lot of digital decorations with projectors, screens in our yards, and physical objects. We built a large display last year and have a goal to continue growing it.

Read More: Check Out Andrew’s Spooktacular Halloween House Feature  

My parents work at a campground in the summer, and they have a Halloween party every year. People decorate their house or campsite with Halloween decorations. It’s a competition with three categories: campsites, RV sites, and cabins. My family does a large display there too. It’s always fun to set up.   

SEE MORE PEOPLE OF PERFICIENT 

It’s no secret our success is because of our people. No matter the technology or time zone, our colleagues are committed to delivering innovative, end-to-end digital solutions for the world’s biggest brands, and we bring a collaborative spirit to every interaction. We’re always seeking the best and brightest to work with us. Join our team and experience a culture that challenges, champions, and celebrates our people.  

Visit our Careers page to see career opportunities and more!

Go inside Life at Perficient and connect with us on LinkedIn, YouTube, Twitter, Facebook, TikTok, and Instagram.

Getting Started with Azure DevOps Boards and Repos
https://blogs.perficient.com/2024/05/15/getting-started-with-azure-devops-boards-and-repos/ (Wed, 15 May 2024)

The previous blog post explored the initial steps of setting up Azure DevOps, creating projects, and navigating the Overview section. Now, let’s delve deeper into the other core sections of the Azure DevOps interface, Boards, and Repos, each playing a crucial role in your development lifecycle.

Read the first part of the blog here: A Beginner’s Guide to Azure DevOps

Navigating the Azure DevOps Interface

The Azure DevOps interface is divided into sections mainly – Overview, Repos, Pipelines, Boards, Test Plans, and Artifacts. Let’s continue learning about the different sections.

Boards

Azure DevOps Boards enables teams to organize, visualize, and track their work effectively throughout the development lifecycle. It provides a set of features that help teams manage their tasks, collaborate effectively, and deliver high-quality software.

[Image: Azure Boards]

Work Items

Work Items are the building blocks of Azure DevOps Boards. They represent tasks, issues, or ideas that need to be tracked and managed. Work Items can be of different types:

  • User Stories: Capture user requirements and functionalities planned for your application.
  • Tasks: Break down user stories into smaller, actionable items for developers to complete.
  • Bugs/Issues: Track and manage software defects discovered during development or testing.
  • Epics: Group large or complex user stories that span multiple development cycles.

As a tester, you can use Work Items to track your testing activities, log bugs and issues, and collaborate with your team members. You can assign work items to team members, link to other Work Items, and schedule for specific iterations or sprints.

Boards

Boards provide a visual representation of Work Items and their status, displayed using the Kanban methodology. Kanban boards are ideal for managing work in progress: they show Work Items moving through stages of completion such as To Do, In Progress, and Done. Testers can use Kanban boards to track the progress of their testing activities and ensure that all tasks are completed on time.

Backlogs

Backlogs are lists of all Work Items that are yet to be scheduled or planned. The backlog serves as a central repository for user stories, bugs, and other tasks. Azure DevOps Boards offers Product Backlogs and Sprint Backlogs to help teams manage their work effectively.

  • Product Backlog: The Product Backlog contains a list of all Work Items that need to be completed for the project. Testers can use the Product Backlog to prioritize their testing activities and ensure that critical tasks are addressed first.
  • Sprint Backlog: The Sprint Backlog contains the Work Items planned for a specific sprint. Testers can use the Sprint Backlog to track their testing tasks for each sprint and monitor their progress.

Sprint

A Sprint is a time-boxed iteration in which a set of Work Items is completed. Sprints are used in Agile methodologies to deliver incremental value to the customer. Testers can use Sprints to plan their testing activities for each iteration and ensure that all tasks are completed within the specified timeframe.

Queries

Queries allow teams to create custom queries to filter and sort Work Items based on specific criteria. Testers can use Queries to create lists of Work Items for testing, identify bugs and issues, and track the progress of their testing activities.

  • Pre-defined Queries: Azure DevOps offers pre-built queries for common use cases, such as viewing open bugs or tasks assigned to a specific developer.
  • Custom Queries: You can create custom queries with specific filters to target work items based on various attributes like work item type, priority, or creation date.
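
Queries can also be run outside the web UI. The snippet below is a minimal sketch that posts a WIQL query to the Azure DevOps REST API with Python's requests library; the organization, project, and personal access token (PAT) are placeholders.

    import requests

    # Placeholders: substitute your own organization, project, and PAT
    ORG = "<organization-name>"
    PROJECT = "<project-name>"
    PAT = "<personal-access-token>"

    url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0"

    # WIQL query for all active bugs in the project
    wiql = {
        "query": (
            "SELECT [System.Id], [System.Title], [System.State] "
            "FROM WorkItems "
            "WHERE [System.TeamProject] = @project "
            "AND [System.WorkItemType] = 'Bug' "
            "AND [System.State] = 'Active'"
        )
    }

    # PAT authentication uses basic auth with an empty username
    resp = requests.post(url, json=wiql, auth=("", PAT))
    resp.raise_for_status()

    for item in resp.json().get("workItems", []):
        print(item["id"], item["url"])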

Delivery Plans

Delivery Plans provide a timeline view of Work Items across multiple teams and projects. Testers can use Delivery Plans to track the progress of their testing activities in relation to the overall project timeline and identify any dependencies or bottlenecks.

Analytics View

Analytics View provides a set of interactive reports and dashboards that help teams track and visualize their progress. Testers can use Analytics View to gain insights into their testing activities, identify trends and patterns, and make data-driven decisions to improve their testing process. Analytics views also let you create filtered views of board data for Power BI reporting.

Repos

Azure DevOps Repos is the cornerstone of codebase management within Azure DevOps, integrating seamlessly with Git, the industry-standard version control system (VCS).

[Image: Azure Repos]

Files

Files are the individual units of your codebase in Azure DevOps Repos. They contain the actual code, configuration, or documentation for your project. Testers can view, edit, and manage files directly within Azure DevOps, making it easy to update test scripts, documentation, or configuration files as needed.

Commits

Commits represent a snapshot of changes made to one or more files in your repository. Each commit has a unique identifier and includes a message describing the changes. Testers use commits to track changes, review the history of the codebase, and understand the reason for specific changes.

Pushes

Pushes are the action of uploading commits from your local machine to the Azure DevOps repository. When you complete a set of changes and are ready to share them with the team, you push your commits to the repository. This makes the changes available to other team members for review and integration.

Branches

Branches are separate lines of development within the repository. They allow testers to work on features or fixes without affecting the main codebase. You can create branches from the main branch, make changes, and merge them back when the work is complete. Branches isolate changes, enabling parallel development, experimentation, and easier collaboration.

Tags

Tags are a way to bookmark specific commits, such as a release or a milestone. Testers can use tags to identify important points in the project’s history, making it easier to reference specific versions of the codebase. You can use Tags in conjunction with releases to track which code was deployed to a particular environment.

Pull Requests

Pull Requests (PRs) are a mechanism for code review and collaboration in Azure DevOps Repos. Once a developer completes their work on a branch, they submit a pull request (PR) for review and merging. This initiates a collaborative process:

  • Code Review: Other developers can review the changes proposed in the PR, providing feedback and suggestions.
  • Merging: After a successful review, you can merge the changes in the branch back into the main codebase, integrating the work into the project.

Advanced Settings

Azure DevOps Repos offers a range of advanced settings to customize the behavior of your repository. Testers can configure settings related to permissions, branch policies, merge strategies, and more.

These components work together seamlessly in Repos. Developers make changes to files, commit them with descriptive messages, and push them to the repository. Branches allow for the isolated development on features, while PRs facilitate collaboration and code review. Finally, advanced settings provide granular control over access and code quality.

Conclusion

This concludes our second part of the exploration of the core sections within the Azure DevOps interface. By understanding the functionalities of each section, you can leverage the platform’s full potential to streamline your software development lifecycle, foster collaboration, and deliver high-quality applications effectively.

Stay tuned for the next posts where we will delve into other core sections of Azure DevOps.

A Beginner’s Guide to Azure DevOps
https://blogs.perficient.com/2024/05/14/a-beginners-guide-to-azure-devops/ (Tue, 14 May 2024)

Introduction

Why Azure DevOps is Ideal for Testing

Azure DevOps offers a comprehensive suite of tools that cater to the diverse needs of testing teams. Here’s why Azure DevOps stands out for testing:

  1. Test Planning and Management: Azure DevOps provides tools for creating and managing test plans. Teams can define test suites, test configurations, and test cases, and track the progress of testing activities. This helps in organizing testing efforts and ensuring comprehensive test coverage.
  2. Automated Testing: Azure DevOps supports automated testing through integration with popular testing frameworks like Selenium, JUnit, and NUnit. Teams can automate test execution as part of their CI/CD pipelines, enabling faster and more reliable testing.
  3. Continuous Integration and Deployment (CI/CD): Azure DevOps offers robust CI/CD capabilities, allowing teams to automate build, test, and deployment processes. This helps in delivering high-quality software faster and more frequently.
  4. Collaboration and Communication: Azure DevOps promotes collaboration among team members by providing shared dashboards, real-time updates, and integration with communication tools like Microsoft Teams. This enhances team coordination and visibility into testing activities.
  5. Reporting and Analytics: Azure DevOps provides detailed reporting and analytics capabilities. It allows teams to gain insights into testing performance, identify bottlenecks, and make data-driven decisions to improve testing efficiency.
  6. Integration with Azure Services: Azure DevOps integrates seamlessly with other Azure services, such as Azure Boards, Azure Repos, and Azure Pipelines. This enables teams to leverage the full power of the Azure ecosystem in their testing projects.
  7. Security and Compliance: Azure DevOps prioritizes security and compliance, ensuring that testing data is secure and meets regulatory requirements. This is crucial for organizations operating in regulated industries.
  8. Scalability and Flexibility: Azure DevOps is highly scalable and flexible, making it suitable for teams of all sizes. Whether you’re a small team or a large enterprise, Azure DevOps can accommodate your testing needs.

Setting Up Azure DevOps

To get started with Azure DevOps, follow these steps:

  1. Sign Up for Azure DevOps: Go to the Azure DevOps website (https://azure.microsoft.com/en-us/products/devops) and sign up for an account. You can use your Microsoft account or create a new one.
  2. Create an Organization: In Azure DevOps, an organization is a logical container for projects and teams. It serves as the top-level management unit within Azure DevOps, allowing you to group related projects and resources together. When you sign up for Azure DevOps, you are required to create an organization, which acts as a central hub for all your software development activities. After signing up, we will create an organization. Give your organization a name and select a region for your data.
    [Image: creating an Azure DevOps organization]
  3. Access Your Azure DevOps Account: After creating your organization, click on its name to open the Azure DevOps dashboard. This is where you will manage your projects. You can access your organization’s Azure DevOps URL at “https://dev.azure.com/<organization-name>”.
    [Image: Azure DevOps organization home page]

Creating Projects

In Azure DevOps, a project is a container for all the work done within a specific team or for a product. It provides a structure for organizing your work items, repositories, pipelines, and other resources related to your software development process. With your Azure DevOps account set up, you can now create a new project:

  1. Navigate to the Projects Page: Click on the “Projects” tab in the Azure DevOps dashboard to access the Projects page.
  2. Create a New Project: Click on the “New Project” button to create a new project. Enter a name and description for your project, choose visibility (Public or Private), choose a version control system (Git or Team Foundation Version Control), and select a process template (Basic, Agile, Scrum, or CMMI).
    [Image: creating a new project in Azure DevOps]
  3. Configure Project Settings: Once your project is created, you can configure its settings, such as adding team members, setting up work items, and configuring boards.
    [Image: project settings page]

Navigating the Azure DevOps Interface

The Azure DevOps interface is divided into several main sections:

Overview

The Overview section in Azure DevOps provides a high-level summary of your project, including key metrics, recent activities, and important information. It serves as a central hub for team members to quickly access relevant project information and stay up-to-date with project progress.

[Image: Azure DevOps Overview section]

  1. Summary: The Summary tab provides a snapshot of your project’s status and progress. It includes information such as the number of work items completed, in progress, and planned. The Summary tab also displays recent activities, such as code changes, build results, and work item updates. It helps team members stay informed about the latest developments in the project.
  2. Dashboard: The Dashboard tab allows you to create customizable dashboards. It provides a visual representation of your project’s metrics and KPIs. You can add various widgets to your dashboard, such as charts, graphs, and work item queries, to track progress, monitor trends, and visualize data. Dashboards can be shared with team members, providing transparency and enabling collaboration. Dashboards update in real time.
  3. Wiki: The Wiki tab provides a collaborative space for team members to create and share project documentation, meeting notes, and other information. The Wiki supports Markdown formatting, making it easy to create well-formatted and organized content. Team members can collaborate on wiki pages, track changes, and refer back to previous versions, ensuring that project documentation is up-to-date and accessible.

Conclusion

Azure DevOps provides a powerful set of tools for managing the entire software development lifecycle. In this guide, we covered the basics of setting up Azure DevOps, creating projects, and navigating the Overview section of the interface; the remaining sections will be discussed in an upcoming blog. As you become more familiar with Azure DevOps, you can explore its advanced features to further streamline your development process.

]]>
https://blogs.perficient.com/2024/05/14/a-beginners-guide-to-azure-devops/feed/ 0 362555
Azure SQL Server Performance Check Automation https://blogs.perficient.com/2024/04/11/azure-sql-server-performance-check-automation/ https://blogs.perficient.com/2024/04/11/azure-sql-server-performance-check-automation/#respond Thu, 11 Apr 2024 13:37:29 +0000 https://blogs.perficient.com/?p=361522

On operational projects that involve heavy daily data processing, there is a need to monitor DB performance. Over time the workload grows, causing potential issues. While there are best practices for handling the processing through DBA strategies (indexing, partitioning, collecting stats, reorganizing tables/indexes, purging data, allocating bandwidth separately for ETL/DWH users, peak-time optimization, effective query rewrites, etc.), it is still necessary to stay aware of DB performance and monitor it consistently so further action can be taken.

If admin access is not available to validate performance directly on Azure, building automations can help monitor resource usage and trigger the necessary steps before the DB runs into performance issues or failures.

For DB performance monitoring, an IICS (Informatica) job can be created with a Data Task that executes a query against the SQL Server metadata tables to check performance, with email alerts triggered once a metric crosses its threshold percentage (for example, CPU/IO usage above 80%).

The IICS mapping design is shown below (scheduled to run hourly). The email alerts contain the metric percentage values.

(Figure: IICS mapping design for the SQL Server performance check automation)

Note: Email alerts are triggered only if the threshold limit is exceeded.

IICS ETL Design:

(Figure: IICS ETL design for the SQL Server performance check automation)

IICS ETL Code Details:

  1. A Data Task is used to get the SQL Server performance metrics (CPU and IO percent).

                                          (Screenshot: SQL Server performance check query, part 1)

The query checks whether usage exceeds 80%. If usage exceeds the threshold limit (the user can set this to a specific value such as 80%), an email alert is sent.

                                         (Screenshot: SQL Server performance check query, part 2)
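
The actual query is only shown in the screenshots above, so here is a minimal, hedged reconstruction of the same idea as a standalone Python sketch: it reads CPU and IO usage from the sys.dm_db_resource_stats DMV in Azure SQL Database and flags anything above the threshold. The connection details, driver name, and 80% threshold are assumptions, not values taken from the original job.

import pyodbc

THRESHOLD = 80.0  # assumed threshold percentage

# Placeholder connection details; supply your own server, database, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-server>.database.windows.net;"
    "DATABASE=<your-db>;UID=<user>;PWD=<password>"
)

# sys.dm_db_resource_stats keeps roughly one hour of usage, sampled every 15 seconds.
query = """
SELECT MAX(avg_cpu_percent)     AS max_cpu_percent,
       MAX(avg_data_io_percent) AS max_io_percent
FROM sys.dm_db_resource_stats
WHERE end_time > DATEADD(minute, -60, GETUTCDATE());
"""

row = conn.cursor().execute(query).fetchone()
if row.max_cpu_percent > THRESHOLD or row.max_io_percent > THRESHOLD:
    # In the IICS design, this is the point where the Decision task triggers the email alert.
    print(f"ALERT: CPU {row.max_cpu_percent}%, IO {row.max_io_percent}%")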

If Azure_SQL_Server_Performance_Info.dat contains data (it is populated when CPU/IO usage exceeds 80%), the Decision task is activated and an email alert is triggered.

                                          (Screenshot: SQL Server performance check result output)

Email Alert:

                                            (Screenshot: SQL Server performance email alert)

]]>
https://blogs.perficient.com/2024/04/11/azure-sql-server-performance-check-automation/feed/ 0 361522
Read Azure Eventhub data to DataFrame – Python https://blogs.perficient.com/2024/01/08/read-azure-eventhub-data-to-dataframe-python/ https://blogs.perficient.com/2024/01/08/read-azure-eventhub-data-to-dataframe-python/#respond Tue, 09 Jan 2024 03:28:13 +0000 https://blogs.perficient.com/?p=353253

Reading Azure EventHub Data into DataFrame using Python in Databricks

Azure EventHubs offer a powerful service for processing large amounts of data. In this guide, we'll explore how to efficiently read data from Azure EventHub and convert it into a DataFrame using Python in Databricks. This walkthrough combines the scale of Azure EventHubs with the ease of working with DataFrames.

Prerequisites:

Before diving into the code, ensure you have the necessary setup and permissions:

  • Basic knowledge of setting up EventHubs, Key Vaults, and managing secrets.
  • Azure EventHub instance created (in this example, we’ll use “myehub”).
  • Access to Azure Key Vault to securely store and access the required credentials.
  • Basic knowledge of Python, Apache Spark, and Databricks notebooks.


 1. Setting Up the Configuration:

td_scope = "kv-test-01-dev"
namespace_name = "contosoehubns"
shared_access_key_name = "test"
eventhub = "myehub"
shared_access_key = dbutils.secrets.get(scope=td_scope, key="KEY")

# Construct the connection string
connection = f"Endpoint=sb://{namespace_name}.servicebus.windows.net/;SharedAccessKeyName={shared_access_key_name};SharedAccessKey={shared_access_key};EntityPath={eventhub}"

# Define the consumer group
consumer_group = "$Default"

Firstly, this script initializes the configuration for accessing Azure EventHub within a Databricks environment. It sets parameters like the scope, namespace, access key details, and the eventhub itself. Additionally, it constructs the connection string necessary for interfacing with the EventHub service, enabling seamless data consumption.

2. Read EventHub Data

Utilize the Azure SDK for Python (azure-eventhub) to read data from the EventHub. Further, define a function (read_event) to process incoming events and print the data and associated metadata.

# Install the SDK first (e.g., %pip install azure-eventhub in a Databricks notebook)
from azure.eventhub import EventHubConsumerClient

def read_event(partition_context, event):
    event_data = event.body_as_str()
    enqueued_time = event.enqueued_time
    partition_id = partition_context.partition_id
    
    # Process data or perform operations here
    print(event_data)
    print(enqueued_time)
    
    partition_context.update_checkpoint(event)

# Create an EventHub consumer client
client = EventHubConsumerClient.from_connection_string(connection, consumer_group, eventhub_name=eventhub)

with client:
    # Start receiving events
    client.receive(on_event=read_event, starting_position="-1")

The function, read_event, is called for each event received from the EventHub. It extracts information from the event, such as event_data (the content of the event), enqueued_time (the time the event was enqueued), and partition_id (ID of the partition from which the event was received). In this example, it simply prints out the event data and enqueued time, but this is where you’d typically process or analyze the data as needed for your application.

Here, an instance of EventHubConsumerClient is created using the from_connection_string method. It requires parameters like connection (which contains the connection string to the Azure EventHub), consumer_group (the consumer group name), and eventhub_name (the name of the EventHub).

Finally, the client.receive method initiates the event consumption process. The on_event parameter specifies the function (read_event in this case) that will be called for each received event. The starting_position parameter specifies from which point in the event stream the client should start consuming events ("-1" indicates starting from the beginning of the partition; use "@latest" to receive only newly arriving events).

Note that this cell streams events continuously until the command is stopped.

3. Transform to DataFrame

To convert the received data into a DataFrame, employ the capabilities of Pandas within Databricks. Initialize a DataFrame with the received event data within the read_event function and perform transformations as needed.

import pandas as pd

# Inside read_event function
data = pd.DataFrame({
    'Event_Data': [event_data],
    'Enqueued_Time': [enqueued_time],
    'Partition_ID': [partition_id]
})

# Further data manipulation or operations can be performed here
# For example:
# aggregated_data = data.groupby('Some_Column').mean()

# Display the DataFrame in Databricks
display(data)
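
Building a one-row DataFrame per event is fine for illustration, but you may prefer to accumulate events and build a single DataFrame once you stop receiving. A minimal sketch of that variation is below; the in-memory list and the point at which you stop the client are choices made for the example, not part of the SDK.

import pandas as pd

records = []  # accumulates one dict per received event

def read_event_batched(partition_context, event):
    # Collect the fields of interest instead of printing each event
    records.append({
        "Event_Data": event.body_as_str(),
        "Enqueued_Time": event.enqueued_time,
        "Partition_ID": partition_context.partition_id,
    })
    partition_context.update_checkpoint(event)

# Run client.receive(on_event=read_event_batched, starting_position="-1") as above,
# stop it once enough data has arrived, then build a single DataFrame:
df = pd.DataFrame(records)
display(df)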

Here, we’ve outlined the process of reading Azure EventHub data using Python in Databricks. The Azure EventHub Python SDK provides the tools to consume and process incoming data, and by leveraging Pandas DataFrames you can efficiently handle and manipulate this data within the Databricks environment.

Experiment with various transformations and analysis techniques on the DataFrame to derive meaningful insights from the ingested data.

Check out this link for guidance on reading “Azure EventHub data into a DataFrame using Scala in Databricks”, along with a concise overview of setting up EventHubs, KeyVaults, and managing secrets.

Read Azure Eventhub data to DataFrame – scala

]]>
https://blogs.perficient.com/2024/01/08/read-azure-eventhub-data-to-dataframe-python/feed/ 0 353253
White Label Your Mobile Apps with Azure https://blogs.perficient.com/2023/12/21/white-label-your-mobile-apps-with-azure/ https://blogs.perficient.com/2023/12/21/white-label-your-mobile-apps-with-azure/#respond Thu, 21 Dec 2023 15:44:28 +0000 https://blogs.perficient.com/?p=338661

Enterprises and organizations that manage products with overlapping feature sets often confront a unique challenge. Their core dilemma involves creating multiple branded mobile applications that share a common codebase while enabling each app to provide a distinct user experience with minimal development overhead. As a leader in custom mobile solutions, Perficient excels in white labeling mobile applications using the power and flexibility of Azure DevOps.

Tackling the White Label Challenge

Consider a scenario where your application has gained popularity, and multiple clients desire a version that reflects their own brand identity. They want their logos, color schemes, and occasionally distinct features, yet they expect the underlying functionality to be consistent. How do you meet these demands without spawning a myriad of codebases that are a nightmare to maintain? This post outlines a strategy and best practices for white labeling applications with Azure DevOps to meet this challenge head-on.

Developing a Strategy for White Label Success

White labeling transcends merely changing logos and color palettes; it requires strategic planning and an architectural approach that incorporates flexibility.

1. Application Theming

White labeling starts with theming. Brands are recognizable through their colors, icons, and fonts, making these elements pivotal in your design. Begin by conducting a thorough audit of your current style elements. Organize these elements into variables and store them centrally, setting the stage for smooth thematic transitions.
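
The exact theming mechanism depends on your mobile stack (XAML resources, Android themes, SwiftUI styles, and so on), but the underlying idea is a central set of brand tokens with a default to fall back on. A minimal sketch of that structure, written in Python purely for illustration, with made-up brand names, colors, and paths:

# Illustrative only: a central registry of brand "tokens" that the app's theming
# layer would consume. Brand names, colors, fonts, and paths below are made up.
from dataclasses import dataclass

@dataclass(frozen=True)
class Theme:
    primary_color: str
    accent_color: str
    logo_path: str
    font_family: str

THEMES = {
    "default": Theme("#0078D4", "#FFB900", "img/logo.png", "Segoe UI"),
    "BrandA": Theme("#8A2BE2", "#00C49A", "Configurations/BrandA/img/logo.png", "Roboto"),
    "BrandB": Theme("#E81123", "#2D7D9A", "Configurations/BrandB/img/logo.png", "Open Sans"),
}

def theme_for(brand: str) -> Theme:
    # Fall back to the default configuration when a brand has no override.
    return THEMES.get(brand, THEMES["default"])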

2. Establishing Your Default Configuration

Choosing a ‘default’ configuration is crucial. It sets the baseline for development and validation. The default can reflect one of your existing branded applications and acts as a unified starting point for addressing issues, whether related to implementation or theming.

3. Embracing Remote/Cloud Configurations

Tools like the Azure App Configuration SDK or Firebase Remote Configuration allow you to modify app settings without altering the code directly. Azure’s Pipeline Library also helps manage build-time settings, supporting flexible brand-specific configurations.

Using remote configurations decouples operational aspects from app logic. This approach not only supports white labeling but also streamlines the development and customization cycle.

Note: You can add your brand assets (see step 2, "Adding Your 'Brand' Configuration to Your Build," below) to your build artifacts and reference the correct values for that brand in your remote configurations.
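
As an illustration of the idea (not the mobile app code itself), here is a minimal Python sketch using the azure-appconfiguration SDK to pull brand-specific values, assuming each setting is stored in Azure App Configuration with the brand name as its label; the connection string, key, and label names are placeholders.

# A minimal sketch, assuming brand-specific settings live in Azure App Configuration
# with the brand name as the label. The connection string and key names are placeholders.
from azure.appconfiguration import AzureAppConfigurationClient

connection_string = "<your-app-configuration-connection-string>"
client = AzureAppConfigurationClient.from_connection_string(connection_string)

def get_brand_setting(key: str, brand: str) -> str:
    # The same key (e.g., "Theme:PrimaryColor") resolves to a different value per brand label.
    setting = client.get_configuration_setting(key=key, label=brand)
    return setting.value

primary_color = get_brand_setting("Theme:PrimaryColor", brand="BrandA")
print(primary_color)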

Coordinating White Labeled Mobile Apps with Azure Pipelines

With your application ready for theming and remote configuration, use Azure Pipelines to automate the build and release of your branded app artifacts. The structure of your build stages and jobs will depend on your particular needs. Here’s a pattern you can follow to organize jobs and stages for clarity and parallelization:

1. Setting Up Your Build Stage by Platforms

Organize your pipeline by platform, not brand, to reduce duplication and simplify the build process. Start with stages for iOS, Android, and other target platforms, ensuring these build successfully with your default configuration before moving to parallel build jobs.

Run unit tests side by side with this stage to catch issues sooner.

2. Adding Your “Brand” Configuration to Your Build

Keep a master list of your brands to spawn related build jobs. This could be part of a YAML template or a file in your repository. Pass the brand value to child jobs with an input variable in your YAML template to make sure the right brand configuration is used across the pipeline.

Here’s an example of triggering Android build jobs for different brands using YAML loops:

stages:
    - stage: Build
      jobs:
          - job: BuildAndroid
            strategy:
                matrix:
                    BrandA:
                        BrandName: 'BrandA'
                    BrandB:
                        BrandName: 'BrandB'
            steps:
                - template: templates/build-android.yml
                  parameters:
                      brandName: $(BrandName)

3. Creating a YAML Job to “Re-Brand” the Default Configuration

Replace static files specific to each brand using path-based scripts. Swap out the default logo at src/img/logo.png with the brand-specific logo at src/Configurations/Foo/img/logo.png during the build process for every brand apart from the default.

An example YAML snippet for this step would be:

jobs:
    - job: RebrandAssets
      displayName: 'Rebrand Assets'
      pool:
          vmImage: 'ubuntu-latest'
      steps:
          - script: |
                cp -R src/Configurations/$(BrandName)/img/logo.png src/img/logo.png
            displayName: 'Replacing the logo with a brand-specific one'

4. Publishing Your Branded Artifacts for Distribution

Once the pipeline jobs for each brand are complete, publish the artifacts to Azure Artifacts, app stores, or other channels. Ensure this process is repeatable for any configured brand to lessen the complexity of managing multiple releases.

In Azure, decide whether to categorize your published artifacts by platform or brand based on what suits your team better. Regardless of choice, stay consistent. Here’s how you might use YAML to publish artifacts:

- stage: Publish
  jobs:
      - job: PublishArtifacts
        pool:
            vmImage: 'ubuntu-latest'
        steps:
            - task: PublishBuildArtifacts@1
              inputs:
                  PathtoPublish: '$(Build.ArtifactStagingDirectory)'
                  ArtifactName: 'drop-$(BrandName)'
                  publishLocation: 'Container'

By implementing these steps and harnessing Azure Pipelines, you can skillfully manage and disseminate white-labeled mobile applications from a single codebase, making sure each brand maintains its identity while upholding a high standard of quality and consistency.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

]]>
https://blogs.perficient.com/2023/12/21/white-label-your-mobile-apps-with-azure/feed/ 0 338661
How to use Azure Blob Data and Store it in Azure Cognitive Search along with Vectors https://blogs.perficient.com/2023/12/12/how-to-use-azure-blob-data-and-store-it-in-azure-cognitive-search-along-with-vectors/ https://blogs.perficient.com/2023/12/12/how-to-use-azure-blob-data-and-store-it-in-azure-cognitive-search-along-with-vectors/#respond Wed, 13 Dec 2023 05:38:33 +0000 https://blogs.perficient.com/?p=351216

Introduction

In the previous blog post, we showed you how to scrape a website, extract its content using Python, and store it in Azure Blob Storage. In this blog post, we will show you how to use the Azure Blob data and store it in Azure Cognitive Search (ACS) along with vectors. We will use some popular libraries such as OpenAI Embeddings and Azure Search to create and upload the vectors to ACS. We will also show you how to use the vectors for semantic search and natural language applications.

By following this blog post, you will learn how to: 

  • Read the data from Azure Blob Storage using the BlobServiceClient class. 
  • Create the vectors that ACS will use to search through the documents using the OpenAI Embeddings class. 
  • Load the data along with vectors to ACS using the AzureSearch class. 

Read Data from Azure Blob Storage:

The first step is to read the data from Azure Blob Storage, which is a cloud service that provides scalable and secure storage for any type of data. Azure Blob Storage allows you to access and manage your data from anywhere, using any platform or device. 

To read the data from Azure Blob Storage, you need to have an Azure account and a storage account. You also need to install the Azure Storage Blob client library (azure-storage-blob), which provides a simple way to interact with Azure Blob Storage using Python. 

To install the Azure Storage SDK for Python, you can use the following command:

pip install azure-storage-blob

To read the data from Azure Blob Storage, you need to import the BlobServiceClient class and create a connection object that represents the storage account. You also need to get the account URL, the credential, and the container name from the Azure portal. You can store these values in a .env file and load them using the dotenv module. 

For example, if you want to create a connection object and a container client, you can use: 

import os
from dotenv import load_dotenv
from azure.storage.blob import BlobServiceClient

# Load the environment variables from the .env file
load_dotenv()

STORAGEACCOUNTURL = os.getenv("STORAGE_ACCOUNT_URL")
STORAGEACCOUNTKEY = os.getenv("STORAGE_ACCOUNT_KEY")
CONTAINERNAME = os.getenv("CONTAINER_NAME")

# Create the storage account client, get the container client, and list the blobs
blob_service_client_instance = BlobServiceClient(account_url=STORAGEACCOUNTURL, credential=STORAGEACCOUNTKEY)
container_client = blob_service_client_instance.get_container_client(container=CONTAINERNAME)
blob_list = container_client.list_blobs()

Load the Documents and the Vectors to ACS:

The final step is to load the documents and the vectors to ACS, which is a cloud service that provides a scalable and secure search engine for any type of data. ACS allows you to index and query your data using natural language and semantic search capabilities. 

To load the documents and the vectors to ACS, you need to have an Azure account and a search service. You also need to install the Azure Search integration library, which provides a simple way to interact with ACS using Python. 

To install the libraries used here (the AzureSearch vector store in the code below comes from LangChain and relies on the azure-search-documents package), you can use the following command: 

pip install azure-search-documents langchain

To load the documents and the vectors to ACS, you need to import the AzureSearch class and create a vector store object that represents the search service. You also need to get the search endpoint, the search key, and the index name from the Azure portal. You can store these values in a .env file and load them using the dotenv module. 

For example, if you want to create a vector store object and an index name, you can use: 

# Note: the AzureSearch vector store used here is the one provided by LangChain
# (langchain.vectorstores.azuresearch); the parameter names below match that class.
from langchain.vectorstores.azuresearch import AzureSearch
from langchain.embeddings import OpenAIEmbeddings
from dotenv import load_dotenv
import os

# Load the environment variables
load_dotenv()

# Get the search endpoint, the search key, and the index name
vector_store_address: str = os.getenv("VECTOR_STORE_ADDRESS")
vector_store_password: str = os.getenv("VECTOR_STORE_PASSWORD")
index_name: str = os.getenv("INDEX_NAME")

# Create the OpenAI embeddings object used to vectorize documents and queries
# (the API key environment variable name is an assumption)
embeddings = OpenAIEmbeddings(openai_api_key=os.getenv("OPENAI_API_KEY"))

# Create a vector store object
vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

Then, you can load the documents and the vectors to ACS using the add_documents method. This method takes a list of documents as input and uploads them to ACS along with their vectors. A document is an object that contains the page content and the metadata of the web page.

For example, if you want to load the documents and the vectors to ACS using the data stored in Blob Storage, you can use the code snippet below, utilizing the container_client and blob_list from above: 

import json
from langchain.docstore.document import Document  # the Document class consumed by the vector store

def loadDocumentsACS(index_name, container_client, blob_list):
    docs = []
    for blob in blob_list:
        # Read the blob and parse it as JSON
        blob_client = container_client.get_blob_client(blob.name)
        streamdownloader = blob_client.download_blob()
        fileReader = json.loads(streamdownloader.readall())

        # Process the data and build the document list
        text = fileReader["content"] + "\n author: " + fileReader["author"] + "\n date: " + fileReader["date"]
        metafileReader = {'source': fileReader["url"], "author": fileReader["author"], "date": fileReader["date"], "category": fileReader["category"], "title": fileReader["title"]}

        # Skip blobs with empty content
        if fileReader["content"] != "":
            docs.append(Document(page_content=text, metadata=metafileReader))

    # Load the documents (and their vectors) to ACS
    vector_store.add_documents(documents=docs)
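
A minimal usage sketch, reusing the container_client and blob_list created earlier and the index_name loaded from the environment:

# Index every blob currently in the container into ACS
loadDocumentsACS(index_name, container_client, blob_list)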

You can verify whether your data has been indexed or not in the indexes of the Azure Cognitive Search (ACS) service on the Azure portal. Refer to the screenshot below for clarification.

(Screenshot: the blog data indexed in Azure Cognitive Search)

Conclusion:  

This blog post has guided you through the process of merging Azure Blob data with Azure Cognitive Search, enhancing your search capabilities with vectors. This integration simplifies data retrieval and empowers you to navigate semantic search and natural language applications with ease. As you explore these technologies, the synergy of Azure Blob Storage, OpenAI Embeddings, and Azure Cognitive Search promises a more enriched and streamlined data experience. Stay tuned for the next part, where we will use these vectors to perform vector search on user queries and generate responses. 

]]>
https://blogs.perficient.com/2023/12/12/how-to-use-azure-blob-data-and-store-it-in-azure-cognitive-search-along-with-vectors/feed/ 0 351216