
Harnessing the Power of AWS Bedrock through CloudFormation


The rapid advancement of artificial intelligence (AI) has led to the development of foundational models that form the bedrock of numerous AI applications. AWS Bedrock is Amazon Web Services’ comprehensive solution that leverages these models to provide robust AI and machine learning (ML) capabilities. This blog delves into the essentials of AI foundational models in AWS Bedrock, highlighting their significance and applications.

What are AI Foundational Models?

AI foundational models are pre-trained models designed to serve as the basis for various AI applications. These models are trained on extensive datasets and can be fine-tuned for specific tasks, such as natural language processing (NLP), image recognition, and more. The primary advantage of using these models is that they significantly reduce the time and computational resources required to develop AI applications from scratch.

AWS Bedrock: A Comprehensive AI Solution

AWS Bedrock provides a suite of foundational models that are easily accessible and deployable. These models are integrated into the AWS ecosystem, allowing users to leverage the power of AWS’s infrastructure and services. AWS Bedrock offers several key benefits:

  1. Scalability: AWS Bedrock models can scale to meet the demands of large and complex applications. The AWS infrastructure ensures that models can handle high volumes of data and traffic without compromising performance.
  2. Ease of Use: With AWS Bedrock, users can access pre-trained models via simple API calls. This ease of use allows developers to integrate AI capabilities into their applications quickly and efficiently.
  3. Cost-Effectiveness: Utilizing pre-trained models reduces the need for extensive computational resources and time-consuming training processes, leading to cost savings.

Key Components of AWS Bedrock

AWS Bedrock comprises several key components designed to facilitate the development and deployment of AI applications:

  1. Pre-trained Models: These models are the cornerstone of AWS Bedrock. They are trained on vast datasets and optimized for performance. Users can select models tailored to specific tasks, such as text analysis, image classification, and more.
  2. Model Customization: AWS Bedrock allows users to fine-tune pre-trained models to meet their specific needs. This customization ensures that the models can achieve high accuracy for specialized applications.
  3. Integration with AWS Services: Bedrock models seamlessly integrate with other AWS services, such as AWS Lambda, Amazon S3, and Amazon SageMaker. This integration simplifies the deployment and management of AI applications.

 

Amazon Bedrock supports a wide range of foundation models from industry-leading providers. We can choose the model best suited to our goals.

Here are just a few of the popular ones:

(Screenshot: a selection of popular Amazon Bedrock foundation models and their providers.)

Note: Users with the appropriate IAM permissions must manually enable access to the Bedrock foundation models (FMs) they want to use. Once model access is granted in a particular region, the models can be used to build and scale applications there.
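Once access has been granted in the console, one way to confirm which foundation models are actually available in a region is the Bedrock control-plane API. Below is a minimal boto3 sketch; the region and the provider filter are illustrative choices, not values from this post.

```python
import boto3

# Bedrock control-plane client (model management), distinct from the
# "bedrock-runtime" client used later for inference.
bedrock = boto3.client("bedrock", region_name="us-east-1")  # region is illustrative

# List the foundation models offered by Amazon (e.g., the Titan family).
response = bedrock.list_foundation_models(byProvider="Amazon")

for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["modelName"])
```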

Using AWS Bedrock requires specific IAM permissions so that users and applications can interact with the service securely and effectively. The permission categories typically needed are basic Bedrock access, model training and deployment, inference and usage, data management, compute resource management, security and identity management, and monitoring and logging.

The cost parameters for AWS Bedrock include compute, storage, data transfer, and model usage, which for text models is billed by the number of input and output tokens on demand or by provisioned throughput units per month. Understanding these parameters helps estimate the cost of deploying and running AI models with AWS Bedrock. For precise calculations, AWS provides the AWS Pricing Calculator and detailed pricing information on its official website.
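As a rough back-of-the-envelope illustration of how token-based, on-demand pricing adds up, the sketch below multiplies monthly token volumes by per-1,000-token rates. The rates and volumes are placeholders, not actual AWS prices; use the AWS Pricing Calculator for real figures.

```python
# Illustrative only: the rates below are placeholders, NOT actual AWS Bedrock prices.
PRICE_PER_1K_INPUT_TOKENS = 0.0005   # placeholder USD rate
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # placeholder USD rate

def estimate_monthly_model_usage_cost(requests_per_month: int,
                                      avg_input_tokens: int,
                                      avg_output_tokens: int) -> float:
    """Rough on-demand estimate for model usage only (excludes compute, storage, data transfer)."""
    input_cost = requests_per_month * avg_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
    output_cost = requests_per_month * avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    return input_cost + output_cost

# Example: 100,000 requests per month, averaging 500 input and 200 output tokens each.
print(f"${estimate_monthly_model_usage_cost(100_000, 500, 200):,.2f} per month")
```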

Let’s implement one of these foundation models (for example, Titan) using the AWS CloudFormation service.

Amazon Titan in Amazon Bedrock

The Amazon Titan series of models, exclusive to Amazon Bedrock, benefits from Amazon’s 25 years of innovation in AI and machine learning. Through a fully managed API, these Titan foundation models (FMs) offer a range of high-performing options for text, image, and multimodal content. Created by AWS, Titan models are pre-trained on extensive datasets, making them powerful and versatile for a wide range of applications while promoting responsible AI usage. They can be used as they are or customized privately with your own data.

Titan models fall into three categories: embeddings, text generation, and image generation. Here, we will focus on the Amazon Titan text generation models, which include Amazon Titan Text G1 – Premier, Amazon Titan Text G1 – Express, and Amazon Titan Text G1 – Lite. We will implement “Titan Text G1 – Premier” from this list.

Amazon Titan Text G1 – Premier

Amazon Titan Text G1 – Premier is a large language model (LLM) for text generation. It integrates with Amazon Bedrock Knowledge Bases and Amazon Bedrock Agents and is highly useful for a variety of tasks, including summarization, code generation, and answering open-ended and context-based questions. It also supports custom fine-tuning in preview.

ID – amazon.titan-text-premier-v1:0

Max tokens – 32,000

Language – English only

Use cases – 32k context window, open-ended text generation, context-based question answering, knowledge base support, agent support, chain of thought, rewrite, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, extraction, QnA, chat, and model customization (preview).

Inference parameters – Temperature, Top P (Default: Temperature = 0.7, Top P = 0.9)
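These inference parameters are passed in the request body when the model is invoked. The sketch below shows the Titan text request payload with the default values listed above; the prompt and maxTokenCount are illustrative.

```python
import json

# Request body for Amazon Titan text models: the prompt plus a textGenerationConfig block.
payload = json.dumps({
    "inputText": "Hello, how are you?",  # illustrative prompt
    "textGenerationConfig": {
        "maxTokenCount": 512,            # illustrative limit (the model supports a 32k context)
        "temperature": 0.7,              # default temperature
        "topP": 0.9,                     # default Top P
        "stopSequences": []
    }
})
```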

To implement this with CloudFormation, we first need to create a stack. A CloudFormation stack template defines your AWS infrastructure and resources in either JSON or YAML format.

Let’s implement a Python AWS Lambda function that uses Bedrock’s Titan model to generate text from an input prompt, deployed through a YAML-based CloudFormation template.

(Screenshot: the YAML CloudFormation template for the Bedrock Titan Lambda stack.)
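Since the template itself appears only as a screenshot, here is a minimal sketch of what such a template might look like; the resource names, runtime version, and timeout are illustrative assumptions rather than a reproduction of the original.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Sketch - Lambda function that invokes the Amazon Titan Text G1 - Premier model on Bedrock.

Resources:
  # IAM role that Lambda assumes; grants permission to invoke and list Bedrock models.
  BedrockLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: BedrockInvokeAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - bedrock:InvokeModel
                  - bedrock:ListFoundationModels
                Resource: "*"   # could be narrowed to the Titan model ARN

  # Python Lambda function whose inline handler calls the Titan model.
  TitanTextFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: titan-text-demo        # illustrative name
      Runtime: python3.12
      Handler: index.lambda_handler
      Timeout: 60
      Role: !GetAtt BedrockLambdaRole.Arn
      Code:
        ZipFile: |
          # Abridged inline handler; the full version is shown in the index.py sketch below.
          import json
          import boto3

          def lambda_handler(event, context):
              client = boto3.client("bedrock-runtime")
              # ... build the Titan request body and call client.invoke_model(...)
              return {"statusCode": 200, "body": json.dumps("placeholder")}
```

Assuming the template is saved as bedrock-titan.yaml (an illustrative file name), it can be deployed from the AWS CLI with aws cloudformation deploy --template-file bedrock-titan.yaml --stack-name titan-text-demo --capabilities CAPABILITY_IAM; the capability flag is required because the stack creates an IAM role.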

The CloudFormation template defines resources for an IAM role and a Lambda function that invokes a model from AWS Bedrock, i.e., it provides text generation capabilities through the specified Titan model.

IAM Role

  • Allows Lambda to assume the role and invoke the Bedrock model.
  • Grants permission to invoke the Bedrock Titan model used by the function and to list the available foundation models.

Lambda Function

  • A Python function that uses Boto3 to invoke the Bedrock model amazon.titan-text-premier-v1:0.
  • Sends a JSON payload containing the prompt and a text generation configuration to the model and returns the model’s output as the HTTP response.
  • If we open the Lambda function in the console, the “index.py” file contains:

(Screenshot: the inline index.py handler shown in the Lambda console code editor.)
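The handler is only visible as a screenshot, so the following is a reconstruction sketch based on the description in this post and the documented Titan text request format; the maxTokenCount and default prompt are illustrative.

```python
import json
import boto3

# Bedrock runtime client (data plane) used for model inference.
bedrock_runtime = boto3.client("bedrock-runtime")

MODEL_ID = "amazon.titan-text-premier-v1:0"

def lambda_handler(event, context):
    # Take the prompt from the incoming event, with a simple default for console tests.
    prompt = event.get("prompt", "Hello, how are you?")

    # Titan text request body: the prompt plus a text generation configuration.
    body = json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": 512,   # illustrative limit
            "temperature": 0.7,
            "topP": 0.9,
            "stopSequences": []
        }
    })

    # Invoke the Titan model and parse the JSON response stream.
    response = bedrock_runtime.invoke_model(
        modelId=MODEL_ID,
        contentType="application/json",
        accept="application/json",
        body=body,
    )
    result = json.loads(response["body"].read())

    # Titan returns a list of results; take the generated text from the first one.
    generated_text = result["results"][0]["outputText"]

    return {
        "statusCode": 200,
        "body": json.dumps({"generated_text": generated_text}),
    }
```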
This AWS Lambda function interacts with the AWS Bedrock service to generate text based on an input prompt. It creates a client for the Bedrock runtime, invokes a specific text generation model with given configurations, processes the response to extract the generated text, and returns this text in an HTTP response. This setup allows for the automation of text generation tasks using AWS Bedrock’s capabilities.

Execution Results

(Screenshot: the Lambda test execution results.)

As seen in the Response window, for the input “Hello, how are you?” the model returned the output text “Hello! I’m doing well, thank you. How can I assist you today?”.

In this way, AWS Bedrock’s Amazon Titan Text G1 – Premier model can handle a wide range of natural language processing (NLP) tasks thanks to its advanced capabilities and large context window.


Ajinkya Gadge

Ajinkya Gadge has over 5 years of IT experience in Cloud Domain Projects and looks forward to writing more blogs in upcoming years.
