
API’d Like to Talk to You: A Dive into the OpenAI Assistant API

Recently, I’ve had time to sit down and wade into my own little corner of the digital Wild West: AI integration. With the explosion of AI, I wanted to give my apps the ability to tap into that vast potential.
 
While it seems like every tech giant, startup, and their mother is putting out an AI these days, I had to settle on one to develop against. I wanted something that let me create a custom model, had a robust REST API, and had a proven track record (at least as long a record as you can have in such a young field). Given those criteria, OpenAI was the best vendor for the purpose at the time, specifically the custom assistants available via platform.openai.com.
 
Background aside, here’s what you’ll need to follow along:
• A funded account on platform.openai.com (I’ve gotten by on ~$2US for the past 8 months)
• A ready assistant inside something other than the default project (we’re only doing text, so you don’t need the latest and greatest GPT as a base. An older model like GPT-3.5 will do the job just as well and save you some coin in the process)
• A Project API key
 
While the API documentation (API Reference – OpenAI API) gives examples in Python, Node.js, and cURL, I’m a Microsoft-stack sort of person, so I want to be able to converse with my AI via C# (like a reasonable person). I began by translating the cURL commands into HttpClient calls. What follows are the calls needed to converse with your assistant. I’ll use snippets along the way and post the full file at the end of the article. So, let’s get into it!
 

Create the Conversation Thread

Threads are the conversation. A thread contains all the messages (see the Adding Messages section) that are run (see the Run section) in order to generate a new response. In addition to your AI’s directives, threads can be seeded with messages expected from the AI in order to provide greater context. We’ll touch on that more when we get to messages. 
 
In order to begin our conversation with our digital pal, we first need to let it know we want to talk. We do this by creating a thread and getting a thread ID back from the AI. This ID is important! It will be used in all subsequent calls, so store it! The documentation for creating a thread can be found here: https://platform.openai.com/docs/api-reference/threads/createThread 
 

There are a couple of headers to configure before loading up the URI and firing off the request. The first is the Authorization header; this is a simple Bearer token scheme with your project API key as the token. The second is a new header indicating we’re connecting to v2 of the Assistants beta API. 

_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);

_client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
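On the wire, those two headers end up looking something like this (the key is a shortened placeholder, not a real one):

```http
POST /v1/threads HTTP/1.1
Host: api.openai.com
Authorization: Bearer sk-proj-...
OpenAI-Beta: assistants=v2
```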
The next thing to do is load up the URI and fire off a POST request. This request doesn’t require anything in the body, so we’ll send along null: 
response = _client.PostAsync("https://api.openai.com/v1/threads", null).Result;
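For reference, a successful thread-creation response is a small JSON object along these lines (the ID and timestamp here are illustrative, and the exact fields can vary by API version):

```json
{
  "id": "thread_abc123",
  "object": "thread",
  "created_at": 1699012949,
  "metadata": {},
  "tool_resources": {}
}
```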

So far, so good. Nothing most developers haven’t done dozens of times over. The tricky part comes via the response. Unfortunately, each endpoint returns a different JSON model. I’ve created a series of models in my project to deserialize each response into a POCO which, at this point, I feel was overkill. I could have done this via JObjects and saved myself a few dozen lines of code. 

var threadIdResponse = response.Content.ReadAsStringAsync().Result;

if (!response.IsSuccessStatusCode)
{
     var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
     throw new AiClientException(errorResponse?.Error.Message);
}
var threadIdObj = JsonConvert.DeserializeObject<ThreadResponse>(threadIdResponse);
_threadId = threadIdObj?.Id ?? string.Empty;
return _threadId;

Here we’ve got the response, and it’s time to check and parse what we got back. In my error trap, I throw an exception called AiClientException. This is a new exception I created in the project that simply wraps Exception for better delineation on the client. If we’ve got a successful response, we deserialize it into a ThreadResponse object: 

public class ThreadResponse
{
   public string Id { get; set; }
   public string Object { get; set; }
   public long CreatedAt { get; set; }
   public object AssistantId { get; set; }
   public string ThreadId { get; set; }
   public object RunId { get; set; }
   public string Role { get; set; }
   public List<AiContent> Content { get; set; }
   public List<object> FileIds { get; set; }
   public Metadata Metadata { get; set; }
}
As you can see, quite a bit is returned that we won’t use. Of interest to us at this point is the Id field; this is the all-important thread ID. 
 
Now we’ve created an empty thread with our assistant. Next, we need to load up a message for the AI to read; this is also where we can insert some seeded prompts. 
 

Adding Messages

Messages are, of course, the driver of this whole shebang. Without them, we’re just staring across the table at our AI assistant in silence. Normal conversation flow goes prompt -> response, like we see when using a vendor’s interface with an AI. Here, we aren’t limited to such an immediate back and forth: we can load up multiple user prompts, or seed a conversation with user prompts and assistant responses before sending them to the AI for a generated response. 
 
The first thing to do is validation. By this point in the process, there are a number of pieces that need to already be in place in order to add messages:
if (string.IsNullOrEmpty(_apiKey)) throw new AiClientException("OpenAI ApiKey is not set");
if (string.IsNullOrEmpty(_threadId)) CreateThread(); 
if (string.IsNullOrEmpty(message)) throw new AiClientException("Message is empty");
Here, we’re checking that we have an API key (for the Authorization header); that we have a thread to put messages on (and if not, we make one); and that we have an actual message to add. Next, we load up our headers just as we did before, but this time we need to serialize the message into an object.
_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
_client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
var messageRequest = new AiRequestMessage { Role = "user", Content = message };
Here, AiRequestMessage is another POCO class I created, simply to help serialize the request. Not much to this one:
public class AiRequestMessage
{
   [JsonProperty("role")]
   public string Role { get; set; }
   [JsonProperty("content")]
   public string Content { get; set; }
}
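When serialized, that object becomes the small JSON body the messages endpoint expects (the content text here is just an example):

```json
{
  "role": "user",
  "content": "What is the capital of France?"
}
```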

Once the message object is created, we just need to stringify it, load it into our request, and send it off. Not much useful information is returned from this request; an HTTP 200 is the indication that the message was successfully added: 

var json = JsonConvert.SerializeObject(messageRequest);
var content = new StringContent(json, Encoding.UTF8,   "application/json");
response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/messages", content);
            
var threadIdResponse = response.Content.ReadAsStringAsync().Result;

if (!response.IsSuccessStatusCode)
{
   var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
   throw new AiClientException(errorResponse?.Error.Message);
}
As you can see, this is a rather simple call to the API. Once we get the response from the server, we check whether it was successful; if so, we do nothing; if not, we throw an exception. 
 
Now that we know how to add user messages… Wait, “How do we know they’re from the user?” you might ask. It’s right here:
var messageRequest = new AiRequestMessage { Role = "user", Content = message };
In the role property, the AI recognizes two values, “user” and “assistant”, and it doesn’t care who adds them to the list. So it becomes a simple matter to add an argument or a new function for assistant messages that simply modifies the above line so that a message is created like so (or a functional equivalent):
var messageRequest = new AiRequestMessage { Role = "assistant", Content = message };
Using this ability, we can assemble (or even recall) a conversation before ever going to the AI. 
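As an aside, the v2 Threads endpoint also accepts a messages array at creation time, so an entire seeded exchange can go up in the single POST that creates the thread. A hypothetical body (example content) might look like:

```json
{
  "messages": [
    { "role": "user", "content": "What is the capital of France?" },
    { "role": "assistant", "content": "The capital of France is Paris." },
    { "role": "user", "content": "And what is its population?" }
  ]
}
```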
 
So far, we’ve created the container for the conversation (the thread) and our side of the conversation (the messages) on the OpenAI server. Now we’d like to hear back from the AI. This is where the Run phase of the process comes in. 
 

Run

So we’re ready to start conversing with our assistant, awesome! If you’ve ever worked with an AI before, you know response times can be quite lengthy. How do we monitor this from our library? Personally, I went with short polling, as you’ll see below. I did this for ease of implementation, but other methods are available, including opening a stream with OpenAI’s server, which is outside the scope of this post. 
 
As with the other requests, we’ll need to load up our headers:
_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
_client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
Next, we need a request body consisting of just the assistant ID. Here again, I’ve created a POCO to aid in serialization/deserialization, which is probably overdoing it for a single property:
var custAsst = new Assistant { assistant_id = _assistantId };
var json = JsonConvert.SerializeObject(custAsst);
var content = new StringContent(json, Encoding.UTF8, "application/json");
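Serialized, the run request body is about as small as they come (the assistant ID is illustrative):

```json
{
  "assistant_id": "asst_abc123"
}
```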
After loading that into our request body and sending off the request, it’s time to wait. Depending on the prompt, the AI can take quite a while to respond, which could cause a single request to time out. My solution was the short polling seen here:
response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/runs", content);
var responseContent = await response.Content.ReadAsStringAsync();
var responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
var runId = responseObj?.Id;
var runStatus = responseObj?.Status;
//if not in a final state, poll again
if (runId != null)
{
    while (runStatus != null && !FinalStatuses.Contains(runStatus))
    {
        await Task.Delay(1000);
        response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/runs/{runId}");
        responseContent = await response.Content.ReadAsStringAsync();
        responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
        runStatus = responseObj?.Status;
    }
}
await GetResponse();
Here, I check against the final states for the Run process (https://platform.openai.com/docs/api-reference/runs/object) on every poll until the job has finished. Once we’ve received a completed indicator, we know it’s safe to retrieve the updated messages, which should now include the response from the assistant. 
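Abridged to the fields this code actually reads, the run object we poll looks roughly like this; the status moves through values like queued and in_progress before landing in a terminal state such as completed, failed, cancelled, or expired (IDs illustrative):

```json
{
  "id": "run_abc123",
  "object": "thread.run",
  "thread_id": "thread_abc123",
  "assistant_id": "asst_abc123",
  "status": "completed"
}
```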
 

Get AI Response

In order to grab the latest message from the server, we need to call back to the messages endpoint for the thread. This returns all the messages we sent to the server, along with the latest response. First, we load up our headers and fire off a GET request to the messages endpoint with our thread ID in the URI:
HttpResponseMessage response;
using (_client = new HttpClient())
{
   _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
    _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
    response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/messages");
}

The response returned from this request is more complex than anything we’ve seen up to this point and requires a bit more handling in order to extract the messages:

var responseContent = response.Content.ReadAsStringAsync().Result;
try
{
  var data = JsonConvert.DeserializeObject<ChatResponse>(responseContent);
  _messages.Clear();
  _messages = data?.Data.Select(x => new AiContent() { Type = x.Role, Text = x.Content[0].Text }).ToList() ?? new List<AiContent>();
}
catch (Exception)
{
  throw new AiClientException("Error retrieving messages");
}
I parse the response into a ChatResponse object that contains the messages as well as metadata. The messages are nested in a class within the ChatResponse class. To simplify the code for a blog post, I’m just replacing the service’s entire list of messages with every response. Here is the ChatResponse class with its nested class for messages:
public class ChatResponse
{
   public List<Data> Data { get; set; }
   public string FirstId { get; set; }
   public string LastId { get; set; }
   public bool HasMore { get; set; }
}

public class Data
{
   public string Id { get; set; }
   public string Object { get; set; }
   public long CreatedAt { get; set; }
   public string AssistantId { get; set; }
   public string ThreadId { get; set; }
   public string RunId { get; set; }
   public string Role { get; set; }
   public List<AiContent> Content { get; set; }
   public List<object> FileIds { get; set; }
   public Metadata Metadata { get; set; }
}
In the ChatResponse class, you can see that the top-level fields supply a list of messages, Data, as well as the IDs of the first and last messages. (You could use the last ID to grab just the assistant response if that’s a better fit for your use case.) While the Data class contains the metadata for the conversation, the messages themselves are stored in Data’s Content property. Even this isn’t the end: the JSON breaks each entry down into an object holding the role and another class for the response text, which I’ve called AiContent. 
public class AiContent
{
  public string Type { get; set; }
  public Text Text { get; set; }
}
public class Text
{
  public string Value { get; set; }
  public List<object> Annotations { get; set; }
}
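Putting those classes together, the messages response we’re unpacking looks roughly like this (abridged; IDs and text are illustrative). Note the data[].content[].text.value nesting that the Select statement above walks:

```json
{
  "object": "list",
  "data": [
    {
      "id": "msg_def456",
      "thread_id": "thread_abc123",
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": {
            "value": "The capital of France is Paris.",
            "annotations": []
          }
        }
      ]
    }
  ],
  "first_id": "msg_def456",
  "last_id": "msg_abc123",
  "has_more": false
}
```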
Once you’ve fished the messages out of the response, you’re free to do with them as you will. My simple MVC client just dumps the new list of messages to the user. 
 

Furthering the Project

Besides the points I mentioned above, there is definitely room for improvement in this code. I created these snippets from a POC I’ve been working on, so they very likely aren’t production-ready as-is. There are several areas where I feel this can be improved, such as:
 
• Streaming between the caller and OpenAI – OpenAI offers a streaming response rather than plain request/response HTTP. Going this route would remove the polling code from the project and provide a closer-to-realtime response to the library
 
• SignalR instead of HttpClient – Used in conjunction with OpenAI’s streaming, this would provide partial responses as the assistant generates them
 
• Add file upload – As AIs get more complex, simple prompts may no longer be enough. Providing a file has the potential to give the assistant a more comprehensive context

• Add image generation – Who doesn’t like playing with the image generator provided by most AIs?

 

Full File

using AiClients.Exceptions;
using AiClients.Interfaces;
using AiClients.Models;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;
using System.Net.Http.Headers;
using System.Text;
namespace CustomGptClient.Services
{
    public class AssistantService : IAiService
    {
        private string _threadId;
        private IConfiguration _config;
        private string _apiKey;
        private string _assistantId;
        private List<AiContent> _messages;
        private string _assistantName;
        private HttpClient _client;
        private List<string> FinalStatuses = new List<string> { "completed", "failed", "cancelled", "expired" };
        public AssistantService(IConfiguration configuration)
        {
            _config = configuration;
            _apiKey = _config.GetSection("OpenAI:ApiKey")?.Value ?? string.Empty;
            _assistantId = _config.GetSection("OpenAI:AssistantId")?.Value ?? string.Empty;
            _messages = new List<AiContent>();
        }

        private string CreateThread()
        {
            if (string.IsNullOrEmpty(_apiKey)) throw new AiClientException("OpenAI ApiKey is not set");
            HttpResponseMessage response;
            using (var _client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                response = _client.PostAsync("https://api.openai.com/v1/threads", null).Result;
            }
            var threadIdResponse = response.Content.ReadAsStringAsync().Result;
            if (!response.IsSuccessStatusCode)
            {
                var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
                throw new AiClientException(errorResponse?.Error.Message);
            }
            var threadIdObj = JsonConvert.DeserializeObject<ThreadResponse>(threadIdResponse);
            _threadId = threadIdObj?.Id ?? string.Empty;
            return _threadId;
        }
        public async Task AddMessage(string message)
        {
            if (string.IsNullOrEmpty(_apiKey)) throw new AiClientException("OpenAI ApiKey is not set");
            if (string.IsNullOrEmpty(_threadId)) CreateThread(); 
            if (string.IsNullOrEmpty(message)) throw new AiClientException("Message is empty");
            HttpResponseMessage response;
            using (_client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                var messageRequest = new AiRequestMessage { Role = "user", Content = message };
                var json = JsonConvert.SerializeObject(messageRequest);
                var content = new StringContent(json, Encoding.UTF8, "application/json");
                response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/messages", content);
            }
            var threadIdResponse = response.Content.ReadAsStringAsync().Result;
            if (!response.IsSuccessStatusCode)
            {
                var errorResponse = JsonConvert.DeserializeObject<ErrorResponse>(threadIdResponse);
                throw new AiClientException(errorResponse?.Error.Message);
            }
            await CreateRun();
        }
        public async Task CreateRun()
        {
            HttpResponseMessage response;
            using (_client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                var custAsst = new Assistant { assistant_id = _assistantId };
                var json = JsonConvert.SerializeObject(custAsst);
                var content = new StringContent(json, Encoding.UTF8, "application/json");
                response = await _client.PostAsync($"https://api.openai.com/v1/threads/{_threadId}/runs", content);
                var responseContent = await response.Content.ReadAsStringAsync();
                var responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
                var runId = responseObj?.Id;
                var runStatus = responseObj?.Status;
                //if not completed, poll again
                if (runId != null)
                {
                    while (runStatus != null && !FinalStatuses.Contains(runStatus))
                    {
                        await Task.Delay(1000);
                        response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/runs/{runId}");
                        responseContent = response.Content.ReadAsStringAsync().Result;
                        responseObj = JsonConvert.DeserializeObject<RunResponse>(responseContent);
                        runStatus = responseObj?.Status;
                    }
                }
            }
            await GetResponse();
        }
        public async Task GetResponse()
        {
            HttpResponseMessage response;
            using (_client = new HttpClient())
            {
                _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
                _client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
                response = await _client.GetAsync($"https://api.openai.com/v1/threads/{_threadId}/messages");
            }
            var responseContent = response.Content.ReadAsStringAsync().Result;
            try
            {
                var data = JsonConvert.DeserializeObject<ChatResponse>(responseContent);
                _messages.Clear();
                _messages = data?.Data.Select(x => new AiContent() { Type = x.Role, Text = x.Content[0].Text }).ToList() ?? new List<AiContent>();
            }
            catch (Exception)
            {
                throw new AiClientException("Error retrieving messages");
            }
        }
    }
}

 


Greg Jeffers, Senior Technical Consultant

Greg Jeffers is a Senior Technical Consultant with Perficient. He has been developing software on the Microsoft stack for 20+ years and working with Optimizely for 5. Having been in several roles across multiple industries, Greg brings a holistic approach to development. He is passionate about finding the right balance of people and processes to make users feel comfortable in the application while being performant.
