Prepare 2.0.0-beta.13 release (Part 1) (#229)
- Refactored `ModerationResult` by merging `ModerationCategories` and `ModerationCategoryScores` into individual `ModerationCategory` properties, each with `Flagged` and `Score` properties.
- Renamed type `OpenAIFileInfo` to `OpenAIFile` and `OpenAIFileInfoCollection` to `OpenAIFileCollection`.
- Renamed type `OpenAIModelInfo` to `OpenAIModel` and `OpenAIModelInfoCollection` to `OpenAIModelCollection`.
- Renamed type `Embedding` to `OpenAIEmbedding` and `EmbeddingCollection` to `OpenAIEmbeddingCollection`.
- Renamed property `ImageUrl` to `ImageUri` and method `FromImageUrl` to `FromImageUri` in the `MessageContent` type.
- Renamed property `ParallelToolCallsEnabled` to `AllowParallelToolCalls` in the `RunCreationOptions`, `ThreadRun`, and `ChatCompletionOptions` types.
- Renamed properties `PromptTokens` to `InputTokenCount`, `CompletionTokens` to `OutputTokenCount`, and `TotalTokens` to `TotalTokenCount` in the `RunTokenUsage` and `RunStepTokenUsage` types.
- Renamed properties `InputTokens` to `InputTokenCount` and `TotalTokens` to `TotalTokenCount` in the `EmbeddingTokenUsage` type.
- Renamed properties `MaxPromptTokens` to `MaxInputTokenCount` and `MaxCompletionTokens` to `MaxOutputTokenCount` in the `ThreadRun`, `RunCreationOptions`, and `RunIncompleteReason` types.
- Removed the `virtual` keyword from the `Pipeline` property across all clients.
- Renamed the `Granularities` property of `AudioTranscriptionOptions` to `TimestampGranularities`.
- Changed `AudioTranscriptionFormat` from an enum to an "extensible enum".
- Changed `AudioTranslationFormat` from an enum to an "extensible enum".
- Changed `GenerateImageFormat` from an enum to an "extensible enum".
- Changed `GeneratedImageQuality` from an enum to an "extensible enum".
- Changed `GeneratedImageStyle` from an enum to an "extensible enum".
- Removed method overloads in `AssistantClient` and `VectorStoreClient` that take complex parameters in favor of methods that take simple string IDs.
- Updated the `TokenIds` property type in the `TranscribedSegment` type from `IReadOnlyList<int>` to `ReadOnlyMemory<int>`.
- Updated the `inputs` parameter type in the `GenerateEmbeddings` and `GenerateEmbeddingsAsync` methods of `EmbeddingClient` from `IEnumerable<IEnumerable<int>>` to `IEnumerable<ReadOnlyMemory<int>>`. 
- Changed `ChatMessageContentPartKind` from an extensible enum to an enum. 
- Changed `ChatToolCallKind` from an extensible enum to an enum. 
- Changed `ChatToolKind` from an extensible enum to an enum.
- Changed `OpenAIFilePurpose` from an extensible enum to an enum.
- Changed `OpenAIFileStatus` from an extensible enum to an enum.
- Renamed `OpenAIFilePurpose` to `FilePurpose`.
- Renamed `OpenAIFileStatus` to `FileStatus`.
- Removed constructors that take a string API key and options.
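
For consumers upgrading from earlier betas, the renames above are mostly mechanical. A minimal before/after sketch of the token-usage renames (illustrative only; `run` is a hypothetical completed `ThreadRun` retrieved elsewhere):

```csharp
// Token-usage property renames at a call site.
// Before (2.0.0-beta.12): usage.PromptTokens, usage.CompletionTokens, usage.TotalTokens
RunTokenUsage usage = run.Usage;
Console.WriteLine($"{usage.InputTokenCount} in, {usage.OutputTokenCount} out, {usage.TotalTokenCount} total");
```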
joseharriaga authored Sep 27, 2024
1 parent 75eded5 commit a330c2e
Showing 139 changed files with 4,544 additions and 3,059 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/live-test.yml
@@ -27,7 +27,7 @@ jobs:
- name: Run live tests
run: dotnet test ./tests/OpenAI.Tests.csproj
--configuration Release
--filter="TestCategory!=Smoke&TestCategory!=Images&TestCategory!=Uploads&TestCategory!=Moderations&TestCategory!=Manual"
--filter="TestCategory!=Smoke&TestCategory!=Images&TestCategory!=Uploads&TestCategory!=Moderations&TestCategory!=FineTuning&TestCategory!=Manual"
--logger "trx;LogFilePrefix=live"
--results-directory ${{github.workspace}}/artifacts/test-results
${{ env.version_suffix_args}}
4 changes: 2 additions & 2 deletions .github/workflows/release.yml
@@ -45,11 +45,11 @@ jobs:
--filter="TestCategory=Smoke&TestCategory!=Manual"
--logger "trx;LogFileName=${{ github.workspace }}/artifacts/test-results/smoke.trx"
${{ env.version_suffix_args }}

- name: Run Live Tests
run: dotnet test ./tests/OpenAI.Tests.csproj
--configuration Release
--filter="TestCategory!=Smoke&TestCategory!=Images&TestCategory!=Uploads&TestCategory!=Moderations&TestCategory!=Manual"
--filter="TestCategory!=Smoke&TestCategory!=Images&TestCategory!=Uploads&TestCategory!=Moderations&TestCategory!=FineTuning&TestCategory!=Manual"
--logger "trx;LogFilePrefix=live"
--results-directory ${{ github.workspace }}/artifacts/test-results
${{ env.version_suffix_args }}
37 changes: 27 additions & 10 deletions CHANGELOG.md
@@ -6,16 +6,33 @@

### Breaking Changes

- Refactored `ModerationResult` by merging `ModerationCategories` and `ModerationCategoryScores` into individual `ModerationCategory` properties, each with `Flagged` and `Score` properties. (commit_id)
- Renamed type `OpenAIFileInfo` to `OpenAIFile` and `OpenAIFileInfoCollection` to `OpenAIFileCollection`. (commit_id)
- Renamed type `OpenAIModelInfo` to `OpenAIModel` and `OpenAIModelInfoCollection` to `OpenAIModelCollection`. (commit_id)
- Renamed type `Embedding` to `OpenAIEmbedding` and `EmbeddingCollection` to `OpenAIEmbeddingCollection`. (commit_id)
- Renamed property `ImageUrl` to `ImageUri` and method `FromImageUrl` to `FromImageUri` in the `MessageContent` type. (commit_id)
- Renamed property `ParallelToolCallsEnabled` to `AllowParallelToolCalls` in the `RunCreationOptions`, `ThreadRun`, and `ChatCompletionOptions` types. (commit_id)
- Renamed properties `PromptTokens` to `InputTokenCount`, `CompletionTokens` to `OutputTokenCount`, and `TotalTokens` to `TotalTokenCount` in the `RunTokenUsage` and `RunStepTokenUsage` types. (commit_id)
- Renamed properties `InputTokens` to `InputTokenCount` and `TotalTokens` to `TotalTokenCount` in the `EmbeddingTokenUsage` type. (commit_id)
- Renamed properties `MaxPromptTokens` to `MaxInputTokenCount` and `MaxCompletionTokens` to `MaxOutputTokenCount` in the `ThreadRun`, `RunCreationOptions`, and `RunIncompleteReason` types. (commit_id)
- Removed the `virtual` keyword from the `Pipeline` property across all clients. (commit_id)
- Refactored `ModerationResult` by merging `ModerationCategories` and `ModerationCategoryScores` into individual `ModerationCategory` properties, each with `Flagged` and `Score` properties. (commit_hash)
- Renamed type `OpenAIFileInfo` to `OpenAIFile` and `OpenAIFileInfoCollection` to `OpenAIFileCollection`. (commit_hash)
- Renamed type `OpenAIModelInfo` to `OpenAIModel` and `OpenAIModelInfoCollection` to `OpenAIModelCollection`. (commit_hash)
- Renamed type `Embedding` to `OpenAIEmbedding` and `EmbeddingCollection` to `OpenAIEmbeddingCollection`. (commit_hash)
- Renamed property `ImageUrl` to `ImageUri` and method `FromImageUrl` to `FromImageUri` in the `MessageContent` type. (commit_hash)
- Renamed property `ParallelToolCallsEnabled` to `AllowParallelToolCalls` in the `RunCreationOptions`, `ThreadRun`, and `ChatCompletionOptions` types. (commit_hash)
- Renamed properties `PromptTokens` to `InputTokenCount`, `CompletionTokens` to `OutputTokenCount`, and `TotalTokens` to `TotalTokenCount` in the `RunTokenUsage` and `RunStepTokenUsage` types. (commit_hash)
- Renamed properties `InputTokens` to `InputTokenCount` and `TotalTokens` to `TotalTokenCount` in the `EmbeddingTokenUsage` type. (commit_hash)
- Renamed properties `MaxPromptTokens` to `MaxInputTokenCount` and `MaxCompletionTokens` to `MaxOutputTokenCount` in the `ThreadRun`, `RunCreationOptions`, and `RunIncompleteReason` types. (commit_hash)
- Removed the `virtual` keyword from the `Pipeline` property across all clients. (commit_hash)
- Renamed the `Granularities` property of `AudioTranscriptionOptions` to `TimestampGranularities`. (commit_hash)
- Changed `AudioTranscriptionFormat` from an enum to an "extensible enum". (commit_hash)
- Changed `AudioTranslationFormat` from an enum to an "extensible enum". (commit_hash)
- Changed `GenerateImageFormat` from an enum to an "extensible enum". (commit_hash)
- Changed `GeneratedImageQuality` from an enum to an "extensible enum". (commit_hash)
- Changed `GeneratedImageStyle` from an enum to an "extensible enum". (commit_hash)
- Removed method overloads in `AssistantClient` and `VectorStoreClient` that take complex parameters in favor of methods that take simple string IDs. (commit_hash)
- Updated the `TokenIds` property type in the `TranscribedSegment` type from `IReadOnlyList<int>` to `ReadOnlyMemory<int>`. (commit_hash)
- Updated the `inputs` parameter type in the `GenerateEmbeddings` and `GenerateEmbeddingsAsync` methods of `EmbeddingClient` from `IEnumerable<IEnumerable<int>>` to `IEnumerable<ReadOnlyMemory<int>>`. (commit_hash)
- Changed `ChatMessageContentPartKind` from an extensible enum to an enum. (commit_hash)
- Changed `ChatToolCallKind` from an extensible enum to an enum. (commit_hash)
- Changed `ChatToolKind` from an extensible enum to an enum. (commit_hash)
- Changed `OpenAIFilePurpose` from an extensible enum to an enum. (commit_hash)
- Changed `OpenAIFileStatus` from an extensible enum to an enum. (commit_hash)
- Renamed `OpenAIFilePurpose` to `FilePurpose`. (commit_hash)
- Renamed `OpenAIFileStatus` to `FileStatus`. (commit_hash)
- Removed constructors that take a string API key and options. (commit_hash)

### Bugs Fixed

108 changes: 78 additions & 30 deletions README.md
@@ -26,6 +26,7 @@ It is generated from our [OpenAPI specification](https://github.com/openai/opena
- [How to work with Azure OpenAI](#how-to-work-with-azure-openai)
- [Advanced scenarios](#advanced-scenarios)
- [Using protocol methods](#using-protocol-methods)
- [Mock a client for testing](#mock-a-client-for-testing)
- [Automatically retrying errors](#automatically-retrying-errors)
- [Observability](#observability)

@@ -129,7 +130,7 @@ foreach (StreamingChatCompletionUpdate update in updates)
{
foreach (ChatMessageContentPart updatePart in update.ContentUpdate)
{
Console.Write(updatePart);
Console.Write(updatePart.Text);
}
}
```
@@ -309,7 +310,7 @@ To use structured outputs to constrain chat completion content, set an appropria
ChatCompletionOptions options = new()
{
ResponseFormat = ChatResponseFormat.CreateJsonSchemaFormat(
name: "math_reasoning",
jsonSchemaFormatName: "math_reasoning",
jsonSchema: BinaryData.FromString("""
{
"type": "object",
@@ -332,15 +333,15 @@ ChatCompletionOptions options = new()
"additionalProperties": false
}
"""),
strictSchemaEnabled: true)
jsonSchemaIsStrict: true)
};

ChatCompletion chatCompletion = await client.CompleteChatAsync(
["How can I solve 8x + 7 = -23?"],
options);

using JsonDocument structuredJson = JsonDocument.Parse(chatCompletion.ToString());

Console.WriteLine($"Final answer: {structuredJson.RootElement.GetProperty("final_answer").GetString()}");
Console.WriteLine("Reasoning steps:");

@@ -360,22 +361,22 @@ To generate a text embedding, use `EmbeddingClient` from the `OpenAI.Embeddings`
```csharp
using OpenAI.Embeddings;

EmbeddingClient client = new(model: "text-embedding-3-small", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
EmbeddingClient client = new("text-embedding-3-small", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

string description = "Best hotel in town if you like luxury hotels. They have an amazing infinity pool, a spa,"
+ " and a really helpful concierge. The location is perfect -- right downtown, close to all the tourist"
+ " attractions. We highly recommend this hotel.";

Embedding embedding = client.GenerateEmbedding(description);
ReadOnlyMemory<float> vector = embedding.Vector;
OpenAIEmbedding embedding = client.GenerateEmbedding(description);
ReadOnlyMemory<float> vector = embedding.ToFloats();
```

Notice that the resulting embedding is a list (also called a vector) of floating point numbers represented as an instance of `ReadOnlyMemory<float>`. By default, the length of the embedding vector will be 1536 when using the `text-embedding-3-small` model or 3072 when using the `text-embedding-3-large` model. Generally, larger embeddings perform better, but using them also tends to cost more in terms of compute, memory, and storage. You can reduce the dimensions of the embedding by creating an instance of the `EmbeddingGenerationOptions` class, setting the `Dimensions` property, and passing it as an argument in your call to the `GenerateEmbedding` method:

```csharp
EmbeddingGenerationOptions options = new() { Dimensions = 512 };

Embedding embedding = client.GenerateEmbedding(description, options);
OpenAIEmbedding embedding = client.GenerateEmbedding(description, options);
```

## How to generate images
@@ -387,7 +388,7 @@ To generate an image, use `ImageClient` from the `OpenAI.Images` namespace:
```csharp
using OpenAI.Images;

ImageClient client = new(model: "dall-e-3", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
ImageClient client = new("dall-e-3", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
```

Generating an image always requires a `prompt` that describes what should be generated. To further tailor the image generation to your specific needs, you can create an instance of the `ImageGenerationOptions` class and set the `Quality`, `Size`, and `Style` properties accordingly. Note that you can also set the `ResponseFormat` property of `ImageGenerationOptions` to `GeneratedImageFormat.Bytes` in order to receive the resulting PNG as `BinaryData` (instead of the default remote `Uri`) if this is convenient for your use case.
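
A minimal sketch of those options (property values are illustrative; `client` is the `ImageClient` created above, and the prompt is a placeholder):

```csharp
string prompt = "A watercolor painting of a lighthouse at dawn.";

// Illustrative configuration; adjust Quality, Size, and Style to taste.
ImageGenerationOptions options = new()
{
    Quality = GeneratedImageQuality.High,
    Size = GeneratedImageSize.W1792xH1024,
    Style = GeneratedImageStyle.Vivid,
    ResponseFormat = GeneratedImageFormat.Bytes,
};

GeneratedImage image = client.GenerateImage(prompt, options);
BinaryData bytes = image.ImageBytes; // populated because ResponseFormat is Bytes
```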
@@ -431,14 +432,14 @@ In this example, an audio file is transcribed using the Whisper speech-to-text m
```csharp
using OpenAI.Audio;

AudioClient client = new(model: "whisper-1", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
AudioClient client = new("whisper-1", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

string audioFilePath = Path.Combine("Assets", "audio_houseplant_care.mp3");

AudioTranscriptionOptions options = new()
{
ResponseFormat = AudioTranscriptionFormat.Verbose,
Granularities = AudioTimestampGranularities.Word | AudioTimestampGranularities.Segment,
TimestampGranularities = AudioTimestampGranularities.Word | AudioTimestampGranularities.Segment,
};

AudioTranscription transcription = client.TranscribeAudio(audioFilePath, options);
@@ -450,14 +451,14 @@ Console.WriteLine();
Console.WriteLine($"Words:");
foreach (TranscribedWord word in transcription.Words)
{
Console.WriteLine($" {word.Word,15} : {word.Start.TotalMilliseconds,5:0} - {word.End.TotalMilliseconds,5:0}");
Console.WriteLine($" {word.Word,15} : {word.StartTime.TotalMilliseconds,5:0} - {word.EndTime.TotalMilliseconds,5:0}");
}

Console.WriteLine();
Console.WriteLine($"Segments:");
foreach (TranscribedSegment segment in transcription.Segments)
{
Console.WriteLine($" {segment.Text,90} : {segment.Start.TotalMilliseconds,5:0} - {segment.End.TotalMilliseconds,5:0}");
Console.WriteLine($" {segment.Text,90} : {segment.StartTime.TotalMilliseconds,5:0} - {segment.EndTime.TotalMilliseconds,5:0}");
}
```

@@ -516,7 +517,7 @@ using Stream document = BinaryData.FromString("""
Upload this document to OpenAI using the `FileClient`'s `UploadFile` method, ensuring that you use `FileUploadPurpose.Assistants` to allow your assistant to access it later:

```csharp
OpenAIFileInfo salesFile = fileClient.UploadFile(
OpenAIFile salesFile = fileClient.UploadFile(
document,
"monthly_sales.json",
FileUploadPurpose.Assistants);
@@ -584,8 +585,8 @@ Finally, you can use the `AssistantClient`'s `GetMessages` method to retrieve th
For illustrative purposes, you could print the messages to the console and also save any images produced by the assistant to local storage:

```csharp
PageCollection<ThreadMessage> messagePages = assistantClient.GetMessages(threadRun.ThreadId, new MessageCollectionOptions() { Order = ListOrder.OldestFirst });
IEnumerable<ThreadMessage> messages = messagePages.GetAllValues();
CollectionResult<ThreadMessage> messages
= assistantClient.GetMessages(threadRun.ThreadId, new MessageCollectionOptions() { Order = MessageCollectionOrder.Ascending });

foreach (ThreadMessage message in messages)
{
@@ -616,7 +617,7 @@ foreach (ThreadMessage message in messages)
}
if (!string.IsNullOrEmpty(contentItem.ImageFileId))
{
OpenAIFileInfo imageInfo = fileClient.GetFile(contentItem.ImageFileId);
OpenAIFile imageInfo = fileClient.GetFile(contentItem.ImageFileId);
BinaryData imageBytes = fileClient.DownloadFile(contentItem.ImageFileId);
using FileStream stream = File.OpenWrite($"{imageInfo.Filename}.png");
imageBytes.ToStream().CopyTo(stream);
@@ -666,8 +667,8 @@ AssistantClient assistantClient = openAIClient.GetAssistantClient();
For this example, we will use both image data from a local file as well as an image located at a URL. For the local data, we upload the file with the `Vision` upload purpose, which would also allow it to be downloaded and retrieved later.

```csharp
OpenAIFileInfo pictureOfAppleFile = fileClient.UploadFile(
"picture-of-apple.jpg",
OpenAIFile pictureOfAppleFile = fileClient.UploadFile(
Path.Combine("Assets", "picture-of-apple.png"),
FileUploadPurpose.Vision);
Uri linkToPictureOfOrange = new("https://platform.openai.com/fictitious-files/picture-of-orange.png");
```
@@ -676,7 +677,7 @@ Next, create a new assistant with a vision-capable model like `gpt-4o` and a thr

```csharp
Assistant assistant = assistantClient.CreateAssistant(
model: "gpt-4o",
"gpt-4o",
new AssistantCreationOptions()
{
Instructions = "When asked a question, attempt to answer very concisely. "
@@ -686,23 +687,24 @@ Assistant assistant = assistantClient.CreateAssistant(
AssistantThread thread = assistantClient.CreateThread(new ThreadCreationOptions()
{
InitialMessages =
{
new ThreadInitializationMessage(
[
"Hello, assistant! Please compare these two images for me:",
MessageContent.FromImageFileId(pictureOfAppleFile.Id),
MessageContent.FromImageUrl(linkToPictureOfOrange),
]),
}
{
new ThreadInitializationMessage(
MessageRole.User,
[
"Hello, assistant! Please compare these two images for me:",
MessageContent.FromImageFileId(pictureOfAppleFile.Id),
MessageContent.FromImageUri(linkToPictureOfOrange),
]),
}
});
```

With the assistant and thread prepared, use the `CreateRunStreaming` method to get an enumerable `CollectionResult<StreamingUpdate>`. You can then iterate over this collection with `foreach`. For async calling patterns, use `CreateRunStreamingAsync` and iterate over the `AsyncCollectionResult<StreamingUpdate>` with `await foreach` instead. Note that streaming variants also exist for `CreateThreadAndRunStreaming` and `SubmitToolOutputsToRunStreaming`.

```csharp
CollectionResult<StreamingUpdate> streamingUpdates = assistantClient.CreateRunStreaming(
thread,
assistant,
thread.Id,
assistant.Id,
new RunCreationOptions()
{
AdditionalInstructions = "When possible, try to sneak in puns if you're asked to compare things.",
@@ -795,6 +797,52 @@ string message = outputAsJson.RootElement

Notice how you can then call the resulting `ClientResult`'s `GetRawResponse` method and retrieve the response body as `BinaryData` via the `PipelineResponse`'s `Content` property.
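
For instance, a short sketch (assuming `result` is the `ClientResult` returned by a protocol method, as in the example above):

```csharp
// Inspect the raw HTTP response behind a protocol-method result.
PipelineResponse response = result.GetRawResponse();
Console.WriteLine($"Status: {response.Status}");
Console.WriteLine(response.Content.ToString());
```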

### Mock a client for testing

The OpenAI .NET library has been designed to support mocking, providing key features such as:
- Client methods made virtual to allow overriding.
- Model factories to assist in instantiating API output models that lack public constructors.

To illustrate how mocking works, suppose you want to validate the behavior of the following method using the [Moq](https://github.com/devlooped/moq) library. Given the path to an audio file, it determines whether it contains a specified secret word:

```csharp
public bool ContainsSecretWord(AudioClient client, string audioFilePath, string secretWord)
{
AudioTranscription transcription = client.TranscribeAudio(audioFilePath);
return transcription.Text.Contains(secretWord);
}
```

Create mocks of `AudioClient` and `ClientResult<AudioTranscription>`, set up methods and properties that will be invoked, then test the behavior of the `ContainsSecretWord` method. Since the `AudioTranscription` class does not provide public constructors, it must be instantiated by the `OpenAIAudioModelFactory` static class:

```csharp
// Instantiate mocks and the AudioTranscription object.
Mock<AudioClient> mockClient = new();
Mock<ClientResult<AudioTranscription>> mockResult = new(null, Mock.Of<PipelineResponse>());
AudioTranscription transcription = OpenAIAudioModelFactory.AudioTranscription(text: "I swear I saw an apple flying yesterday!");

// Set up mocks' properties and methods.
mockResult
.SetupGet(result => result.Value)
.Returns(transcription);

mockClient.Setup(client => client.TranscribeAudio(
It.IsAny<string>(),
It.IsAny<AudioTranscriptionOptions>()))
.Returns(mockResult.Object);

// Perform validation.
AudioClient client = mockClient.Object;
bool containsSecretWord = ContainsSecretWord(client, "<audioFilePath>", "apple");

Assert.That(containsSecretWord, Is.True);
```

All namespaces have a corresponding model factory to support mocking, with the exception of the `OpenAI.Assistants` and `OpenAI.VectorStores` namespaces, for which model factories are coming soon.

### Automatically retrying errors

By default, the client classes will automatically retry the following errors up to three additional times using exponential backoff:
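
As a sketch of customizing this behavior (this assumes the `RetryPolicy` extension point that `OpenAIClientOptions` inherits from `System.ClientModel`'s `ClientPipelineOptions`; the retry count of 5 is illustrative):

```csharp
using System.ClientModel;
using System.ClientModel.Primitives;
using OpenAI.Chat;

// Raise the retry ceiling above the default; maxRetries: 5 is illustrative.
OpenAIClientOptions options = new()
{
    RetryPolicy = new ClientRetryPolicy(maxRetries: 5),
};

ChatClient client = new(
    "gpt-4o",
    new ApiKeyCredential(Environment.GetEnvironmentVariable("OPENAI_API_KEY")),
    options);
```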