OpenAI Fetch Client


A minimal and opinionated OpenAI client powered by fetch.

Unfortunately, the official openai package patches fetch in problematic ways and is quite bloated.

Reasons to consider using openai-fetch:

  • You want a fast and small client that doesn't patch fetch
  • It supports all environments with native fetch: Node 18+, browsers, Deno, Cloudflare Workers, etc.
  • Package size: openai-fetch is ~14kb and openai is ~152kb
  • You only need chat, completions, embeddings, moderations, and TTS

Use the official openai package if:

  • Your runtime doesn't have native fetch support
  • Your app can't handle native ESM code
  • You need endpoints other than chat, completions, embeddings, moderations, and TTS
  • You aren't concerned with lib size or fetch patching

Install

npm install openai-fetch

This package requires Node >= 18 or an environment with native fetch support.

This package exports ESM. If your project uses CommonJS, consider switching to ESM or use the dynamic import() function.
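
For example, a CommonJS project could load the client lazily with a dynamic import. A minimal sketch (the async wrapper is just one way to structure this):

// require('openai-fetch') fails in CommonJS because the package is ESM-only,
// but a dynamic import() works since it returns a promise for the module.
async function getClient() {
  const { OpenAIClient } = await import('openai-fetch');
  return new OpenAIClient({ apiKey: process.env.OPENAI_API_KEY });
}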

Usage

import { OpenAIClient } from 'openai-fetch';

const client = new OpenAIClient({ apiKey: '<your api key>' });

The apiKey is optional and will be read from process.env.OPENAI_API_KEY if present.
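
A minimal chat completion call looks roughly like this (a sketch: the params mirror OpenAI's chat completion request, the model id is only an example, and the response is assumed to follow OpenAI's chat completion shape):

const response = await client.createChatCompletion({
  model: 'gpt-4o-mini', // any chat-capable model id
  messages: [{ role: 'user', content: 'Say hello in one sentence.' }],
});

// The assistant's reply, assuming an OpenAI-style response object
console.log(response.choices[0].message.content);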

API

The API follows OpenAI very closely, so their reference documentation can generally be used. Everything is strongly typed, so you will know if anything is different as soon as TypeScript parses your code.

// Generate a single chat completion
client.createChatCompletion(params: ChatParams): Promise<ChatResponse>;

// Stream a single chat completion via a ReadableStream
client.streamChatCompletion(params: ChatStreamParams): Promise<ChatStreamResponse>;

// Generate one or more completions
client.createCompletions(params: CompletionParams): Promise<CompletionResponse>;

// Stream a single completion via a ReadableStream
client.streamCompletion(params: CompletionStreamParams): Promise<CompletionStreamResponse>;

// Generate one or more embeddings
client.createEmbeddings(params: EmbeddingParams): Promise<EmbeddingResponse>;

// Check for potentially harmful content
client.createModeration(params: ModerationParams): Promise<ModerationResponse>;

// Text-to-Speech
client.createSpeech(params: SpeechParams): Promise<SpeechResponse>;
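
Because the streaming methods resolve to a ReadableStream, the chunks can be consumed with a standard reader. A minimal sketch, assuming each chunk follows OpenAI's chat completion chunk shape with incremental text in choices[0].delta.content:

const stream = await client.streamChatCompletion({
  model: 'gpt-4o-mini', // example model id
  messages: [{ role: 'user', content: 'Tell me a short story.' }],
});

// Read chunks off the stream until it signals completion
const reader = stream.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(value.choices[0]?.delta?.content ?? '');
}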

Type Definitions

The type definitions are available through TSServer and can be found here: type definitions.

License

MIT © Dexa