Dojo provides a user-friendly platform for creating verifiable Games, Autonomous Worlds, and various Applications that are natively composable, extensible, permissionless and persistent.
Haiku empowers developers to seamlessly integrate AI-driven content generation into their Dojo applications and games. The content generation is triggered by contract events and streamed back to the game world through Dojo's offchain message system, making it easy for developers to integrate it in their client. With Haiku, developers can effortlessly enhance their projects with intelligent, context-aware content that evolves alongside player interactions and game states.
- Key Features
- How It Works
- Getting Started
- Supported AI Models
- Supported Data Types and Limitations
- Haiku Configuration
- Seamless integration with Dojo projects.
- AI triggered by contract events.
- Results stored in torii offchain messages.
- Simple configuration and prompt engineering through the `haiku.toml` file.
- Emit custom Haiku events from your smart contracts.
- The event triggers Haiku, which vectorizes the event model.
- Haiku compares the vectorized model to other memories in the Haiku vector db using cosine similarity.
- The event and the returned memories are put together to create a prompt which is sent to the LLM.
- The LLM response is streamed back to torii leveraging offchain messages.
- The result is vectorized and stored in the vector db as a new memory.
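The retrieval step above can be sketched in a few lines of Python. This is an illustrative toy, not Haiku's actual (Rust) implementation: the 2-d vectors stand in for real embedding-model output, and the `memories` list stands in for the vector database.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_memories(event_vector, memory_db, k=1):
    # Rank stored memories by similarity to the incoming event's
    # vector and return the top-k most relevant ones.
    ranked = sorted(
        memory_db,
        key=lambda m: cosine_similarity(event_vector, m["vector"]),
        reverse=True,
    )
    return [m["text"] for m in ranked[:k]]

# Toy 2-d "embeddings" stand in for real model output.
memories = [
    {"text": "The samurai healed after a fierce duel.", "vector": [0.9, 0.1]},
    {"text": "A merchant sold rice at the market.", "vector": [0.1, 0.9]},
]
event_vector = [0.8, 0.2]
context = retrieve_memories(event_vector, memories, k=1)

# The event and retrieved memories are combined into the LLM prompt.
prompt = f"Context: {context[0]}\nEvent: the samurai strikes again."
```

After the LLM responds, the response would itself be embedded and appended to `memories` as a new entry, closing the loop described above.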
Follow the instructions to install Dojo here. You need at least v1.0.0-alpha.9.
- For the latest release:

  ```sh
  curl -sSL https://raw.githubusercontent.com/edisontim/haiku/main/scripts/install.sh | bash
  ```

- For a specific version:

  ```sh
  curl -sSL https://raw.githubusercontent.com/edisontim/haiku/main/scripts/install.sh | bash -s -- "v0.0.2"
  ```
Events must follow this structure:

```cairo
#[derive(Copy, Drop, Serde)]
#[dojo::event]
#[dojo::model(namespace: "haiku", nomapping: true)]
struct YourEventName {
    #[key]
    id: u32,
    timestamp: u64,
    // Add any additional fields or keys you need for your event
    // ...
}
```
The event struct must meet these requirements:

- Set the `namespace` to `"haiku"` and `nomapping` to `true` in the `#[dojo::model]` attribute
- Include an `id` field of type `u32` with the `#[key]` attribute
- Include a `timestamp` field of type `u64`

You can choose any name for your event struct (replace `YourEventName`) and add any additional fields that your event requires.
Add the following to your `Scarb.toml`:

```toml
[dependencies]
haiku_event = { git = "https://github.com/edisontim/haiku" }

[[target.dojo]]
build-external-contracts = [ "haiku_event::PromptMessage" ]
```
For instructions on how to build and migrate your Dojo project, please refer to the official Dojo documentation. These steps are essential for preparing your project for use with Haiku.
Follow the torii launch instructions here.
After creating your Dojo `manifest.toml` file, you can generate a Haiku configuration template using the following command:

```
Usage: haiku build [MANIFEST_FILE_PATH] [OUTPUT_CONFIG_FILE_PATH]

Arguments:
  [MANIFEST_FILE_PATH]        Path to the manifest file [default: ./manifest.toml]
  [OUTPUT_CONFIG_FILE_PATH]   Path to output config file [default: ./config.toml]
```
The default path for the Haiku configuration file is `haiku.toml`. When you run the `haiku build` command, it generates this configuration file along with a `.env.haiku` file. The `.env.haiku` file is where you'll need to specify your private keys and other sensitive information.
After generating the initial Haiku configuration template, you'll need to fill in the necessary details to customize it for your project. This step is crucial for ensuring that Haiku integrates correctly with your Dojo setup and functions as intended.
To run Haiku, specify the path to your Haiku configuration file:

```
Usage: haiku run <CONFIG_FILE_PATH>

Arguments:
  <CONFIG_FILE_PATH>    Path to the configuration file
```
Now that your Haiku results are being streamed as offchain messages, you can integrate them into your client application. To help you get started, we've provided example client implementations in the `/examples` folder of this repository. These examples demonstrate various ways to consume and display Haiku messages in different client environments.
Haiku currently supports two AI model standards:
- OpenAI: A leading provider of advanced language models and AI services.
- Ollama: A platform that simplifies the process of running open-source LLMs locally. Ollama currently supports over 100 open-source models, which can be found at https://ollama.com/library.
Haiku events currently have some limitations regarding the data types that can be used for event fields:

- Numeric types:
  - Supported: `u8`, `u16`, `u32`, `u64`, `u128`
  - Not supported: `felt252` for numbers (Haiku converts these to short strings)
- Boolean type:
  - Supported: `bool`
- String type:
  - Supported: `felt252` for strings
- Complex types:
  - Not supported: structs, arrays, or other complex data structures
- Address type:
  - Supported: `ContractAddress`
When defining your event fields, ensure you use these supported types to guarantee proper functionality within the Haiku system. If you need to represent more complex data, consider breaking it down into multiple simple fields or using string representations where appropriate.
Note: This list of supported types may expand in future versions of Haiku. Always refer to the most recent documentation for up-to-date information on supported data types.
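For context on the `felt252` string case above: a Cairo short string is simply up to 31 ASCII characters packed into one field element as big-endian bytes. The following Python sketch shows that encoding and its inverse, which is useful when inspecting raw event data:

```python
def str_to_felt(s: str) -> int:
    # A Cairo short string packs up to 31 ASCII characters
    # into a single felt252 as big-endian bytes.
    assert len(s) <= 31, "short strings hold at most 31 characters"
    return int.from_bytes(s.encode("ascii"), "big")

def felt_to_str(f: int) -> str:
    # Reverse: unpack the integer back into its ASCII bytes.
    return f.to_bytes((f.bit_length() + 7) // 8, "big").decode("ascii")

felt = str_to_felt("haiku")
assert felt_to_str(felt) == "haiku"  # round-trips
```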
Defines the basic settings for the Haiku system.
| Key | Description |
|---|---|
| `torii_url` | Torii address |
| `rpc_url` | Katana address |
| `world_address` | Dojo world address |
| `relay_url` | Used by torii's offchain message stream. Default: `/ip4/127.0.0.1/udp/9090/quic-v1` |
| `database_url` | Path to the SQLite database file for storing LLM responses. Automatically created if it doesn't exist. Used to store and retrieve previous interactions, enhancing context for future prompts. Default: `"haiku.db"` |
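As an illustration only, a filled-in fragment using these keys might look like the following. All values here are hypothetical placeholders; start from the file generated by `haiku build` rather than writing one by hand.

```toml
# Hypothetical values -- substitute your own deployment details.
torii_url = "http://localhost:8080"
rpc_url = "http://localhost:5050"
world_address = "0x..."                            # your deployed Dojo world
relay_url = "/ip4/127.0.0.1/udp/9090/quic-v1"      # default
database_url = "haiku.db"                          # default; created automatically
```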
Configures the Language Model (LLM) used for generating Haiku responses and the Embedding Model used to store them.
| Key | Description |
|---|---|
| `chat_completion_provider` | The chosen model provider for generating responses. Possible options: `"ollama"`, `"openai"` |
| `chat_completion_model` | Name of the model from the provider you want to interact with |
| `chat_completion_url` | LLM API endpoint |
| `embedding_provider` | The chosen embedding provider for vectorizing text. Possible options: `"ollama"`, `"openai"`, `"baai-bge"` |
| `embedding_model` | Name of the model from the provider for vectorizing text. Example: `"text-embedding-3-small"` |
| `embedding_url` | Embedding model API endpoint. Example: `https://api-inference.huggingface.co/models/BAAI/bge-small-en-v1.5` |
Defines the database configuration used for storing and retrieving LLM responses.
| Key | Description |
|---|---|
| `vector_size` | Size of the embedding model's vectors. Example: `"1536"` (for OpenAI's `text-embedding-3-small` model) |
| `number_memory_to_retrieve` | Number of memories to fetch for every new prompt. Default: `1` |
| Key | Description |
|---|---|
| `story` | Provides overarching context about your application or game world. This narrative is included as input for every prompt, setting the stage for AI-generated content. Example: "In a post-apocalyptic world where nature has reclaimed abandoned cities, survivors navigate through dangerous ruins and lush overgrowth, facing mutated creatures and rival factions." |
This section defines the events that trigger Haiku generation. It is pregenerated by the `haiku build` command, which fetches events from the manifest.

tag: `<Namespace>-<Event Model Name>`
| Key | Description |
|---|---|
| `template` | Prompt used to generate the response for this specific event model. Example: "You're ${player_name}, a ${player_role} samurai. You've healed during a battle with a ${dungeon_role} monster. His remaining health is ${dungeon_health}, yours is ${player_health}." |
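The `${...}` placeholders are filled with the corresponding field values from the triggering event. As a sketch of that substitution (not Haiku's actual code), Python's `string.Template` uses the same syntax; the event fields below are hypothetical:

```python
from string import Template

# Same ${field} placeholder syntax as the config's template key.
template = Template(
    "You're ${player_name}, a ${player_role} samurai. "
    "Your remaining health is ${player_health}."
)

# Hypothetical field values taken from an incoming event.
event_fields = {"player_name": "Rin", "player_role": "wandering", "player_health": 42}
prompt = template.substitute(event_fields)
```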
| Key | Description |
|---|---|
| `storage_keys` | Keys used to store information for future retrieval. These typically correspond to your model's keys; however, for some events you may not want to link the response to certain keys for future memory retrieval. |
| `retrieval_keys` | Keys used to fetch relevant memories when creating a new prompt. These typically correspond to your model's keys and determine which stored information will be included as context for the AI's response. |
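Putting the keys from this table together, a single event entry might look roughly like the fragment below. The field names follow the keys documented here, but the event and key names are hypothetical and the exact section layout comes from `haiku build`:

```toml
# Hypothetical event entry -- names are placeholders.
template = "You're ${player_name}, a ${player_role} samurai."
storage_keys = ["player_id"]
retrieval_keys = ["player_id"]
```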
Maps custom keys from your event to aliases, facilitating consistent storage and retrieval across different event types. This mapping should exclude the `id` field.
| Key | Description |
|---|---|
| `key` | The original key name as defined in your event structure |
| `alias` | A standardized name used to represent similar entities across different events. Example: `player_id` and `target_entity_id` might both be aliased to `player` |
Erase again, and then
A poppy blooms.
- Katsushika Hokusai -