Infinity is a high-throughput, low-latency REST API for serving text-embedding, reranking, CLIP, CLAP, and ColPali models. Infinity is developed under the MIT License.
- Deploy any model from HuggingFace: any embedding, reranking, CLIP, or sentence-transformer model.
- Fast inference backends: the inference server is built on top of torch, optimum (ONNX/TensorRT) and CTranslate2, using FlashAttention to get the most out of your NVIDIA CUDA, AMD ROCm, CPU, AWS INF2 or Apple MPS accelerator. Infinity uses dynamic batching and runs tokenization in dedicated worker threads.
- Multi-modal and multi-model: Mix-and-match multiple models. Infinity orchestrates them.
- Tested implementation: unit and end-to-end tested. Embeddings served via Infinity are verified to be correct. Lets API users create embeddings till infinity and beyond.
- Easy to use: built on FastAPI. The Infinity CLI v2 allows every argument to be set via environment variable or CLI flag. The OpenAPI schema is aligned to OpenAI's API specs. View the docs at https://michaelfeil.github.io/infinity to get started.
- [2024/07] Inference deployment example via Modal and a free GPU deployment
- [2024/06] Support for multi-modal: clip, text-classification & launch all arguments from env variables
- [2024/05] launch multiple models using the `v2` CLI, including `--api-key`
- [2024/03] infinity adds experimental int8 (CPU/CUDA) and fp8 (H100/MI300) support
- [2024/03] Docs are online: https://michaelfeil.github.io/infinity/latest/
- [2024/02] Community meetup at the Run:AI Infra Club
- [2024/01] TensorRT / ONNX inference
- [2023/10] First release
```bash
pip install infinity-emb[all]
```
After your pip install, with your venv active, you can run the CLI directly.
```bash
infinity_emb v2 --model-id BAAI/bge-small-en-v1.5
```
Check the `v2 --help` command to get a description of all parameters.
```bash
infinity_emb v2 --help
```
Instead of installing the CLI via pip, you may also use Docker to run the `michaelf34/infinity` image. Make sure you mount your accelerator (e.g. install `nvidia-docker` and activate it with `--gpus all`).
```bash
port=7997
model1=michaelfeil/bge-small-en-v1.5
model2=mixedbread-ai/mxbai-rerank-xsmall-v1
volume=$PWD/data

docker run -it --gpus all \
  -v $volume:/app/.cache \
  -p $port:$port \
  michaelf34/infinity:latest \
  v2 \
  --model-id $model1 \
  --model-id $model2 \
  --port $port
```
The cache path inside the Docker container is set by the environment variable `HF_HOME`.
In this demo, sentence-transformers/all-MiniLM-L6-v2 is deployed with batch-size=2. After initialization, three requests (with payloads of 1, 1, and 5 sentences) are sent via cURL from a second terminal.
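Such a request could look like the following sketch, assuming the server from the Docker example above and the OpenAI-aligned /embeddings route (verify the exact schema in the Swagger UI at /docs):

```bash
# minimal sketch: embed one sentence via the OpenAI-aligned REST API
# (route and payload shape assumed from the OpenAI embeddings spec)
curl http://localhost:7997/embeddings \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"model": "michaelfeil/bge-small-en-v1.5", "input": ["A sentence to encode."]}'
```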
Instead of the CLI & REST API, you can use Infinity's interface via the Python API. This gives you the most flexibility. The Python API builds on asyncio with its async/await features to allow concurrent processing of requests. All arguments of the CLI are also available via Python.
```python
import asyncio
from infinity_emb import AsyncEngineArray, EngineArgs, AsyncEmbeddingEngine

sentences = ["Embed this sentence via Infinity.", "Paris is in France."]
array = AsyncEngineArray.from_args([
    EngineArgs(model_name_or_path="BAAI/bge-small-en-v1.5", engine="torch", embedding_dtype="float32", dtype="auto")
])

async def embed_text(engine: AsyncEmbeddingEngine):
    async with engine:
        embeddings, usage = await engine.embed(sentences=sentences)
    # or handle the async start / stop yourself.
    await engine.astart()
    embeddings, usage = await engine.embed(sentences=sentences)
    await engine.astop()

asyncio.run(embed_text(array[0]))
```
Example embedding models:
- Any trending embedding / reranking model is likely supported: https://huggingface.co/models?other=text-embeddings-inference&sort=trending
- mixedbread-ai/mxbai-embed-large-v1
- WhereIsAI/UAE-Large-V1
- BAAI/bge-base-en-v1.5
- Alibaba-NLP/gte-large-en-v1.5
- jinaai/jina-embeddings-v2-base-code
- intfloat/multilingual-e5-large-instruct
Reranking gives you a score for the similarity between a query and multiple documents. Use it in conjunction with a VectorDB and embeddings, or standalone for a small number of documents. Please select a model from Hugging Face that is an AutoModelForSequenceClassification with a single output class.
```python
import asyncio
from infinity_emb import AsyncEngineArray, EngineArgs, AsyncEmbeddingEngine

query = "What is the python package infinity_emb?"
docs = [
    "This is a document not related to the python package infinity_emb, hence...",
    "Paris is in France!",
    "infinity_emb is a package for sentence embeddings and rerankings using transformer models in Python!",
]
array = AsyncEngineArray.from_args(
    [EngineArgs(model_name_or_path="mixedbread-ai/mxbai-rerank-xsmall-v1", engine="torch")]
)

async def rerank(engine: AsyncEmbeddingEngine):
    async with engine:
        ranking, usage = await engine.rerank(query=query, docs=docs)
        print(list(zip(ranking, docs)))
    # or handle the async start / stop yourself.
    await engine.astart()
    ranking, usage = await engine.rerank(query=query, docs=docs)
    await engine.astop()

asyncio.run(rerank(array[0]))
```
When using the CLI, use this command to launch rerankers:
```bash
infinity_emb v2 --model-id mixedbread-ai/mxbai-rerank-xsmall-v1
```
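A reranking request against a running server could then look like this sketch; the /rerank route and its field names are assumptions here, so check the Swagger UI at /docs for the exact schema:

```bash
# minimal sketch: score two documents against a query
# (route /rerank and payload fields assumed; verify via the Swagger UI)
curl http://localhost:7997/rerank \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"model": "mixedbread-ai/mxbai-rerank-xsmall-v1", "query": "Where is Paris?", "documents": ["Paris is in France.", "Berlin is in Germany."]}'
```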
Example models:
- mixedbread-ai/mxbai-rerank-xsmall-v1
- BAAI/bge-reranker-base
CLIP models are able to encode images and text at the same time.
```python
import asyncio
from infinity_emb import AsyncEngineArray, EngineArgs, AsyncEmbeddingEngine

sentences = ["This is awesome.", "I am bored."]
images = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
engine_args = EngineArgs(
    model_name_or_path="wkcn/TinyCLIP-ViT-8M-16-Text-3M-YFCC15M",
    engine="torch"
)
array = AsyncEngineArray.from_args([engine_args])

async def embed(engine: AsyncEmbeddingEngine):
    await engine.astart()
    embeddings, usage = await engine.embed(sentences=sentences)
    embeddings_image, _ = await engine.image_embed(images=images)
    await engine.astop()

asyncio.run(embed(array["wkcn/TinyCLIP-ViT-8M-16-Text-3M-YFCC15M"]))
```
Example models:
- wkcn/TinyCLIP-ViT-8M-16-Text-3M-YFCC15M
- jinaai/jina-clip-v1 (requires `pip install timm`)
- Currently no support for pure vision models: nomic-ai/nomic-embed-vision-v1.5, ..
CLAP models are able to encode audio and text at the same time.
```python
import asyncio
from infinity_emb import AsyncEngineArray, EngineArgs, AsyncEmbeddingEngine
import requests
import soundfile as sf
import io

sentences = ["This is awesome.", "I am bored."]
url = "https://bigsoundbank.com/UPLOAD/wav/2380.wav"
raw_bytes = requests.get(url, stream=True).content
audios = [raw_bytes]
engine_args = EngineArgs(
    model_name_or_path="laion/clap-htsat-unfused",
    dtype="float32",
    engine="torch"
)
array = AsyncEngineArray.from_args([engine_args])

async def embed(engine: AsyncEmbeddingEngine):
    await engine.astart()
    embeddings, usage = await engine.embed(sentences=sentences)
    embedding_audios = await engine.audio_embed(audios=audios)
    await engine.astop()

asyncio.run(embed(array["laion/clap-htsat-unfused"]))
```
Note: the sampling rate of the audio data needs to match the model.
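You can inspect the sampling rate before embedding with soundfile; a minimal sketch, reusing the raw bytes from the example above (the required rate depends on the checkpoint's feature-extractor config):

```python
# minimal sketch: check the sampling rate of the downloaded audio
# (the expected rate depends on the model, e.g. 48 kHz for many
#  CLAP checkpoints; consult the model's feature-extractor config)
import io
import soundfile as sf

data, samplerate = sf.read(io.BytesIO(raw_bytes))
print(f"sampling rate: {samplerate} Hz, {len(data)} samples")
```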
Example models:
- laion/clap-htsat-unfused
Use text classification with Infinity's classify feature, which allows for sentiment analysis, emotion detection, and more classification tasks.
```python
import asyncio
from infinity_emb import AsyncEngineArray, EngineArgs, AsyncEmbeddingEngine

sentences = ["This is awesome.", "I am bored."]
engine_args = EngineArgs(
    model_name_or_path="SamLowe/roberta-base-go_emotions",
    engine="torch", model_warmup=True)
array = AsyncEngineArray.from_args([engine_args])

async def classifier(engine: AsyncEmbeddingEngine):
    async with engine:
        predictions, usage = await engine.classify(sentences=sentences)
    # or handle the async start / stop yourself.
    await engine.astart()
    predictions, usage = await engine.classify(sentences=sentences)
    await engine.astop()

asyncio.run(classifier(array["SamLowe/roberta-base-go_emotions"]))
```
Example models:
- SamLowe/roberta-base-go_emotions
- Serverless deployments at Runpod
- Truefoundry Cognita
- Langchain example
- imitater - A unified language model server built upon vllm and infinity.
- Dwarves Foundation: Deployment examples using Modal.com
- infiniflow/Ragflow
- SAP Core AI
- gpt_server - an open-source framework designed for production-level deployment of LLMs (Large Language Models) or Embeddings
- KubeAI: Kubernetes AI Operator for inferencing
What are embedding models?
Embedding models map any text to a low-dimensional dense vector that can be used for tasks like retrieval, classification, clustering, or semantic search. They can also feed vector databases used with LLMs. The best-known architectures are encoder-only transformers such as BERT, and the most popular implementation is SentenceTransformers.
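For illustration, similarity between two embedded texts is typically measured with cosine similarity. A minimal sketch with numpy, where the toy vectors stand in for embeddings returned by Infinity:

```python
# minimal sketch: cosine similarity between two embedding vectors
# (toy 3-d vectors as stand-ins for real model outputs)
import numpy as np

a = np.array([0.1, 0.7, 0.2])
b = np.array([0.2, 0.6, 0.1])
cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cosine:.3f}")  # 1.0 = identical direction
```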
What models are supported?
All models of the sentence-transformers organization (https://huggingface.co/sentence-transformers, sbert.net) are supported. LLMs like LLaMA2-7B are not intended for deployment.
- With the option `--engine torch`, the model must be compatible with https://github.com/UKPLab/sentence-transformers/ and AutoModel.
- With the option `--engine optimum`, there must be an ONNX file. Models from https://huggingface.co/Xenova are recommended.
- With the option `--engine ctranslate2`, only BERT models are supported.
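As a sketch, picking a backend per launch could look like this (the second model id is an assumed example of an ONNX-exported checkpoint; pick any ONNX model that fits):

```bash
# torch backend: any sentence-transformers / AutoModel-compatible model
infinity_emb v2 --model-id BAAI/bge-small-en-v1.5 --engine torch
# optimum backend: requires an ONNX export (model id here is an assumed example)
infinity_emb v2 --model-id Xenova/bge-small-en-v1.5 --engine optimum
```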
For the latest trends, you might want to check out one of the models on the MTEB leaderboard: https://huggingface.co/spaces/mteb/leaderboard
Launching multiple models
Since infinity_emb>=0.0.34, you can use the cli `v2` method to launch multiple models at the same time. Check out `infinity_emb v2 --help` for all args.
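Mirroring the Docker example above, a multi-model launch without Docker could look like this:

```bash
# serve an embedding model and a reranker from a single process
infinity_emb v2 \
  --model-id michaelfeil/bge-small-en-v1.5 \
  --model-id mixedbread-ai/mxbai-rerank-xsmall-v1 \
  --port 7997
```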
Using Langchain with Infinity
Infinity has an official integration into `pip install langchain>=0.342`.
You can find more documentation on that here:
https://python.langchain.com/docs/integrations/text_embedding/infinity
```python
from langchain.embeddings.infinity import InfinityEmbeddings
from langchain.docstore.document import Document

documents = [Document(page_content="Hello world!", metadata={"source": "unknown"})]
emb_model = InfinityEmbeddings(model="BAAI/bge-small", infinity_api_url="http://localhost:7997/v1")
print(emb_model.embed_documents([doc.page_content for doc in documents]))
```
View the docs at https://michaelfeil.github.io/infinity on how to get started.
After startup, the Swagger UI will be available under `{url}:{port}/docs`, in this case http://localhost:7997/docs. You can also find an interactive preview here: https://infinity.modal.michaelfeil.eu/docs (and https://michaelfeil-infinity.hf.space/docs).
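Since the OpenAPI schema is aligned to OpenAI's API specs (see the features above), the official openai Python client can usually be pointed at a running Infinity server. A sketch, assuming the server from the quickstart and OpenAI-compatible routes (the api_key is a placeholder unless the server was started with `--api-key`):

```python
# minimal sketch: use the openai client against a local Infinity server
# (base_url/route compatibility assumed from the OpenAI-aligned spec)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:7997", api_key="sk-no-key-needed")
response = client.embeddings.create(
    model="michaelfeil/bge-small-en-v1.5",
    input=["A sentence to encode."],
)
print(len(response.data[0].embedding))
```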
Install via Poetry 1.7.1 and Python 3.11 on Ubuntu 22.04
```bash
cd libs/infinity_emb
poetry install --extras all --with test
```
To pass the CI:
```bash
cd libs/infinity_emb
make format
make lint
poetry run pytest ./tests
```
All contributions must be made in a way to be compatible with the MIT License of this repo.
```bibtex
@software{feil_2023_11630143,
  author    = {Feil, Michael},
  title     = {Infinity - To Embeddings and Beyond},
  month     = oct,
  year      = 2023,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.11630143},
  url       = {https://doi.org/10.5281/zenodo.11630143}
}
```