Laminar - Open-source data flywheel platform for LLM apps. Fast and reliable. Written in Rust 🦀. YC S24.


Laminar - LLM engineering from first principles

Laminar is an open-source platform for engineering LLM products. Trace, evaluate, annotate, and analyze LLM data. Bring LLM applications to production with confidence.

Think of it as DataDog + PostHog for LLM apps.

  • OpenTelemetry-based instrumentation: automatic tracing of LLM and vector DB calls with just two lines of code, plus decorators to track your own functions (powered by the open-source OpenLLMetry package from Traceloop).
  • Online evaluations: Laminar can host your custom evaluation code or prompts and run them as your application traces arrive.
  • Built for scale on a modern stack: written in Rust, with RabbitMQ as the message queue, Postgres for application data, and ClickHouse for analytics.
  • Insightful, fast dashboards for traces / spans / events / evaluations.
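As a sketch of what an online evaluation could look like, here is a minimal scoring function of the kind you might register to run against arriving traces. The function name and scoring rule below are purely illustrative assumptions, not Laminar's actual evaluator API:

```python
# Hypothetical online evaluator. Laminar would invoke something like this
# for each arriving trace; the signature and the token-overlap scoring
# rule here are illustrative, not Laminar's real interface.
def evaluate_response(output: str, target: str) -> float:
    """Score an LLM output between 0.0 and 1.0 by token overlap with a target."""
    output_tokens = set(output.lower().split())
    target_tokens = set(target.lower().split())
    if not target_tokens:
        return 0.0
    return len(output_tokens & target_tokens) / len(target_tokens)
```

In practice an evaluator like this (or an LLM-as-judge prompt) would be hosted by Laminar and attached to a span type, so scores accumulate alongside traces without any changes to application code.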

Read the docs.

Getting started

Laminar Cloud

The easiest way to get started is the generous free tier on our managed platform: lmnr.ai

Self-hosting with Docker Compose

For a quick start, clone the repo and start the services with docker compose:

git clone https://github.com/lmnr-ai/lmnr
cd lmnr
docker compose up -d

This will spin up a lightweight version of the stack with just the database, app-server, and frontend, which is enough for a quickstart or light usage.

For a production environment, we recommend `docker compose -f docker-compose-full.yml up -d`. The initial pull may take a while, but this enables all features.

This will spin up the following containers:

  • app-server – the core app logic, backend, and LLM proxies
  • rabbitmq – message queue for reliable delivery of traces and observations
  • qdrant – vector database
  • semantic-search-service – service for interacting with qdrant and embeddings
  • frontend – the visual dashboard for interacting with traces
  • python-executor – a small Python sandbox that runs arbitrary code behind a thin gRPC service
  • postgres – the database for all application data
  • clickhouse – columnar OLAP database for more efficient trace and label analytics

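If you need to pin image versions or tweak individual services in the full stack, one option is a standard Compose override file. The service names below match the list above, but the image paths and tags are placeholders, not published release names:

```yaml
# docker-compose.override.yml — hypothetical example.
# Later -f files override earlier ones, so this pins images
# without editing docker-compose-full.yml itself.
services:
  app-server:
    image: ghcr.io/lmnr-ai/app-server:<TAG>
  frontend:
    image: ghcr.io/lmnr-ai/frontend:<TAG>
```

Apply it by passing both files: `docker compose -f docker-compose-full.yml -f docker-compose.override.yml up -d`.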
Local development

The simple setup above pulls the latest Laminar images from the GitHub Container Registry.

For running and building Laminar locally, or to learn more about docker compose files, follow the guide in Contributing.

Usage: instrumenting Python code

First, create a project and generate a Project API Key. Then,

pip install lmnr
echo "LMNR_PROJECT_API_KEY=<YOUR_PROJECT_API_KEY>" >> .env
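The SDK reads the key from the environment, so the .env file has to be loaded before initializing. Packages like python-dotenv handle this; as an illustration, a stdlib-only loader could look like this (the helper below is our sketch, not part of the lmnr SDK):

```python
import os

def load_dotenv(path: str = ".env") -> None:
    """Minimal .env loader: put KEY=VALUE lines into os.environ."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables already set in the environment.
            os.environ.setdefault(key.strip(), value.strip())

load_dotenv()
# Then: Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
```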

To automatically instrument LLM calls of popular frameworks and LLM provider libraries, just add:

from lmnr import Laminar as L
L.initialize(project_api_key="<LMNR_PROJECT_API_KEY>")

In addition to automatic instrumentation, we provide a simple @observe() decorator if you want to trace the inputs and outputs of your functions.

Example

import os
from openai import OpenAI

from lmnr import observe, Laminar as L
L.initialize(project_api_key="<LMNR_PROJECT_API_KEY>")

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()  # annotate all functions you want to trace
def poem_writer(topic="turbulence"):
    prompt = f"write a poem about {topic}"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    poem = response.choices[0].message.content
    return poem

if __name__ == "__main__":
    print(poem_writer(topic="laminar flow"))

Laminar pipelines as prompt chain managers

You can create Laminar pipelines in the UI and manage chains of LLM calls there.

After you are ready to use your pipeline in your code, deploy it in Laminar by selecting the target version for the pipeline.

Once your pipeline target is set, you can call it from Python in just a few lines.

from lmnr import Laminar as L

L.initialize('<YOUR_PROJECT_API_KEY>')

result = L.run(
    pipeline='my_pipeline_name',
    inputs={'input_node_name': 'some_value'},
    # all environment variables the pipeline needs
    env={'OPENAI_API_KEY': 'sk-some-key'},
)

Learn more

To learn more about instrumenting your code, check out our client libraries on npm and PyPI.

For a deeper understanding of the concepts, read the docs.
