Added Float type to the function parameter values #77

Merged 10 commits on Sep 25, 2024

Changes from all commits
6 changes: 6 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -15,3 +15,9 @@ demos/function_calling/ollama/models/
demos/function_calling/ollama/id_ed*
docs/build/
demos/function_calling/open-webui/
demos/employee_details_copilot/open-webui/
demos/employee_details_copilot_arch/open-webui/
demos/network_copilot/open-webui/
demos/employee_details_copilot/ollama/models/
demos/employee_details_copilot_arch/ollama/models/
demos/network_copilot/ollama/models/
4 changes: 0 additions & 4 deletions demos/employee_details_copilot/README.md
@@ -2,10 +2,6 @@
This demo shows how you can use the intelligent prompt gateway to act as a copilot for calling the correct procedure by capturing the required and optional parameters from the prompt. This demo assumes you are running Ollama natively. If you want to run Ollama inside Docker, update the Ollama endpoint in the docker-compose file.

# Starting the demo
1. Ensure that submodule is up to date
```sh
git submodule sync --recursive
```
1. Create a `.env` file and set your OpenAI key via the env var `OPENAI_API_KEY`
1. Start services
```sh
29 changes: 29 additions & 0 deletions demos/employee_details_copilot/api_server/app/functions.py
@@ -0,0 +1,29 @@
from typing import List, Optional

# Function for top_employees
def top_employees(grouping: str, ranking_criteria: str, top_n: int):
pass

# Function for aggregate_stats
def aggregate_stats(grouping: str, aggregate_criteria: str, aggregate_type: str):
pass

# Function for employees_projects
def employees_projects(min_performance_score: float, min_years_experience: int, department: str, min_project_count: int = None, months_range: int = None):
pass

# Function for salary_growth
def salary_growth(min_salary_increase_percentage: float, department: str = None):
pass

# Function for promotions_increases
def promotions_increases(year: int, min_salary_increase_percentage: float = None, department: str = None):
pass

# Function for avg_project_performance
def avg_project_performance(min_project_count: int, min_performance_score: float, department: str = None):
pass

# Function for certifications_experience
def certifications_experience(certifications: list, min_years_experience: int, department: str = None):
pass
78 changes: 78 additions & 0 deletions demos/employee_details_copilot/api_server/app/generate_config.py
@@ -0,0 +1,78 @@
import inspect
import yaml
import functions # This is your module containing the function definitions
import os


def generate_config_from_function(func):
func_name = func.__name__
func_doc = func.__doc__

# Get function signature
sig = inspect.signature(func)
params = []

# Extract parameter info
for name, param in sig.parameters.items():
param_info = {
'name': name,
'description': f"Provide the {name.replace('_', ' ')}", # Customize as needed
'required': param.default == inspect.Parameter.empty, # True if no default value
'type': param.annotation.__name__ if param.annotation != inspect.Parameter.empty else 'str' # Get type if available
}
params.append(param_info)

# Define the config for this function
config = {
'name': func_name,
'description': func_doc or "",
'parameters': params,
'endpoint': {
'cluster': 'api_server',
'path': f"/{func_name}"
},
'system_prompt': f"You are responsible for handling {func_name} requests."
}

return config


def generate_full_config(module):
config = {'prompt_targets': []}

# Automatically get all functions from the module
functions_list = inspect.getmembers(module, inspect.isfunction)

for func_name, func_obj in functions_list:
func_config = generate_config_from_function(func_obj)
config['prompt_targets'].append(func_config)

return config


def replace_prompt_targets_in_config(file_path, new_prompt_targets):
# Load the existing bolt_config.yaml
with open(file_path, 'r') as file:
config_data = yaml.safe_load(file)

# Replace the prompt_targets section with the new one
config_data['prompt_targets'] = new_prompt_targets

    # Write the updated config back to the same YAML file
    with open(file_path, 'w') as file:
        yaml.dump(config_data, file, sort_keys=False)

    print(f"Updated prompt_targets in {file_path}")


# Main execution
if __name__ == "__main__":
# Path to the existing bolt_config.yaml two directories up
bolt_config_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../bolt_config.yaml'))

# Generate new prompt_targets from the functions module
new_config = generate_full_config(functions)
new_prompt_targets = new_config['prompt_targets']

# Replace the prompt_targets in the existing bolt_config.yaml
replace_prompt_targets_in_config(bolt_config_path, new_prompt_targets)
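As a sanity check, here is a minimal sketch (stdlib only; `salary_growth` copied from `functions.py`, with the metadata extraction mirroring `generate_config_from_function` above) showing what the generator derives from a stub — note that the `float` annotation surfaces as `type: float`, which is the point of this PR:

```python
import inspect

# Stub copied from functions.py
def salary_growth(min_salary_increase_percentage: float, department: str = None):
    pass

# Mirrors the parameter extraction in generate_config_from_function
sig = inspect.signature(salary_growth)
params = [
    {
        'name': name,
        'description': f"Provide the {name.replace('_', ' ')}",
        'required': param.default is inspect.Parameter.empty,  # no default => required
        'type': param.annotation.__name__,  # e.g. 'float', 'int', 'str'
    }
    for name, param in sig.parameters.items()
]
print(params[0]['type'], params[0]['required'])  # float True
print(params[1]['type'], params[1]['required'])  # str False
```

This is also why the stubs use `department: str = None` rather than `Optional[str]`: the generator reads `param.annotation.__name__`, which plain builtin types provide directly.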
9 changes: 4 additions & 5 deletions demos/employee_details_copilot/api_server/app/main.py
@@ -1,7 +1,6 @@
import random
from typing import List
from fastapi import FastAPI, HTTPException, Response
from datetime import datetime, date, timedelta, timezone
import logging
from pydantic import BaseModel
from utils import load_sql
@@ -118,7 +117,7 @@ class TopEmployeesProjects(BaseModel):
months_range: int = None # Optional (for filtering recent projects)


@app.post("/top_employees_projects")
@app.post("/employees_projects")
async def employees_projects(req: TopEmployeesProjects, res: Response):
params, filters = {}, []

@@ -225,8 +224,8 @@ class AvgProjPerformanceRequest(BaseModel):
department: str = None # Optional


@app.post("/avg_project_performance")
async def avg_project_performance(req: AvgProjPerformanceRequest, res: Response):
@app.post("/project_performance")
async def project_performance(req: AvgProjPerformanceRequest, res: Response):
params, filters = {}, []

if req.department:
@@ -257,7 +256,7 @@ class CertificationsExperienceRequest(BaseModel):
min_years_experience: int
department: str = None # Optional

@app.post("/employees_certifications_experience")
@app.post("/certifications_experience")
async def certifications_experience(req: CertificationsExperienceRequest, res: Response):
# Convert the list of certifications into a format for SQL query
certs_filter = ', '.join([f"'{cert}'" for cert in req.certifications])
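Note that `certs_filter` above interpolates user-supplied strings directly into the SQL text. A minimal sketch of the parameterized alternative (in-memory SQLite; the table, columns, and sample rows are hypothetical):

```python
import sqlite3

certifications = ["AWS", "PMP"]

# Hypothetical table standing in for the demo's employee data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee_certs (name TEXT, cert TEXT)")
conn.executemany("INSERT INTO employee_certs VALUES (?, ?)",
                 [("Ana", "AWS"), ("Bo", "GCP")])

# One placeholder per certification instead of joining quoted strings
placeholders = ", ".join("?" for _ in certifications)
rows = conn.execute(
    f"SELECT name FROM employee_certs WHERE cert IN ({placeholders})",
    certifications,
).fetchall()
print(rows)  # [('Ana',)]
```

The database driver quotes each value itself, so certification names containing quotes cannot break (or inject into) the query.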
6 changes: 4 additions & 2 deletions demos/employee_details_copilot/bolt_config.yaml
@@ -149,7 +149,7 @@ prompt_targets:

# 5. Employees with Highest Average Project Performance
- type: function_resolver
name: avg_project_performance
name: project_performance
description: |
Fetch employees with the highest average performance across all projects they have worked on over time. You can filter by minimum project count, department, and minimum performance score.
parameters:
@@ -161,13 +161,14 @@
description: Minimum performance score to filter employees.
required: true
type: float
default: 4.0
- name: department
description: Department to filter by (optional).
required: false
type: string
endpoint:
cluster: api_server
path: /avg_project_performance
path: /project_performance
system_prompt: |
You are responsible for fetching employees with the highest average performance across all projects they’ve worked on. Apply filters for minimum project count, performance score, and department.

@@ -190,6 +191,7 @@ prompt_targets:
description: Department to filter employees by (optional).
required: false
type: string
default: "Engineering"
endpoint:
cluster: api_server
path: /certifications_experience
Original file line number Diff line number Diff line change
@@ -1,7 +1,6 @@
FROM Bolt-Function-Calling-1B-Q3_K_L.gguf
FROM Bolt-Function-Calling-1B-Q4_K_M.gguf

# Set the size of the context window used to generate the next token
# PARAMETER num_ctx 16384
PARAMETER num_ctx 4096

# Set parameters for response generation
24 changes: 24 additions & 0 deletions demos/employee_details_copilot_arch/README.md
@@ -0,0 +1,24 @@
# Function calling
This demo shows how you can use the intelligent prompt gateway as a copilot to explore employee data by calling the correct API functions. It calls the appropriate function and also engages with the user to extract required parameters. This demo assumes you are running Ollama natively.

# Starting the demo
1. Create a `.env` file and set your OpenAI key via the env var `OPENAI_API_KEY`
1. Start services
```sh
docker compose up
```
1. Download the Bolt-FC model. This demo assumes you have downloaded [Bolt-Function-Calling-1B:Q4_K_M](https://huggingface.co/katanemolabs/Bolt-Function-Calling-1B.gguf/blob/main/Bolt-Function-Calling-1B-Q4_K_M.gguf) to a local folder.
1. If running Ollama natively, run
```sh
ollama serve
```
2. Create the model in the Ollama repository from the model file
```sh
ollama create Bolt-Function-Calling-1B:Q4_K_M -f Bolt-FC-1B-Q4_K_M.model_file
```
3. Navigate to http://localhost:18080/
4. You can type in queries like "show me the top 5 employees in each department with the highest salary"
   - You can also ask follow-up questions like "just show the top 2"
5. To see metrics, navigate to http://localhost:3000/ (use admin/grafana to log in)
   - Open the dashboard named "Intelligent Gateway Overview"
   - On this dashboard you can see request latency and the number of requests
16 changes: 16 additions & 0 deletions demos/employee_details_copilot_arch/api_server/.vscode/launch.json
@@ -0,0 +1,16 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "function-calling api server",
"cwd": "${workspaceFolder}/app",
"type": "debugpy",
"request": "launch",
"module": "uvicorn",
"args": ["main:app","--reload", "--port", "8001"],
}
]
}
19 changes: 19 additions & 0 deletions demos/employee_details_copilot_arch/api_server/Dockerfile
@@ -0,0 +1,19 @@
FROM python:3 AS base

FROM base AS builder

WORKDIR /src

COPY requirements.txt /src/
RUN pip install --prefix=/runtime --force-reinstall -r requirements.txt

COPY . /src

FROM python:3-slim AS output

COPY --from=builder /runtime /usr/local

COPY /app /app
WORKDIR /app

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
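For reference, a docker-compose service entry that could wire this image up — a sketch only: the service name matches the `api_server` cluster referenced in `bolt_config.yaml`, but the build context and host port are assumptions.

```yaml
services:
  api_server:
    build: ./api_server
    ports:
      - "8001:80"  # uvicorn listens on port 80 inside the container (see CMD above)
```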