
Update instructions for ice python package #18

Merged
merged 2 commits into from Nov 28, 2022
Changes from 1 commit
Update instructions for ice python package
lslunis committed Nov 23, 2022
commit 99989d78dfd13d96a19b0e156df7ec8c1f925a7e
29 changes: 11 additions & 18 deletions before-we-start.md
@@ -6,39 +6,32 @@ description: How to install and run the Interaction Composition Explorer

The recipes in this primer are implemented using the [Interactive Composition Explorer](https://github.com/oughtinc/ice) (ICE). If you’d like to follow along with the implementation (strongly recommended), set it up first.

-## Install Docker
+## Requirements

-ICE comes as a Docker container with everything you need to start writing language model recipes. To run it, you need [Docker Desktop](https://www.docker.com/products/docker-desktop/).
+ICE requires Python 3.10.

If you use Windows, you'll need to run ICE inside of [WSL](https://learn.microsoft.com/en-us/windows/wsl/install).

## Run ICE

-Clone ICE:
+Install ICE:

```shell
-git clone https://github.com/oughtinc/ice.git
+pip install ought-ice
```

Obtain an [`OPENAI_API_KEY`](https://beta.openai.com/account/api-keys) and create an `.env` file containing it in the ICE folder:

-```shell
-# .env
-OPENAI_API_KEY=sk-...f8 # Replace with your API key.
-```
-
-Start ICE in its own terminal and leave it running:
+{% code title="~/.ought-ice/.env" %}

```shell
-scripts/run-local.sh
+OPENAI_API_KEY=sk-...f8 # Replace with your API key.
```

-On the first run, downloading the Docker container will take a few minutes.
-
-## Enter the container
+{% endcode %}

-Open a shell in the container and use it to run all commands in the upcoming chapters:
+Start ICE in its own terminal and leave it running:

```shell
-docker compose exec ice bash
+python -m ice.server
```

-This command gives you a shell in the `ice` directory. Any files you create under this directory will be visible in the container.
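A side note on the new setup: `~/.ought-ice/.env` holds plain `KEY=VALUE` lines, with a trailing `# ...` comment as in the example above. A minimal sketch of how such a file could be parsed — `parse_env` is a hypothetical helper for illustration, not ICE's actual loader:

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, ignoring blanks, full-line comments,
    and trailing ` # ...` comments like the one in the example above."""
    env = {}
    for line in text.splitlines():
        line = line.split(" #")[0].strip()  # drop a trailing comment
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Same shape as the file above (placeholder key, not a real one):
print(parse_env("OPENAI_API_KEY=sk-...f8 # Replace with your API key."))
# → {'OPENAI_API_KEY': 'sk-...f8'}
```

In practice a library such as python-dotenv also handles quoting and `export` prefixes; the sketch only covers the format shown here.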
18 changes: 10 additions & 8 deletions chapters/hello-world.md
@@ -6,9 +6,10 @@ description: The simplest recipe

Let’s first get used to the infrastructure for writing, running, and debugging recipes.

-Create a file `hello.py` anywhere in the ICE directory:
+Create a file `hello.py` anywhere:

+{% code title="hello.py" %}

```python
from ice.recipe import recipe

@@ -19,9 +20,10 @@ async def say_hello():

recipe.main(say_hello)
```

+{% endcode %}

-Run the recipe [in the Docker container](../before-we-start.md#enter-the-container):
+Run the recipe:

```shell
python hello.py
@@ -32,7 +34,7 @@ This will run the recipe and save an execution trace.
On the terminal, you will see a trace link and output:

```
-Trace: http://localhost:3000/traces/01GE0GN5PPQWYGMT1B4GFPDZ09
+Trace: http://localhost:8935/traces/01GE0GN5PPQWYGMT1B4GFPDZ09
Hello world!
```

@@ -44,11 +46,11 @@ If you follow the trace link (yours will be different), you will see a function

<summary>The recipe, line by line</summary>

-* We use `recipe.main` to denote the recipe entry point and to automatically trace all global async functions that were defined in this file. Synchronous functions are assumed to be simple and fast, and not worth tracing.
-* `recipe.main` must appear at the bottom of the file.
-* The entry point must be async.
-* Most recipe functions will be async so that language model calls are parallelized as much as possible.
-* Different recipes take different arguments, which will be provided as keyword arguments to the entry point. This recipe doesn’t use any arguments.
+- We use `recipe.main` to denote the recipe entry point and to automatically trace all global async functions that were defined in this file. Synchronous functions are assumed to be simple and fast, and not worth tracing.
+- `recipe.main` must appear at the bottom of the file.
+- The entry point must be async.
+- Most recipe functions will be async so that language model calls are parallelized as much as possible.
+- Different recipes take different arguments, which will be provided as keyword arguments to the entry point. This recipe doesn’t use any arguments.

</details>
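The point in the bullets above — async functions let language model calls run in parallel — can be illustrated outside ICE with plain `asyncio`; `fake_lm_call` below is a hypothetical stand-in for a model call:

```python
import asyncio
import time

async def fake_lm_call(prompt: str) -> str:
    # Stand-in for a language model call: the latency is I/O-bound, so
    # awaiting several calls concurrently overlaps their waiting time.
    await asyncio.sleep(0.1)
    return f"answer to {prompt!r}"

async def main() -> None:
    start = time.perf_counter()
    answers = await asyncio.gather(*(fake_lm_call(p) for p in ["a", "b", "c"]))
    print(answers)
    print(f"{time.perf_counter() - start:.1f}s")  # ~0.1s, not ~0.3s

asyncio.run(main())
```

The three 0.1-second waits overlap under `asyncio.gather`, so the whole batch takes roughly as long as a single call — the same effect ICE exploits when fanning out model calls.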

18 changes: 17 additions & 1 deletion chapters/tool-use/interpreters.md
@@ -25,6 +25,7 @@ This is similar to the correct answer `7127675352 miles`, but not the same.
Let’s add a method for evaluating Python expressions:

{% code title="eval_direct.py" %}

```python
from ice.recipe import recipe

@@ -43,6 +44,7 @@ async def answer_by_computation(question: str):

recipe.main(answer_by_computation)
```

{% endcode %}

This works as expected for expressions that are literally Python code:
@@ -58,9 +60,11 @@ python eval_direct.py --question "1 + 1"
Of course, it doesn’t work for natural language questions that benefit from compute:

{% code overflow="wrap" %}

```shell
python eval_direct.py --question "What is 578921 days * 12312 miles/day?"
```

{% endcode %}

```
@@ -70,14 +74,15 @@ Error: invalid syntax (<string>, line 1)
So, we need to choose what to evaluate.

{% hint style="warning" %}
-Evaluating arbitrary expressions is dangerous. Don’t use this approach outside of Docker.
+Evaluating arbitrary expressions is dangerous. Don’t use this approach outside of highly experimental code.
{% endhint %}
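For reference, a helper along these lines reproduces both behaviors shown above — a sketch, not the exact code collapsed in the `eval_direct.py` diff:

```python
def evaluate(expression: str) -> str:
    # Evaluate a Python expression, returning either the result or an
    # error string like the "Error: invalid syntax (<string>, line 1)" above.
    try:
        return str(eval(expression))  # dangerous on untrusted input!
    except Exception as e:
        return f"Error: {e}"

print(evaluate("1 + 1"))                                   # 2
print(evaluate("What is 578921 days * 12312 miles/day?"))  # an Error: ... message
```

Python expressions evaluate cleanly; natural language fails to parse, which is exactly why the next section asks the model to choose what to evaluate.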

## Choosing what to evaluate

We make a prompt that asks the model what expression to enter into a Python interpreter to answer the question. We’ll also print out the result of evaluating this expression:

{% code title="eval_selective.py" %}

```python
from ice.recipe import recipe

@@ -113,14 +118,17 @@ async def eval_selective(question: str):

recipe.main(eval_selective)
```

{% endcode %}

If we run this on our example…

{% code overflow="wrap" %}

```shell
python eval_selective.py --question "What is 578921 days * 12312 miles/day?"
```

{% endcode %}

…we get:
@@ -138,6 +146,7 @@ This is a helpful expression and result!
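As a quick sanity check of that result — assuming the model's expression is the obvious one for this question:

```python
# For "What is 578921 days * 12312 miles/day?" the expression the model
# should settle on is presumably the one below; the arithmetic checks out:
expression = "578921 * 12312"
result = eval(expression)  # safe here: a trusted, hard-coded expression
print(result)  # 7127675352, the "correct answer" quoted at the top
```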
Now all we need to do is provide this expression and result as additional context for the basic question-answerer.

{% code title="answer_by_computation.py" %}

```python
from ice.recipe import recipe

@@ -187,14 +196,17 @@ async def answer_by_computation(question: str):

recipe.main(answer_by_computation)
```

{% endcode %}

Rerunning our test case…

{% code overflow="wrap" %}

```shell
python answer_by_computation.py --question "What is 578921 days * 12312 miles/day?"
```

{% endcode %}

…we get the correct answer:
@@ -210,17 +222,21 @@ Another example:
Running this:

{% code overflow="wrap" %}

```shell
python answer_by_computation.py --question "If I have \$500 and get 3.7% interest over 16 years, what do I have at the end?"
```

{% endcode %}

We get:

{% code overflow="wrap" %}

```
If you have $500 and get 3.7% interest over 16 years, you will have $894.19 at the end.
```

{% endcode %}
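That figure is consistent with the standard compound-interest formula, assuming annual compounding:

```python
principal = 500
balance = principal * (1 + 0.037) ** 16  # compound annually for 16 years
print(round(balance, 2))  # 894.19
```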

In contrast, the basic question-answerer says “You would have $1,034,957.29 at the end.”