
Notebook cleanup and update (Azure-Samples#36)
jgbradley1 authored Jun 28, 2024
1 parent c53814c commit f70e224
Showing 6 changed files with 212 additions and 217 deletions.
4 changes: 3 additions & 1 deletion .gitignore
@@ -5,9 +5,11 @@
logs
logs/*

# ignore example directory crated by HelloWorld.ipynb
# ignore files created by the Jupyter notebook demos
example_files/
files/
prompts
testdata
.scripts/

# Byte-compiled / optimized / DLL files
2 changes: 1 addition & 1 deletion README.md
@@ -12,7 +12,7 @@ For FAQ, access instructions, and our roadmap, please visit `aka.ms/graphrag`

### Deployment Guide
To deploy the solution accelerator, see the [deployment guide](docs/DEPLOYMENT-GUIDE.md). This will result in a full deployment of graphrag as an API.
Afterwards, check out the [Hello World](notebooks/HelloWorld.ipynb) notebook for a demonstration of various API calls.
Afterwards, check out the [Quickstart](notebooks/1-Quickstart.ipynb) notebook for a demonstration of various API calls.

## Development Guide
Interested in contributing? Check out the [development guide](docs/DEVELOPMENT-GUIDE.md).
6 changes: 3 additions & 3 deletions docs/DEPLOYMENT-GUIDE.md
@@ -15,7 +15,7 @@ The deployment process requires the following tools to be installed:
* [kubectl](https://kubernetes.io/docs/tasks/tools) - k8s command line tool
* [yq](https://github.com/mikefarah/yq?tab=readme-ov-file#install) >= v4.40.7 - yaml file parser

TIP: If you open this repository inside a devcontainer (i.e. VSCode Dev Containers or Codespaces), all required tools for deployment will already be available. Opening a devcontainer using VS Code requires <a href="https://docs.docker.com/engine/install/" target="_blank" >Docker to be installed</a>.

The setup/deployment process has been mostly automated with a shell script and Bicep files (infrastructure as code). Azure CLI will deploy all necessary Azure resources using these Bicep files. The deployment is configurable using values defined in `infra/deploy.parameters.json`. To the utmost extent, we have provided default values but users are still expected to modify some values.

@@ -25,7 +25,7 @@ You will need the following <a href="https://learn.microsoft.com/en-us/azure/rol
| Permission | Scope |
| :--- | ---: |
Contributor | Subscription
Role Based Access Control (RBAC) Administrator | Subscription

#### Resource Provider
The Azure subscription that you deploy this solution accelerator in will require the `Microsoft.OperationsManagement` resource provider to be registered.
@@ -99,4 +99,4 @@ bash deploy.sh -p deploy.parameters.json
When deploying for the first time, it will take ~40-50 minutes to deploy. Subsequent runs of this command will be faster.

### 6. Use GraphRAG
Once the deployment has finished, check out our [`Hello World`](../notebooks/HelloWorld.ipynb) notebook for a demonstration of how to use the GraphRAG API. To access the API documentation, visit `<APIM_gateway_url>/manpage/docs` in your browser. You can find the `APIM_gateway_url` by looking in the Azure Portal for the deployed APIM instance.
Once the deployment has finished, check out our [`Quickstart`](../notebooks/1-Quickstart.ipynb) notebook for a demonstration of how to use the GraphRAG API. To access the API documentation, visit `<APIM_gateway_url>/manpage/docs` in your browser. You can find the `APIM_gateway_url` by looking in the Azure Portal for the deployed APIM instance.
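Before opening the notebook, a quick scripted check can confirm the gateway URL and subscription key are usable. This is only a sketch: the `/manpage/docs` path comes from the guide above, the `Ocp-Apim-Subscription-Key` header comes from the Quickstart notebook, and the gateway URL below is a placeholder to replace with your own.

```python
import getpass

import requests

# Placeholder gateway URL - replace with the APIM gateway URL from the Azure Portal.
apim_gateway_url = "https://<my-apim-instance>.azure-api.net"
subscription_key = getpass.getpass("Enter the APIM subscription key: ")

# Request the API documentation page; a 200 response suggests the gateway is
# reachable (the docs page may or may not require the subscription key header).
response = requests.get(
    f"{apim_gateway_url}/manpage/docs",
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
    timeout=30,
)
print(response.status_code)
```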
165 changes: 66 additions & 99 deletions notebooks/Quickstart.ipynb → notebooks/1-Quickstart.ipynb
@@ -11,8 +11,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisite installs to run the quickstart notebook\n",
"Install 3rd party packages that are not part of the Python Standard Library"
"## Prerequisites\n",
"Install 3rd party packages, not part of the Python Standard Library, to run the notebook"
]
},
{
@@ -21,7 +21,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install devtools pandas python-magic requests tqdm"
"! pip install devtools python-magic requests tqdm"
]
},
{
@@ -32,12 +32,10 @@
"source": [
"import getpass\n",
"import json\n",
"import sys\n",
"import time\n",
"from pathlib import Path\n",
"\n",
"import magic\n",
"import pandas as pd\n",
"import requests\n",
"from devtools import pprint\n",
"from tqdm import tqdm"
@@ -47,15 +45,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configuration - API Key, file directions and API endpoints"
"## (REQUIRED) User Configuration\n",
"Set the API subscription key, API base endpoint, and some file directory names that will be referenced later in the notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Get API Key for API Management Service\n",
"For authentication, the API requires a *subscription key* to be passed in the header of all requests. To find this key, visit the Azure Portal. The API subscription key will be located under `<my_resource_group> --> <API Management service> --> <APIs> --> <Subscriptions> --> <Built-in all-access subscription> Primary Key`."
"#### API subscription key\n",
"\n",
"APIM supports multiple forms of authentication and access control (e.g. managed identity). For this notebook demonstration, we will use a **[subscription key](https://learn.microsoft.com/en-us/azure/api-management/api-management-subscriptions)**. To locate this key, visit the Azure Portal. The subscription key can be found under `<my_resource_group> --> <API Management service> --> <APIs> --> <Subscriptions> --> <Built-in all-access subscription> Primary Key`. For multiple API users, individual subscription keys can be generated."
]
},
{
@@ -66,7 +66,15 @@
"source": [
"ocp_apim_subscription_key = getpass.getpass(\n",
" \"Enter the subscription key to the GraphRag APIM:\"\n",
")"
")\n",
"\n",
"\"\"\"\n",
"\"Ocp-Apim-Subscription-Key\": \n",
" This is a custom HTTP header used by Azure API Management service (APIM) to \n",
" authenticate API requests. The value for this key should be set to the subscription \n",
" key provided by the Azure APIM instance in your GraphRAG resource group.\n",
"\"\"\"\n",
"headers = {\"Ocp-Apim-Subscription-Key\": ocp_apim_subscription_key}"
]
},
{
Expand All @@ -75,13 +83,7 @@
"source": [
"#### Setup directories and API endpoint\n",
"\n",
"The following parameters are required to access and use the GraphRAG solution accelerator API:\n",
"* file_directory\n",
"* storage_name\n",
"* index_name\n",
"* endpoint\n",
"\n",
"For demonstration purposes, you may use the provided `get-wiki-articles.py` script to download a small set of wikipedia articles or provide your own data."
"For demonstration purposes, please use the provided `get-wiki-articles.py` script to download a small set of wikipedia articles or provide your own data (graphrag requires txt files to be utf-8 encoded)."
]
},
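Since graphrag expects utf-8 encoded text files, a small sketch like the following can flag any file in the upload directory that is not utf-8 decodable. The directory name is a placeholder; use the same value you assign to `file_directory` in the configuration cell below.

```python
from pathlib import Path

# Placeholder directory - use the same path you assign to file_directory below.
data_dir = Path("wiki-articles")

for txt_file in data_dir.glob("*.txt"):
    try:
        txt_file.read_text(encoding="utf-8")  # raises if the file is not valid utf-8
    except UnicodeDecodeError:
        print(f"{txt_file} is not utf-8 encoded and should be converted before uploading")
```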
{
@@ -91,18 +93,21 @@
"outputs": [],
"source": [
"\"\"\"\n",
"These parameters must be defined by the user:\n",
"These parameters must be defined by the notebook user:\n",
"\n",
"- file_directory: local directory where data files of interest are stored.\n",
"- storage_name: unique name for an Azure blob storage container where files will be uploaded.\n",
"- index_name: unique name for a single knowledge graph construction. Multiple indexes can be created from the same blob container of data.\n",
"- apim_url: the endpoint URL for GraphRAG service (this is the Gateway URL found in the APIM resource).\n",
"- file_directory: a local directory of text files. The file structure should be flat,\n",
" with no nested directories. (i.e. file_directory/file1.txt, file_directory/file2.txt, etc.)\n",
"- storage_name: a unique name to identify a blob storage container in Azure where files\n",
" from `file_directory` will be uploaded.\n",
"- index_name: a unique name to identify a single graphrag knowledge graph index.\n",
" Note: Multiple indexes may be created from the same `storage_name` blob storage container.\n",
"- endpoint: the base/endpoint URL for the GraphRAG API (this is the Gateway URL found in the APIM resource).\n",
"\"\"\"\n",
"\n",
"file_directory = \"\"\n",
"storage_name = \"\"\n",
"index_name = \"\"\n",
"apim_url = \"\""
"endpoint = \"\""
]
},
{
@@ -112,31 +117,17 @@
"outputs": [],
"source": [
"assert (\n",
" file_directory != \"\" and storage_name != \"\" and index_name != \"\" and apim_url != \"\"\n",
" file_directory != \"\" and storage_name != \"\" and index_name != \"\" and endpoint != \"\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"\"Ocp-Apim-Subscription-Key\": \n",
" This is a custom HTTP header used by Azure API Management service (APIM) to \n",
" authenticate API requests. The value for this key should be set to the subscription \n",
" key provided by the Azure APIM instance in your GraphRAG resource group.\n",
"\"\"\"\n",
"\n",
"headers = {\"Ocp-Apim-Subscription-Key\": ocp_apim_subscription_key}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upload Files to Storage Data"
"## Upload Files\n",
"\n",
"For a demonstration of how to index data in graphrag, we first need to ingest a few files into graphrag."
]
},
{
@@ -156,16 +147,16 @@
" Upload files to a blob storage container.\n",
"\n",
" Args:\n",
" file_directory - a local directory of .txt files to upload. All files must be in utf-8 encoding.\n",
" storage_name - a unique name for the Azure storage container.\n",
" file_directory - a local directory of .txt files to upload. All files must have utf-8 encoding.\n",
" storage_name - a unique name for the Azure storage blob container.\n",
" batch_size - the number of files to upload in a single batch.\n",
" overwrite - whether or not to overwrite files if they already exist in the storage container.\n",
" overwrite - whether or not to overwrite files if they already exist in the storage blob container.\n",
" max_retries - the maximum number of times to retry uploading a batch of files if the API is busy.\n",
"\n",
" NOTE: Uploading files may sometimes fail if the blob container was recently deleted\n",
" (i.e. a few seconds before. The solution \"in practice\" is to sleep a few seconds and try again.\n",
" \"\"\"\n",
" url = apim_url + \"/data\"\n",
" url = endpoint + \"/data\"\n",
"\n",
" def upload_batch(\n",
" files: list, storage_name: str, overwrite: bool, max_retries: int\n",
@@ -236,9 +227,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Index\n",
"## Build an Index\n",
"\n",
"After data files have been uploaded, it is now possible to construct a knowledge graph by creating a search index. If an entity configuration is not provided, a default entity configuration will be used that has been shown to generally work well."
"After data files have been uploaded, we can construct a knowledge graph by building a search index."
]
},
{
@@ -252,13 +243,10 @@
" index_name: str,\n",
") -> requests.Response:\n",
" \"\"\"Create a search index.\n",
" This function kicks off a job that builds a knowledge graph (KG) index from files located in a blob storage container.\n",
" This function kicks off a job that builds a knowledge graph index from files located in a blob storage container.\n",
" \"\"\"\n",
" url = apim_url + \"/index\"\n",
" request = {\n",
" \"storage_name\": storage_name,\n",
" \"index_name\": index_name\n",
" }\n",
" url = endpoint + \"/index\"\n",
" request = {\"storage_name\": storage_name, \"index_name\": index_name}\n",
" return requests.post(url, params=request, headers=headers)"
]
},
@@ -268,10 +256,7 @@
"metadata": {},
"outputs": [],
"source": [
"response = build_index(\n",
" storage_name=storage_name,\n",
" index_name=index_name\n",
")\n",
"response = build_index(storage_name=storage_name, index_name=index_name)\n",
"print(response)\n",
"if response.ok:\n",
" print(response.text)\n",
@@ -283,9 +268,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the status of an indexing job\n",
"### Check status of an indexing job\n",
"\n",
"Please wait for your index to reach 100 percent complete before continuing on to the next section to run queries."
"Please wait for your index to reach 100 percent completion before continuing on to the next section (running queries). You may rerun the next cell multiple times to monitor status. Note: the indexing speed of graphrag is directly correlated to the TPM quota of the Azure OpenAI model you are using."
]
},
{
@@ -295,18 +280,11 @@
"outputs": [],
"source": [
"def index_status(index_name: str) -> requests.Response:\n",
" url = apim_url + f\"/index/status/{index_name}\"\n",
" return requests.get(url, headers=headers)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = index_status(index_name)\n",
" url = endpoint + f\"/index/status/{index_name}\"\n",
" return requests.get(url, headers=headers)\n",
"\n",
"\n",
"response = index_status(index_name)\n",
"pprint(response.json())"
]
},
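Instead of rerunning the status cell by hand, a small polling loop can wait for the job to finish. This sketch reuses `index_status` and `index_name` from the cells above and assumes the status payload contains a `percent_complete` field; inspect the JSON printed above to confirm the exact field name for your deployment.

```python
import time

# Poll the status endpoint until the indexing job reports 100 percent completion.
while True:
    status = index_status(index_name).json()
    percent = status.get("percent_complete", 0)  # field name assumed - verify against your deployment
    print(f"indexing progress: {percent}%")
    if percent >= 100:
        break
    time.sleep(30)  # indexing speed depends on your Azure OpenAI TPM quota
```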
@@ -316,7 +294,7 @@
"source": [
"## Query\n",
"\n",
"After an indexing job has completed, the knowledge graph is ready to query. Two types of queries (global and local) are currently supported. In addition, you can issue a query over a single index or multiple indexes."
"Once an indexing job is complete, the knowledge graph is ready to query. Two types of queries (global and local) are currently supported. We encourage you to try both and experience the difference in responses. Note that query response time is also correlated to the TPM quota of the Azure OpenAI model you are using."
]
},
{
@@ -325,7 +303,7 @@
"metadata": {},
"outputs": [],
"source": [
"\"\"\"Needed helper function to parse out the clear result from the query response. \"\"\"\n",
"# a helper function to parse out the result from a query response\n",
"def parse_query_response(\n",
" response: requests.Response, return_context_data: bool = False\n",
") -> requests.Response | dict[list[dict]]:\n",
Expand All @@ -350,7 +328,7 @@
"source": [
"### Global Query \n",
"\n",
"Global search queries are resource-intensive, but give good responses to questions that require an understanding of the dataset as a whole."
"Global queries are resource-intensive, but provide good responses to questions that require an understanding of the dataset as a whole."
]
},
{
@@ -359,25 +337,19 @@
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"\n",
"def global_search(index_name: str | list[str], query: str) -> requests.Response:\n",
" \"\"\"Run a global query over the knowledge graph(s) associated with one or more indexes\"\"\"\n",
" url = apim_url + \"/query/global\"\n",
" url = endpoint + \"/query/global\"\n",
" request = {\"index_name\": index_name, \"query\": query}\n",
" return requests.post(url, json=request, headers=headers)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# pass in a single index name as a string or to query across multiple indexes, set index_name=[myindex1, myindex2]\n",
" return requests.post(url, json=request, headers=headers)\n",
"\n",
"\n",
"global_response = global_search(\n",
" index_name=index_name, query=\"Summarize the main topics of this data\"\n",
")\n",
"# print the result and save context data in a variable\n",
"global_response_data = parse_query_response(global_response, return_context_data=True)\n",
"global_response_data"
]
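Because `global_search` accepts either a single index name or a list of names, the same helper can query across several indexes at once. A short sketch, using placeholder names for indexes you have already built:

```python
# Query two existing indexes in a single global search (placeholder index names).
multi_index_response = global_search(
    index_name=["myindex1", "myindex2"],
    query="Summarize the main topics of this data",
)
multi_index_data = parse_query_response(multi_index_response, return_context_data=True)
multi_index_data
```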
@@ -397,25 +369,20 @@
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"\n",
"def local_search(index_name: str | list[str], query: str) -> requests.Response:\n",
" \"\"\"Run a local query over the knowledge graph(s) associated with one or more indexes\"\"\"\n",
" url = apim_url + \"/query/local\"\n",
" url = endpoint + \"/query/local\"\n",
" request = {\"index_name\": index_name, \"query\": query}\n",
" return requests.post(url, json=request, headers=headers)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# pass in a single index name as a string or to query across multiple indexes, set index_name=[myindex1, myindex2]\n",
" return requests.post(url, json=request, headers=headers)\n",
"\n",
"\n",
"# perform a local query\n",
"local_response = local_search(\n",
" index_name=index_name, query=\"Who are the primary actors in these communities?\"\n",
")\n",
"# print the result and save context data in a variable\n",
"local_response_data = parse_query_response(local_response, return_context_data=True)\n",
"local_response_data"
]
@@ -437,7 +404,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
"version": "3.10.13"
}
},
"nbformat": 4,
