Show how to get experiment tracker and model deployer URLs
fa9r committed Feb 14, 2023
1 parent 7276ba9 commit 205f348
Showing 3 changed files with 71 additions and 11 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -133,6 +133,7 @@ dmypy.json

# for wandb
mlruns
wandb/

# poetry
*poetry.lock*
49 changes: 43 additions & 6 deletions 2-1_Experiment_Tracking.ipynb
@@ -281,7 +281,26 @@
"\n",
"Click on the `Parameters >` tab on top of the table to see *all* hyperparameters of your model. Now you can see at a glance which model performed best and which hyperparameters changed between different runs. In our case, we can see that the SVC model with `gamma=0.001` achieved the best test accuracy of `0.969`.\n",
"\n",
"If we click on one of the links in the `Start Time` column, we can see additional details of the respective run. In particular, we can find a `model.pkl` file under the `Artifacts` tab, which we could now use to deploy our model in an inference/production environment. In the next lesson, `2-2_Local_Deployment.ipynb`, we will learn how to do this automatically as part of our pipelines with the [MLflow Models](https://mlflow.org/docs/latest/models.html) component."
"If we click on one of the links in the `Start Time` column, we can see additional details of the respective run. In particular, we can find a `model.pkl` file under the `Artifacts` tab, which we could now use to deploy our model in an inference/production environment. In the next lesson, `2-2_Local_Deployment.ipynb`, we will learn how to do this automatically as part of our pipelines with the [MLflow Models](https://mlflow.org/docs/latest/models.html) component.\n",
"\n",
"If you would like to inspect the MLflow logs of your runs manually, you can find\n",
"the logging location using the `experiment_tracker_url` metadata field of the \n",
"trainer step of your pipeline run, e.g.:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from zenml.post_execution import get_unlisted_runs\n",
"\n",
"pipeline_run = get_unlisted_runs()[-1]\n",
"step = pipeline_run.get_step(\"trainer\")\n",
"experiment_tracker_url = step.metadata[\"experiment_tracker_url\"].value\n",
"\n",
"print(experiment_tracker_url)"
]
},
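For reference, one way to browse those logs manually is to point the MLflow UI at the printed location. The snippet below is a minimal sketch, not taken from this commit, and it assumes `experiment_tracker_url` resolves to a local `mlruns` tracking URI as produced by the default local MLflow experiment tracker:

```python
# Minimal sketch: launch the MLflow UI against the logging location printed
# above. Assumes `experiment_tracker_url` is a local tracking URI, which is
# what the default local MLflow experiment tracker uses. Equivalent to running
# `mlflow ui --backend-store-uri <experiment_tracker_url>` in a terminal.
import subprocess

subprocess.Popen(["mlflow", "ui", "--backend-store-uri", experiment_tracker_url])
# The UI is then served at http://127.0.0.1:5000 by default.
```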
{
@@ -390,7 +409,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, execute the cell below to run your pipeline with different gamma values. Follow the link to see your runs recorded in your Weights & Biases project:"
"Finally, execute the cell below to run your pipeline with different gamma values."
]
},
{
@@ -399,9 +418,27 @@
"metadata": {},
"outputs": [],
"source": [
"svc_wandb_pipeline.run(unlisted=True)\n",
"\n",
"print(f\"https://wandb.ai/{WANDB_ENTITY}/{WANDB_PROJECT}/runs/\")"
"svc_wandb_pipeline.run(unlisted=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Run the cell below and follow the link to see the run in your Weights & Biases \n",
"project:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trainer_step = get_unlisted_runs()[-1].get_step(\"trainer\")\n",
"experiment_tracker_url = trainer_step.metadata[\"experiment_tracker_url\"].value\n",
"print(experiment_tracker_url)"
]
},
{
@@ -435,7 +472,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0 (default, Nov 29 2022, 17:00:25) [Clang 14.0.0 (clang-1400.0.29.202)]"
"version": "3.10.0"
},
"vscode": {
"interpreter": {
32 changes: 27 additions & 5 deletions 2-2_Local_Deployment.ipynb
@@ -183,6 +183,30 @@
"source": [
"If you see a checkmark under status, the model was correctly deployed. Congrats!\n",
"\n",
"To find the URL of a model deployed by a specific run, you can use the \n",
"`deployed_model_url` metadata field of the model deployer step of your pipeline \n",
"run, e.g.:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from zenml.post_execution import get_unlisted_runs\n",
"\n",
"last_run = get_unlisted_runs()[-1]\n",
"deployer_step = last_run.get_step(\"model_deployer\")\n",
"deployed_model_url = deployer_step.metadata[\"deployed_model_url\"].value\n",
"print(deployed_model_url)"
]
},
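To illustrate how that URL can be used, here is a rough sketch of sending a prediction request directly over HTTP. It assumes `deployed_model_url` points at MLflow's `/invocations` scoring endpoint and that the server accepts the MLflow 2.x `{"inputs": ...}` payload format; neither assumption comes from this commit, which interacts with the model through `find_model_server()` below instead:

```python
# Rough sketch: query the deployed model over plain HTTP.
# Assumes `deployed_model_url` is the /invocations endpoint of an
# MLflow 2.x scoring server -- the payload schema differs in older versions.
import requests

sample = [[0.0] * 64]  # placeholder: one flattened 8x8 digits image

response = requests.post(
    deployed_model_url,
    json={"inputs": sample},
    timeout=10,
)
print(response.json())
```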
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To interact with our deployed model in Python, we can use the `find_model_server()` method of ZenMLs model-deployer stack component:"
]
},
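The `find_model_server()` usage itself is collapsed out of this diff, so here is a minimal sketch of how it is typically called with the MLflow model deployer; the pipeline name `digits_pipeline` is a placeholder rather than a name taken from this repository:

```python
# Minimal sketch, assuming the MLflow model deployer integration is used.
from zenml.integrations.mlflow.model_deployers import MLFlowModelDeployer

# Fetch the model deployer component registered in the active stack.
model_deployer = MLFlowModelDeployer.get_active_model_deployer()

# Look up the prediction service started by the `model_deployer` step.
services = model_deployer.find_model_server(
    pipeline_name="digits_pipeline",      # placeholder; adapt to your pipeline
    pipeline_step_name="model_deployer",
    running=True,
)

if services:
    print(services[0].prediction_url)
```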
@@ -220,8 +244,6 @@
"metadata": {},
"outputs": [],
"source": [
"from zenml.post_execution import get_unlisted_runs\n",
"\n",
"last_run = get_unlisted_runs()[-1]\n",
"X_test = last_run.steps[0].outputs[\"X_test\"].read()\n",
"y_test = last_run.steps[0].outputs[\"y_test\"].read()"
@@ -258,7 +280,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.13 64-bit ('zenbytes-dev')",
"display_name": "zenml310",
"language": "python",
"name": "python3"
},
@@ -272,11 +294,11 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
"version": "3.10.0"
},
"vscode": {
"interpreter": {
"hash": "ec45946565c50b1d690aa5a9e3c974f5b62b9cc8d8934e441e52186140f79402"
"hash": "569b3361e3ec4d7692543ddda480ca8173a6c158bb706498f2e35ca1687a80ea"
}
}
},
