From 205f348055427ef8bd1889aa04f7c5a8068b181b Mon Sep 17 00:00:00 2001
From: Felix Altenberger
Date: Tue, 14 Feb 2023 14:05:52 +0100
Subject: [PATCH] Show how to get experiment tracker and model deployer URLs

---
 .gitignore                    |  1 +
 2-1_Experiment_Tracking.ipynb | 49 ++++++++++++++++++++++++++++++-----
 2-2_Local_Deployment.ipynb    | 32 +++++++++++++++++++----
 3 files changed, 71 insertions(+), 11 deletions(-)

diff --git a/.gitignore b/.gitignore
index 02baaa1..2082114 100644
--- a/.gitignore
+++ b/.gitignore
@@ -133,6 +133,7 @@ dmypy.json
 
 # for wandb
 mlruns
+wandb/
 
 # poetry
 *poetry.lock*
diff --git a/2-1_Experiment_Tracking.ipynb b/2-1_Experiment_Tracking.ipynb
index 7c9e578..821b23e 100644
--- a/2-1_Experiment_Tracking.ipynb
+++ b/2-1_Experiment_Tracking.ipynb
@@ -281,7 +281,26 @@
     "\n",
     "Click on the `Parameters >` tab on top of the table to see *all* hyperparameters of your model. Now you can see at a glance which model performed best and which hyperparameters changed between different runs. In our case, we can see that the SVC model with `gamma=0.001` achieved the best test accuracy of `0.969`.\n",
     "\n",
-    "If we click on one of the links in the `Start Time` column, we can see additional details of the respective run. In particular, we can find a `model.pkl` file under the `Artifacts` tab, which we could now use to deploy our model in an inference/production environment. In the next lesson, `2-2_Local_Deployment.ipynb`, we will learn how to do this automatically as part of our pipelines with the [MLflow Models](https://mlflow.org/docs/latest/models.html) component."
+    "If we click on one of the links in the `Start Time` column, we can see additional details of the respective run. In particular, we can find a `model.pkl` file under the `Artifacts` tab, which we could now use to deploy our model in an inference/production environment. In the next lesson, `2-2_Local_Deployment.ipynb`, we will learn how to do this automatically as part of our pipelines with the [MLflow Models](https://mlflow.org/docs/latest/models.html) component.\n",
+    "\n",
+    "If you would like to inspect the MLflow logs of your runs manually, you can find\n",
+    "the logging location using the `experiment_tracker_url` metadata field of the \n",
+    "trainer step of your pipeline run, e.g.:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from zenml.post_execution import get_unlisted_runs\n",
+    "\n",
+    "pipeline_run = get_unlisted_runs()[-1]\n",
+    "step = pipeline_run.get_step(\"trainer\")\n",
+    "experiment_tracker_url = step.metadata[\"experiment_tracker_url\"].value\n",
+    "\n",
+    "print(experiment_tracker_url)"
    ]
   },
   {
@@ -390,7 +409,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Finally, execute the cell below to run your pipeline with different gamma values. Follow the link to see your runs recorded in your Weights & Biases project:"
+    "Finally, execute the cell below to run your pipeline with different gamma values."
    ]
   },
   {
@@ -399,9 +418,27 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "svc_wandb_pipeline.run(unlisted=True)\n",
-    "\n",
-    "print(f\"https://wandb.ai/{WANDB_ENTITY}/{WANDB_PROJECT}/runs/\")"
+    "svc_wandb_pipeline.run(unlisted=True)"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Run the cell below and follow the link to see the run in your Weights & Biases \n",
+    "project:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "trainer_step = get_unlisted_runs()[-1].get_step(\"trainer\")\n",
+    "experiment_tracker_url = trainer_step.metadata[\"experiment_tracker_url\"].value\n",
+    "print(experiment_tracker_url)"
    ]
   },
   {
@@ -435,7 +472,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.0 (default, Nov 29 2022, 17:00:25) [Clang 14.0.0 (clang-1400.0.29.202)]"
+   "version": "3.10.0"
   },
   "vscode": {
    "interpreter": {
diff --git a/2-2_Local_Deployment.ipynb b/2-2_Local_Deployment.ipynb
index 778d9d7..5b14d8b 100644
--- a/2-2_Local_Deployment.ipynb
+++ b/2-2_Local_Deployment.ipynb
@@ -183,6 +183,30 @@
    "source": [
     "If you see a checkmark under status, the model was correctly deployed. Congrats!\n",
     "\n",
+    "To find the URL of a model deployed by a specific run, you can use the \n",
+    "`deployed_model_url` metadata field of the model deployer step of your pipeline \n",
+    "run, e.g.:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from zenml.post_execution import get_unlisted_runs\n",
+    "\n",
+    "last_run = get_unlisted_runs()[-1]\n",
+    "deployer_step = last_run.get_step(\"model_deployer\")\n",
+    "deployed_model_url = deployer_step.metadata[\"deployed_model_url\"].value\n",
+    "print(deployed_model_url)"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
     "To interact with our deployed model in Python, we can use the `find_model_server()` method of ZenMLs model-deployer stack component:"
    ]
   },
@@ -220,8 +244,6 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from zenml.post_execution import get_unlisted_runs\n",
-    "\n",
     "last_run = get_unlisted_runs()[-1]\n",
     "X_test = last_run.steps[0].outputs[\"X_test\"].read()\n",
     "y_test = last_run.steps[0].outputs[\"y_test\"].read()"
    ]
   },
@@ -258,7 +280,7 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3.8.13 64-bit ('zenbytes-dev')",
+   "display_name": "zenml310",
    "language": "python",
    "name": "python3"
   },
@@ -272,11 +294,11 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.8.13"
+   "version": "3.10.0"
   },
   "vscode": {
    "interpreter": {
-    "hash": "ec45946565c50b1d690aa5a9e3c974f5b62b9cc8d8934e441e52186140f79402"
+    "hash": "569b3361e3ec4d7692543ddda480ca8173a6c158bb706498f2e35ca1687a80ea"
   }
  }
 },
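The notebook text above mentions interacting with the deployed model via `find_model_server()`, but the corresponding cell is outside this diff. The following is a minimal sketch (not part of this patch) of how that interaction could look, assuming the `MLFlowModelDeployer.get_active_model_deployer()` and `find_model_server()` APIs of ZenML releases from this period; exact import paths may differ between versions, and the pipeline name below is a placeholder for your own deployment pipeline.

```python
# Sketch only, not part of this patch. Assumes the ZenML MLflow integration
# of this era; the pipeline name is a placeholder for your own pipeline.
import numpy as np

from zenml.integrations.mlflow.model_deployers import MLFlowModelDeployer
from zenml.post_execution import get_unlisted_runs

# Reuse the test data loaded from the last pipeline run, as the notebook does.
last_run = get_unlisted_runs()[-1]
X_test = last_run.steps[0].outputs["X_test"].read()

# Ask the active MLflow model deployer for the service that was started by the
# `model_deployer` step of the deployment pipeline.
model_deployer = MLFlowModelDeployer.get_active_model_deployer()
services = model_deployer.find_model_server(
    pipeline_name="my_deployment_pipeline",  # placeholder: your pipeline name
    pipeline_step_name="model_deployer",
    running=True,
)

if services:
    # Send a few test samples to the deployed model and print the predictions.
    predictions = services[0].predict(np.array(X_test[:5]))
    print(predictions)
```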