This repository has been archived by the owner on Jan 29, 2024. It is now read-only.

Commit

fixing links (#29)
bcdurak authored Jun 20, 2023
1 parent 64a2b2d commit d2201ba
Showing 3 changed files with 11 additions and 11 deletions.
6 changes: 3 additions & 3 deletions 1-1_Pipelines.ipynb
@@ -72,7 +72,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"As an ML practitioner, you are probably familiar with building ML models using Scikit-learn, PyTorch, TensorFlow, or similar. An **[ML Pipeline](https://docs.zenml.io/starter-guide/pipelines)** is simply an extension, including other steps you would typically do before or after building a model, like data acquisition, preprocessing, model deployment, or monitoring. The ML pipeline essentially defines a step-by-step procedure of your work as an ML practitioner. Defining ML pipelines explicitly in code is great because:\n",
+"As an ML practitioner, you are probably familiar with building ML models using Scikit-learn, PyTorch, TensorFlow, or similar. An **[ML Pipeline](https://docs.zenml.io/user-guide/starter-guide)** is simply an extension, including other steps you would typically do before or after building a model, like data acquisition, preprocessing, model deployment, or monitoring. The ML pipeline essentially defines a step-by-step procedure of your work as an ML practitioner. Defining ML pipelines explicitly in code is great because:\n",
"- We can easily rerun all of our work, not just the model, eliminating bugs and making our models easier to reproduce.\n",
"- Data and models can be versioned and tracked, so we can see at a glance which dataset a model was trained on and how it compares to other models.\n",
"- If the entire pipeline is coded up, we can automate many operational tasks, like retraining and redeploying models when the underlying problem or data changes or rolling out new and improved models with CI/CD workflows.\n",
@@ -86,7 +86,7 @@
"metadata": {},
"source": [
"## ZenML Setup\n",
-"Throughout this series, we will define our ML pipelines using [ZenML](https://github.com/zenml-io/zenml/). ZenML is an excellent tool for this task, as it is straightforward and intuitive to use and has [integrations](https://docs.zenml.io/component-gallery/integrations) with most of the advanced MLOps tools we will want to use later. Make sure you have ZenML installed (via `pip install zenml`). Next, let's run some commands to make sure you start with a fresh ML stack."
+"Throughout this series, we will define our ML pipelines using [ZenML](https://github.com/zenml-io/zenml/). ZenML is an excellent tool for this task, as it is straightforward and intuitive to use and has [integrations](https://zenml.io/integrations) with most of the advanced MLOps tools we will want to use later. Make sure you have ZenML installed (via `pip install zenml`). Next, let's run some commands to make sure you start with a fresh ML stack."
]
},
{
@@ -152,7 +152,7 @@
"\n",
"![Digits Pipeline](_assets/1-1/digits_pipeline.png)\n",
"\n",
-"We can identify three distinct steps in our example: data loading, model training, and model evaluation. Let us now define each of them as a ZenML **[Pipeline Step](https://docs.zenml.io/starter-guide/pipelines#step)** simply by moving each step to its own function and decorating them with ZenML's `@step` [Python decorator](https://realpython.com/primer-on-python-decorators/)."
+"We can identify three distinct steps in our example: data loading, model training, and model evaluation. Let us now define each of them as a ZenML **[Pipeline Step](https://docs.zenml.io/user-guide/starter-guide)** simply by moving each step to its own function and decorating them with ZenML's `@step` [Python decorator](https://realpython.com/primer-on-python-decorators/)."
]
},
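The `@step` decorator mechanism this hunk refers to can be illustrated with a plain-Python sketch. This is a hypothetical mock of the registration pattern, not ZenML's actual implementation; the names `step` and `REGISTERED_STEPS` are invented here for illustration:

```python
# Hypothetical sketch of a @step-style decorator: it registers plain
# functions as pipeline steps without changing how they are called.
# This is NOT ZenML's real code, only an illustration of the pattern.
from typing import Callable, List

REGISTERED_STEPS: List[Callable] = []

def step(func: Callable) -> Callable:
    """Register a function as a pipeline step and return it unchanged."""
    REGISTERED_STEPS.append(func)
    return func

@step
def load_data() -> list:
    return [1, 2, 3]

@step
def train_model(data: list) -> float:
    return sum(data) / len(data)

# Run the registered steps in order, feeding each output into the next step.
result = None
for s in REGISTERED_STEPS:
    result = s(result) if result is not None else s()
print(result)  # 2.0
```

The key point is that decorated functions stay ordinary Python functions; the decorator only records metadata so a pipeline runner can later wire the steps together.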
{
6 changes: 3 additions & 3 deletions 1-2_Artifact_Lineage.ipynb
@@ -316,7 +316,7 @@
"\n",
"You might now wonder how our ML pipelines can keep track of which artifacts changed and which did not. This requires several additional MLOps components that you would typically have to set up and configure yourself. Luckily, ZenML automatically set this up for us.\n",
"\n",
-"Under the hood, all the artifacts in our ML pipeline are automatically stored in an [Artifact Store](https://docs.zenml.io/component-gallery/artifact-stores). By default, this is simply a place in your local file system, but we could also configure ZenML to store this data in a cloud bucket like [Amazon S3](https://docs.zenml.io/component-gallery/artifact-stores/s3) or any other place instead. We will see this in more detail when we migrate our MLOps stack to the cloud in a later chapter."
+"Under the hood, all the artifacts in our ML pipeline are automatically stored in an [Artifact Store](https://docs.zenml.io/user-guide/starter-guide/understand-stacks#artifact-store). By default, this is simply a place in your local file system, but we could also configure ZenML to store this data in a cloud bucket like [Amazon S3](https://docs.zenml.io/component-gallery/artifact-stores/s3) or any other place instead. We will see this in more detail when we migrate our MLOps stack to the cloud in a later chapter."
]
},
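The local-filesystem artifact store described in this hunk can be sketched in a few lines of plain Python. This is a hypothetical mock under the simple save/load contract the text describes, not ZenML's actual `ArtifactStore` code; the class name `LocalArtifactStore` is invented here:

```python
# Hypothetical sketch of a local-filesystem artifact store: each named
# artifact is serialized to a file under a root directory, so pipeline
# runs can reload exactly what an earlier step produced.
# Illustrative only; not ZenML's real implementation.
import pickle
import tempfile
from pathlib import Path

class LocalArtifactStore:
    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def save(self, name: str, artifact) -> Path:
        path = self.root / f"{name}.pkl"
        path.write_bytes(pickle.dumps(artifact))
        return path

    def load(self, name: str):
        return pickle.loads((self.root / f"{name}.pkl").read_bytes())

store = LocalArtifactStore(tempfile.mkdtemp())
store.save("train_data", {"X": [[0.1], [0.2]], "y": [0, 1]})
print(store.load("train_data")["y"])  # [0, 1]
```

Swapping the `Path`-based reads and writes for cloud-bucket calls is, conceptually, all it takes to move such a store to S3 or similar, which is why the stack component is pluggable.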
{
@@ -327,14 +327,14 @@
"## Orchestrators\n",
"\n",
"In addition to the artifact store, ZenML automatically set an\n",
-"[Orchestrator](https://docs.zenml.io/component-gallery/orchestrators) for you,\n",
+"[Orchestrator](https://docs.zenml.io/user-guide/starter-guide/understand-stacks#orchestrator) for you,\n",
"which is the component that defines how and where each pipeline step is executed \n",
"when calling `pipeline.run()`. \n",
"\n",
"This component is not of much interest to us right now, but we will learn more \n",
"about it in later chapters, when we will run our pipelines on a \n",
"[Kubernetes](https://kubernetes.io/) cluster using the \n",
-"[Kubeflow](https://docs.zenml.io/component-gallery/orchestrators/kubeflow) orchestrator."
+"[Kubeflow](https://docs.zenml.io/user-guide/component-guide/orchestrators/kubeflow) orchestrator."
]
},
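The orchestrator's job — deciding how and where each step runs when the pipeline executes — can be sketched as a toy in-process runner. This is an invented illustration (the `Orchestrator` class and its methods are hypothetical), nothing like a production orchestrator such as Kubeflow, which adds scheduling, retries, and remote execution:

```python
# Toy orchestrator sketch: runs steps in dependency order, passing each
# step's outputs to its dependents. Illustrative only.
from typing import Callable, Dict, List

class Orchestrator:
    def __init__(self):
        self.steps: Dict[str, Callable] = {}
        self.deps: Dict[str, List[str]] = {}

    def add_step(self, name: str, fn: Callable, after: List[str] = ()):
        self.steps[name] = fn
        self.deps[name] = list(after)

    def run(self) -> Dict[str, object]:
        outputs: Dict[str, object] = {}
        pending = dict(self.deps)
        while pending:
            # Execute every step whose dependencies have all produced output.
            ready = [n for n, d in pending.items() if all(x in outputs for x in d)]
            for name in ready:
                outputs[name] = self.steps[name](*[outputs[d] for d in pending.pop(name)])
        return outputs

orc = Orchestrator()
orc.add_step("load", lambda: [1, 2, 3])
orc.add_step("train", lambda data: sum(data), after=["load"])
print(orc.run()["train"])  # 6
```

Because the pipeline definition is separate from this execution logic, the same pipeline can later be handed to a Kubernetes-backed orchestrator without rewriting the steps.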
{
10 changes: 5 additions & 5 deletions README.md
@@ -21,11 +21,11 @@ In the end, you will be able to take any of your ML models from experimentation

The series is structured into four chapters with several lessons each. Click on any of the links below to open the respective lesson directly in Colab.

-| :dango: 1. ML Pipelines | :recycle: 2. Training / Serving | :file_folder: 3. Data Management | :rocket: More Coming Soon! |
-|------------------------|-------------------------|---------------------|------------------------|
-| [1.1 ML Pipelines](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/1-1_Pipelines.ipynb) | [2.1 Experiment Tracking](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/2-1_Experiment_Tracking.ipynb) | [3.1 Data Skew](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/3-1_Data_Skew.ipynb) | |
-| [1.2 Artifact Lifecycle](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/1-2_Artifact_Lineage.ipynb) | [2.2 Local Deployment](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/2-2_Local_Deployment.ipynb) | | |
-| | [2.3 Inference Pipelines](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/2-3_Inference_Pipelines.ipynb) | | |
+| :dango: 1. ML Pipelines | :recycle: 2. Training / Serving | :file_folder: 3. Data Management | :rocket: More Coming Soon! |
+|---------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------|----------------------------|
+| [1.1 ML Pipelines](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/1-1_Pipelines.ipynb) | [2.1 Experiment Tracking](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/2-1_Experiment_Tracking.ipynb) | [3.1 Data Skew](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/3-1_Data_Skew.ipynb) | |
+| [1.2 Artifact Lifecycle](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/1-2_Artifact_Lineage.ipynb) | [2.2 Local Deployment](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/2-2_Local_Deployment.ipynb) | | |
+| | [2.3 Inference Pipelines](https://colab.research.google.com/github/zenml-io/zenbytes/blob/main/2-3_Inference_Pipelines.ipynb) | | |

<!--
### Syllabus Details:
