
Multiple output versions for a step outputs #3072

Merged

Conversation


@avishniakov (Contributor) commented Oct 10, 2024

Describe changes

Highlights:

  • StepRunResponse.outputs is now Dict[str, List["ArtifactVersionResponse"]] to support multiple versions of the same artifact
  • StepNodeDetails.outputs is now Dict[str, List[str]] for the same reason
  • type is removed from StepOutputSchema and now resides in the ArtifactVersionSchema directly
  • The existing save types DEFAULT and MANUAL are joined by two new artifact types: EXTERNAL and PREEXISTING, for ExternalArtifact and register_artifact respectively.
  • ArtifactVersionResponse/Request now expects save_type
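The shape change in StepRunResponse.outputs can be illustrated with a minimal, self-contained sketch. The dataclasses below are illustrative stand-ins for the real ZenML response models, not the actual API:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Stand-in for ZenML's ArtifactVersionResponse (fields are illustrative).
@dataclass
class ArtifactVersionResponse:
    version: int
    save_type: str  # e.g. "default", "manual", "external", "preexisting"

# Stand-in for StepRunResponse. Before this PR, outputs mapped each output
# name to a single ArtifactVersionResponse; after it, to a list of versions.
@dataclass
class StepRunResponse:
    outputs: Dict[str, List[ArtifactVersionResponse]] = field(default_factory=dict)

step = StepRunResponse(
    outputs={
        "my_int": [
            ArtifactVersionResponse(version=1, save_type="default"),
            ArtifactVersionResponse(version=2, save_type="manual"),
        ]
    }
)

# With multiple versions per output name, consumers now pick the one they want.
latest = max(step.outputs["my_int"], key=lambda a: a.version)
print(latest.version, latest.save_type)
```

The point of the sketch is that frontends and client code iterating `step.outputs` must now handle a list per output name instead of a single artifact version.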

This is still quite shaky for the frontend, so I would hold it for a while until we align with @Cahllagerfeld.

P.S. Docs to follow once review feedback has been collected.

Pre-requisites

Please ensure you have done the following:

  • I have read the CONTRIBUTING.md document.
  • If my change requires a change to docs, I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • I have based my new branch on develop and the open PR is targeting develop. If your branch wasn't based on develop read Contribution guide on rebasing branch to develop.
  • If my changes require changes to the dashboard, these changes are communicated/requested.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Other (add details above)

@github-actions github-actions bot added internal To filter out internal PRs and issues enhancement New feature or request labels Oct 10, 2024
Contributor:

LLM Finetuning template updates in examples/llm_finetuning have been pushed.

@schustmi (Contributor)

I have more of a general question:
Your current implementation captures how an artifact was saved, but how do we expose the way it was loaded? E.g. even if an artifact is a regular step output, I can load it either the "normal" way, by defining it as a step input, or manually by calling load_artifact(...).

@avishniakov (Contributor, Author) commented Oct 10, 2024

> I have more of a general question: Your current implementation captures how an artifact was saved, but how do we expose the way it was loaded? E.g. even if an artifact is a regular step output, I can load it either the "normal" way, by defining it as a step input, or manually by calling load_artifact(...).

True, I didn't touch inputs here at all, since that was out of scope for the ticket; I would prefer to handle it separately.
Do you mean the DAG and how we indicate which input type it is? Honestly speaking, that would be hellishly complex to design, both conceptually and on the frontend.

For instance, I can do this crazy thing, and I have no clue how it should be visualized:

```python
from typing import Annotated

from zenml import pipeline, step
from zenml.client import Client

@step
def step_1() -> Annotated[int, "my_int"]:
    return 42

@pipeline
def pipe_1():
    step_1()
    # step_2 (assumed defined elsewhere) receives the artifact lazily via the
    # Client instead of as a regular upstream step output.
    step_2(Client().get_artifact_version("my_int"), after=["step_1"])
```

So it is a Schrödinger artifact now: it is DEFAULT and LAZY_LOADED at the same time...

Contributor:

Classification template updates in examples/mlops_starter have been pushed.

Contributor:

E2E template updates in examples/e2e have been pushed.

@schustmi (Contributor)

I see, makes sense, let's handle that separately.
I think the problem is mostly how to visualize what type of input it is, right? It could just be a differently colored line depending on the type of input, but I'm sure Zuri will come up with something much better 😄
In general I think it's both: it's a regular output for the first step, but for the second step it's a coincidence that it was generated in the same run. The user's general intention was to load an artifact via lazy loading, not via the regular way of passing artifacts; otherwise they would have done that ;)

@schustmi (Contributor)

That's also why the input type can't be stored in the artifact table: one artifact might be a different type of input for different steps/runs, or even for the exact same step.

@avishniakov (Contributor, Author)

> That's also why the input type can't be stored in the artifact table: one artifact might be a different type of input for different steps/runs, or even for the exact same step.

Yep, I also realized that during development. This will stay separate as it is now, but IMO we will need to enrich it with more types beyond default and manual.
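A rough sketch of what such an enriched input-type enum might look like. Only the first two values exist per the discussion; LAZY_LOADED is a hypothetical addition named after the Schrödinger-artifact example above, not an actual ZenML enum member:

```python
from enum import Enum

class StepRunInputArtifactType(str, Enum):
    # Existing values mentioned in the discussion:
    DEFAULT = "default"            # passed as a regular step input
    MANUAL = "manual"              # loaded via load_artifact(...)
    # Hypothetical future enrichment:
    LAZY_LOADED = "lazy_loaded"    # fetched via Client().get_artifact_version(...)

# Storing the type per (step run, input) pair rather than on the artifact
# itself is what allows one artifact to be a different input type per step.
print(StepRunInputArtifactType.MANUAL.value)
```

Keying the value off the step-run/input relationship, not the artifact row, matches the constraint raised above that the artifact table cannot hold it.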

src/zenml/artifacts/utils.py Outdated Show resolved Hide resolved
src/zenml/enums.py Outdated Show resolved Hide resolved
src/zenml/models/v2/core/step_run.py Outdated Show resolved Hide resolved
src/zenml/orchestrators/input_utils.py Outdated Show resolved Hide resolved
src/zenml/orchestrators/utils.py Outdated Show resolved Hide resolved
src/zenml/zen_stores/schemas/step_run_schemas.py Outdated Show resolved Hide resolved
src/zenml/zen_stores/sql_zen_store.py Outdated Show resolved Hide resolved
src/zenml/orchestrators/input_utils.py Outdated Show resolved Hide resolved
```python
for output_name, output_artifacts in artifacts.items():
    for output_artifact in output_artifacts:
        artifact_config = None
        if output_config := output_configurations.get(output_name, None):
```
Contributor:

We need to make sure that this function is only being called with regular step output artifacts, as otherwise the artifact config does not apply

Contributor:

This is currently not the case

Contributor:

I fixed this, but there is one thing we don't handle (and never did): if a manual save was a model artifact for the original step run, it will not be a model artifact in the cached step run.

Contributor (Author):

I pushed a test for this case, which is skipped for now. We will handle this separately, as discussed.

src/zenml/zen_stores/schemas/step_run_schemas.py Outdated Show resolved Hide resolved
coderabbitai bot commented Oct 28, 2024

Review skipped: auto reviews are disabled on this repository.


```diff
@@ -331,7 +330,7 @@ def inputs(self) -> Dict[str, "ArtifactVersionResponse"]:
         return self.get_body().inputs

     @property
-    def outputs(self) -> Dict[str, "ArtifactVersionResponse"]:
+    def outputs(self) -> Dict[str, List["ArtifactVersionResponse"]]:
```
Contributor:

I'm also wondering whether we might want to separate this into regular outputs from the step, and manual ones (using save_artifact and register_artifact)
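One way such a separation could look, sketched over plain stand-in tuples. The split_outputs helper and the save_type string values are assumptions for illustration, not the actual ZenML API:

```python
from collections import namedtuple

# Illustrative stand-in for an artifact version record.
ArtifactVersion = namedtuple("ArtifactVersion", ["version", "save_type"])

def split_outputs(outputs):
    """Split a step's outputs into regular step returns and manual saves.

    Anything not saved with the "default" type (e.g. save_artifact or
    register_artifact results) is treated as a manual output here.
    """
    regular, manual = {}, {}
    for name, versions in outputs.items():
        for v in versions:
            bucket = regular if v.save_type == "default" else manual
            bucket.setdefault(name, []).append(v)
    return regular, manual

outputs = {
    "model": [ArtifactVersion(1, "default"), ArtifactVersion(2, "manual")],
}
regular, manual = split_outputs(outputs)
```

Exposing two properties backed by one filter like this would let the single outputs dict stay the source of truth while giving callers the regular/manual distinction.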

schustmi and others added 4 commits October 31, 2024 19:13
This commit refactors the artifact saving logic in the `cacheable_multiple_versioned_producer` function. It introduces two new parameters, `is_model_artifact` and `is_deployment_artifact`, which allow specifying the type of the saved artifact. This change improves the flexibility and customization of the function.

Related to #PRD-663
@avishniakov avishniakov requested a review from schustmi November 5, 2024 15:26
@avishniakov
Copy link
Contributor Author

There is still an unrelated error on macOS / Python 3.9, but I will merge this in as is.

```text
Using Python 3.9.20 environment at /Users/runner/hostedtoolcache/Python/3.9.20/x64
  × No solution found when resolving dependencies:
  ╰─▶ Because torch==2.4.0 has no wheels with a matching Python implementation
      tag and vllm>=0.6.0 depends on torch==2.4.0, we can conclude that
      vllm>=0.6.0 cannot be used.
      And because only the following versions of vllm are available:
          vllm<=0.6.0
          vllm==0.6.1
          vllm==0.6.1.post1
          vllm==0.6.1.post2
          vllm==0.6.2
          vllm==0.6.3
          vllm==0.6.3.post1
      and you require vllm>=0.6.0, we can conclude that your requirements
      are unsatisfiable.
```

@avishniakov avishniakov merged commit f82cd8a into develop Nov 7, 2024
67 of 68 checks passed
@avishniakov avishniakov deleted the feature/PRD-663-multiple-output-versions-for-a-step branch November 7, 2024 06:36
Labels
enhancement New feature or request internal To filter out internal PRs and issues requires-frontend-changes run-slow-ci