Dataset type conversion utilities #2543

Open · 9 tasks done
claralp opened this issue Jan 6, 2025 · 1 comment
Labels
🗃️ data Related to data 📚 documentation Improvements or additions to documentation

Comments

claralp (Contributor) commented Jan 6, 2025

System Info

Some things in the dataset type conversions described at https://huggingface.co/docs/trl/main/en/dataset_formats#utilities-for-converting-dataset-types are not quite correct:

  1. When converting a preference dataset to an unpaired preference dataset with unpair_preference_dataset(), a relative ranking is turned into absolute labels. In a preference dataset, despite there being a "chosen" and a "rejected" completion, both can be good or both can be bad; one is merely slightly better or worse than the other (see the reproduction below).
    So one should not convert a preference dataset to an unpaired preference dataset without consulting absolute ratings from e.g. a reward model (a sketch follows the expected behavior section below).
    Suggestion: at least add a warning to the documentation and the conversion code, or remove the utility entirely.
  2. When converting from unpaired preference or stepwise supervision to an unlabeled type such as language modeling or prompt-completion, only the good (label=True) examples should be used, just as the conversion from a preference dataset uses only the chosen completions (see the sketch right after this list).
    Suggestion: this is easy to fix in the example conversion code.
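
For point 2, a minimal sketch of the suggested fix on a toy unpaired preference dataset (column names follow the TRL docs; filtering on label before dropping it is just one way to do this):

from datasets import Dataset

unpaired_dataset = Dataset.from_dict({
    "prompt": ["The sky is", "The sky is"],
    "completion": [" blue.", " above."],
    "label": [True, False],
})

# Keep only the good (label=True) examples, then drop the label column
# to obtain a prompt-completion dataset.
prompt_completion_dataset = unpaired_dataset.filter(
    lambda example: example["label"]
).remove_columns("label")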

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder
  • My own task or dataset (give details below)

Reproduction

from datasets import Dataset
from trl import unpair_preference_dataset

dataset_dict = {
    "prompt": ["The sky is", "The sun is"],
    "chosen": [" blue.", " in our solar system"],
    "rejected": [" above.", " in the sky."],
}
dataset = Dataset.from_dict(dataset_dict)
dataset = unpair_preference_dataset(dataset)
dataset[1]

outputs, e.g.:

{'prompt': 'The sky is', 'completion': ' above.', 'label': False}

Expected behavior

{'prompt': 'The sky is', 'completion': ' blue.', 'label': True}
{'prompt': 'The sky is', 'completion': ' above.', 'label': True}
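
Such absolute labels could come from a reward model, as suggested in point 1. A minimal sketch, where score() is a hypothetical stand-in for a real reward model and the 0.5 threshold is arbitrary:

from datasets import Dataset

def score(prompt: str, completion: str) -> float:
    # Stand-in reward model, only so the sketch runs end to end;
    # replace with real scoring.
    return 1.0

def unpair_with_absolute_labels(batch):
    # Each preference pair yields two unpaired rows; the label comes from
    # an absolute rating, not from the chosen/rejected position.
    prompts = batch["prompt"] + batch["prompt"]
    completions = batch["chosen"] + batch["rejected"]
    labels = [score(p, c) > 0.5 for p, c in zip(prompts, completions)]
    return {"prompt": prompts, "completion": completions, "label": labels}

dataset = Dataset.from_dict({
    "prompt": ["The sky is", "The sun is"],
    "chosen": [" blue.", " in our solar system"],
    "rejected": [" above.", " in the sky."],
})
unpaired_dataset = dataset.map(
    unpair_with_absolute_labels,
    batched=True,
    remove_columns=["chosen", "rejected"],
)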

Checklist

  • I have checked that my issue isn't already filed (see open issues)
  • I have included my system information
  • Any code provided is minimal, complete, and reproducible (more on MREs)
  • Any code provided is properly formatted in code blocks (no screenshots, more on code blocks)
  • Any traceback provided is complete
qgallouedec (Member) commented Jan 6, 2025

Both points sound valid to me. For 1, I'd go for a warning in the doc (not in the function). Would you like to open a PR?
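
A possible wording for that warning, assuming the <Tip warning={true}> admonition syntax used elsewhere in the TRL docs (the exact phrasing is only a suggestion):

<Tip warning={true}>

unpair_preference_dataset() turns a relative ranking into absolute labels. The "rejected" completion of a pair is not necessarily bad in absolute terms, so consider absolute ratings (e.g. from a reward model) before training on the resulting labels.

</Tip>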

@qgallouedec qgallouedec added 📚 documentation Improvements or additions to documentation 🗃️ data Related to data labels Jan 6, 2025