Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
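The "constrained value alignment" in the title refers to framing RLHF as constrained optimization: maximize a learned reward subject to a bound on a learned cost (safety) signal, typically handled with a Lagrange multiplier. The sketch below illustrates that formulation under stated assumptions; the function names, step size, and cost budget are illustrative, not the repository's actual API.

```python
# Minimal sketch of a Lagrangian-constrained RLHF objective
# (reward maximization subject to a cost/safety constraint).
# Names and hyperparameters here are illustrative assumptions.

def lagrangian_objective(reward, cost, lambda_, cost_limit=0.0):
    """Scalarized objective: maximize reward while penalizing constraint violation.

    reward:     expected reward-model score of the policy's responses
    cost:       expected cost-model score (higher = less safe)
    lambda_:    non-negative Lagrange multiplier
    cost_limit: allowed cost budget (d in the constrained-RL formulation)
    """
    return reward - lambda_ * (cost - cost_limit)


def update_lambda(lambda_, cost, cost_limit=0.0, lr=0.05):
    """Dual-ascent step: grow the multiplier when the safety constraint is violated."""
    return max(0.0, lambda_ + lr * (cost - cost_limit))
```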
BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).
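A hedged example of loading a BeaverTails split with the Hugging Face `datasets` library follows; the dataset path, split name, and record schema are assumptions to be checked against the dataset card.

```python
# Illustrative only: dataset path, split name, and field names are assumed.
from datasets import load_dataset

ds = load_dataset("PKU-Alignment/BeaverTails", split="30k_train")  # assumed split name
print(ds[0])  # assumed schema: a prompt/response pair with safety annotations
```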
A reading list on adversarial perspectives and robustness in deep reinforcement learning.