Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback

Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi

Research output: Contribution to journal › Conference article › peer-review

16 Scopus citations

Abstract

Learning from preference feedback has emerged as an essential step for improving the generation quality and performance of modern language models (LMs). Despite its widespread use, the way preference-based learning is applied varies wildly, with differing data, learning algorithms, and evaluations used, making disentangling the impact of each aspect difficult. In this work, we identify four core aspects of preference-based learning: preference data, learning algorithm, reward model, and policy training prompts. We systematically investigate the impact of these components on downstream model performance and suggest a recipe for strong learning from preference feedback. Our findings indicate that all aspects are important for performance, with better preference data leading to the largest improvements, followed by the choice of learning algorithm, the use of improved reward models, and finally the use of additional unlabeled prompts for policy training. Notably, PPO outperforms DPO by up to 2.5% in math and 1.2% in general domains. High-quality preference data leads to improvements of up to 8% in instruction following and truthfulness. Despite significant gains of up to 5% in mathematical evaluation when scaling up reward models, we surprisingly observe marginal improvements in other categories.
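For context, the DPO objective that the paper compares against PPO can be sketched as below. This is a minimal illustrative implementation of the standard DPO pairwise loss, not the authors' code; the function name, argument layout, and the value of `beta` are assumptions for the sketch. Each argument is a summed log-probability of a response under the policy or the frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss (illustrative sketch).

    Computes -log sigmoid(beta * [(log pi/ref)(chosen) - (log pi/ref)(rejected)]),
    which pushes the policy to assign relatively more probability to the
    preferred response than the reference model does.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # -log sigmoid(logits), written out with math.exp for a dependency-free sketch
    return math.log(1.0 + math.exp(-logits))
```

When the policy matches the reference on both responses, the loss is log 2; increasing the policy's relative preference for the chosen response drives the loss toward zero.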

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 37
State: Published - 2024
Externally published: Yes
Event: 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver, Canada
Duration: 9 Dec 2024 to 15 Dec 2024

Bibliographical note

Publisher Copyright:
© 2024 Neural information processing systems foundation. All rights reserved.
