Knowledge Editing in Language Models via Adapted Direct Preference Optimization

Amit Rozner, Barak Battash, Lior Wolf, Ofir Lindenbaum

Research output: Working paper / Preprint

Abstract

Large Language Models (LLMs) can become outdated over time because they lack updated world knowledge, leading to factual errors and gaps. Knowledge Editing (KE) aims to overcome this challenge through weight updates that do not require expensive retraining. We propose treating KE as an LLM alignment problem. Toward this goal, we introduce Knowledge Direct Preference Optimization (KDPO), a variation of Direct Preference Optimization (DPO) that is more effective for knowledge modifications. Our method is based on an online approach that continually updates the knowledge stored in the model: the model's current knowledge serves as a negative sample and the new knowledge we want to introduce as a positive sample in a DPO-style objective. We also use teacher forcing to generate the negative samples and optimize using the positive sample, which helps keep the changes localized. We tested our KE method on various datasets and models with 100 and 500 sequential edits, comparing it against several state-of-the-art methods, and conducted an ablation study against the standard DPO approach. Our experimental results show that the modified DPO objective enables more refined KE, achieving performance similar to or better than previous methods.
Original language: English
State: Published - 14 Jun 2024
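
For readers unfamiliar with the objective that KDPO adapts, the sketch below shows the standard DPO loss specialized to the knowledge-editing reading of the abstract: the model's current answer plays the role of the rejected (negative) sample and the new fact the chosen (positive) sample. This is a minimal sketch, not the paper's KDPO objective; the function name, argument names, and the beta value are illustrative assumptions, and the paper's specific modifications (teacher-forced negative generation, optimizing on the positive sample) affect how the inputs are produced rather than the basic shape of the loss.

import torch
import torch.nn.functional as F

def dpo_knowledge_edit_loss(
    policy_logp_new: torch.Tensor,  # log p_theta(new answer | prompt), summed over tokens, shape [batch]
    policy_logp_old: torch.Tensor,  # log p_theta(current/old answer | prompt)
    ref_logp_new: torch.Tensor,     # same quantities under the frozen reference model
    ref_logp_old: torch.Tensor,
    beta: float = 0.1,              # illustrative temperature, not taken from the paper
) -> torch.Tensor:
    """Standard DPO loss with the knowledge-editing interpretation:
    the new fact is the preferred sample, the model's current answer
    is the rejected sample."""
    # Log-ratios of policy vs. reference for the preferred (new) and rejected (old) answers.
    new_logratio = policy_logp_new - ref_logp_new
    old_logratio = policy_logp_old - ref_logp_old
    # DPO maximizes the margin between the two log-ratios through a log-sigmoid.
    return -F.logsigmoid(beta * (new_logratio - old_logratio)).mean()

In an online editing loop of the kind the abstract describes, the old-answer log-probabilities would be computed on negatives generated from the current model with teacher forcing, and the loss would be applied repeatedly as new facts arrive.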

Bibliographical note

9 pages, 4 figures

Keywords

  • cs.CL
  • cs.AI
