Abstract
Using language models as a remote service entails sending private information to an untrusted provider. In addition, potential eavesdroppers can intercept the messages, thereby exposing the information. In this work, we explore the prospects of avoiding such data exposure at the level of text manipulation. We focus on text classification models, examining various token mapping and contextualized manipulation functions to determine whether classifier accuracy can be maintained while keeping the original text unrecoverable. We find that although some token mapping functions are straightforward to implement, they substantially degrade performance on the downstream task, and the original text can be reconstructed by a sophisticated attacker. In comparison, contextualized manipulation provides an improvement in performance.
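A minimal sketch of the kind of simple token mapping the abstract alludes to: a fixed, client-held random permutation of the vocabulary applied to token ids before the text leaves the client. This assumes a toy whitespace tokenizer; the vocabulary, seed, and function names are illustrative, not the paper's actual setup.

```python
# Sketch: privacy via a fixed permutation of token ids (illustrative only).
import random

# Toy vocabulary standing in for a real tokenizer's vocabulary.
VOCAB = ["[PAD]", "[UNK]", "the", "movie", "was", "great", "terrible", "plot"]

def build_permutation(vocab_size: int, seed: int = 0) -> list[int]:
    """Deterministic permutation of token ids; the seed stays on the client."""
    ids = list(range(vocab_size))
    random.Random(seed).shuffle(ids)
    return ids

def tokenize(text: str) -> list[int]:
    """Toy whitespace tokenizer; unknown words map to [UNK]."""
    index = {tok: i for i, tok in enumerate(VOCAB)}
    return [index.get(w, index["[UNK]"]) for w in text.lower().split()]

def map_tokens(token_ids: list[int], perm: list[int]) -> list[int]:
    """Replace each token id with its permuted counterpart."""
    return [perm[t] for t in token_ids]

if __name__ == "__main__":
    perm = build_permutation(len(VOCAB), seed=42)
    original = tokenize("the movie was great")
    mapped = map_tokens(original, perm)
    # The remote classifier sees only `mapped`. Without `perm` the text is
    # not directly readable, but frequency and co-occurrence statistics
    # survive the mapping, which is why (as the abstract notes) a
    # sophisticated attacker can reconstruct the original text.
    print(original, "->", mapped)
```

Because such a mapping is a fixed substitution, it preserves distributional statistics; this is the weakness the abstract contrasts with contextualized manipulation.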
Original language | English |
---|---|
Title of host publication | PrivateNLP 2024 - 5th Workshop on Privacy in Natural Language Processing, Proceedings of the Workshop |
Editors | Ivan Habernal, Sepideh Ghanavati, Abhilasha Ravichander, Vijayanta Jain, Patricia Thaine, Timour Igamberdiev, Niloofar Mireshghallah, Oluwaseyi Feyisetan |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 29-38 |
Number of pages | 10 |
ISBN (Electronic) | 9798891761391 |
State | Published - 2024 |
Externally published | Yes |
Event | 5th Workshop on Privacy in Natural Language Processing, PrivateNLP 2024 - Co-located with ACL 2024 - Bangkok, Thailand. Duration: 15 Aug 2024 → … |
Publication series
Name | PrivateNLP 2024 - 5th Workshop on Privacy in Natural Language Processing, Proceedings of the Workshop |
---|---|
Conference
Conference | 5th Workshop on Privacy in Natural Language Processing, PrivateNLP 2024 - Co-located with ACL 2024 |
---|---|
Country/Territory | Thailand |
City | Bangkok |
Period | 15/08/24 → … |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.