One for All and All for One: Distributed Learning of Fair Allocations with Multi-Player Bandits

Ilai Bistritz, Tavor Z. Baharav, Amir Leshem, Nicholas Bambos

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Consider N cooperative but non-communicating players, where each plays one out of M arms for T turns. Players have different utilities for each arm, represented as an N × M matrix. These utilities are unknown to the players. In each turn, players select an arm and receive a noisy observation of their utility for it. However, if any other player selected the same arm in that turn, all colliding players receive zero utility due to the conflict. No communication between the players is possible. We propose two distributed algorithms that learn fair matchings between players and arms while minimizing the regret. We show that our first algorithm learns a max-min fairness matching with near-O(log T) regret (up to a log log T factor). However, if one has a known target Quality of Service (QoS) (which may vary between players), then we show that our second algorithm learns a matching where all players obtain an expected reward of at least their QoS with constant regret, given that such a matching exists. In particular, if the max-min value is known, a max-min fairness matching can be learned with O(1) regret.
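To make the setting concrete, the following is a minimal sketch of the collision reward model and of the max-min fairness objective described in the abstract. The utility matrix `U`, the noise level, and the brute-force search are illustrative assumptions, not the paper's algorithms (which learn the matching in a distributed way without knowing `U`).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 utility matrix: U[n, m] = expected utility of arm m for player n.
# (In the paper this matrix is unknown to the players.)
U = np.array([[0.9, 0.5, 0.1],
              [0.4, 0.8, 0.3],
              [0.2, 0.6, 0.7]])
N, M = U.shape

def play(choices, noise=0.05):
    """One turn of the collision model: a player gets a noisy sample of its
    utility, but zero if any other player chose the same arm."""
    choices = np.asarray(choices)
    rewards = np.zeros(N)
    for n in range(N):
        if np.sum(choices == choices[n]) == 1:  # arm chosen by this player only
            rewards[n] = U[n, choices[n]] + noise * rng.standard_normal()
    return rewards

def max_min_matching():
    """Brute-force the matching (one arm per player) that maximizes the
    minimum expected utility -- the max-min fairness benchmark."""
    best, best_val = None, -np.inf
    for perm in itertools.permutations(range(M)):
        val = min(U[n, perm[n]] for n in range(N))
        if val > best_val:
            best, best_val = perm, val
    return best, best_val

matching, value = max_min_matching()
```

For this toy matrix the max-min matching assigns each player its diagonal arm, with max-min value 0.7; a turn in which two players collide on an arm yields them both zero reward, which is what makes distributed learning of such a matching nontrivial.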

Original language: Undefined/Unknown
Article number: 9404291
Pages (from-to): 584-598
Number of pages: 15
Journal: IEEE Journal on Selected Areas in Information Theory
Volume: 2
Issue number: 2
DOIs
State: Published - 1 Apr 2021

Bibliographical note

Publisher Copyright:
© 2020 IEEE.

Keywords

  • Multi-player bandits
  • distributed learning
  • fairness
  • online learning
  • resource allocation
