Manipulating Learning Algorithms in Strategic Environments
28 Oct 2019
Department of Computer Science
University of Virginia
- Date: Friday November 1st, 2019
- Time: 12:00PM
- Location: Rice 242
Abstract: There has been a significant amount of recent interest in adversarial attacks on machine learning algorithms, particularly deep learning algorithms. In this talk, we pursue a closely related, yet far less explored, theme along this research agenda: strategic attacks on learning algorithms. In particular, we consider settings where the learner faces a strategic agent who manipulates the learning algorithm simply to optimize his own utility, as opposed to completely ruining the learner's algorithm as in adversarial ML. Such strategic interactions naturally arise in many decision-focused learning tasks, e.g., learning to set a price for an unknown buyer and learning to defend against an unknown attacker. We describe a general framework for theoretically analyzing the attacker's optimal strategic attack, and then instantiate the framework and analysis in two basic scenarios. Finally, we consider how to defend against such strategic attacks and establish formal barriers to the design of an optimal defense for the learner.
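As a hypothetical illustration of the pricing scenario mentioned in the abstract (this sketch is not from the talk itself): suppose the seller learns a price by binary search on the buyer's valuation and then charges the learned price. A strategic buyer who pretends his value is lower than it really is can drive the learned price down and collect more total surplus than a truthful buyer. The valuations, round counts, and learner below are all assumptions chosen for the example.

```python
def run(buyer_accepts, v=0.8, explore_rounds=20, exploit_rounds=100):
    """Posted-price learner: binary search on [0,1], then exploit.

    buyer_accepts(p) -> bool is the buyer's (possibly strategic) response;
    v is the buyer's TRUE valuation, used only to tally his surplus.
    """
    lo, hi = 0.0, 1.0
    surplus = 0.0
    # Exploration: the learner trusts the buyer's observed responses.
    for _ in range(explore_rounds):
        p = (lo + hi) / 2
        if buyer_accepts(p):
            surplus += v - p  # buyer's true surplus from buying at p
            lo = p            # learner raises its estimate of the value
        else:
            hi = p            # learner lowers its estimate
    # Exploitation: charge the highest price the buyer was seen to accept.
    for _ in range(exploit_rounds):
        if buyer_accepts(lo):
            surplus += v - lo
    return surplus, lo

truthful = lambda p: p <= 0.8   # accepts whenever price <= true value 0.8
strategic = lambda p: p <= 0.3  # same value, but pretends it is only 0.3

u_truthful, price_truthful = run(truthful)
u_strategic, price_strategic = run(strategic)
# The strategic buyer misleads the learner into a much lower price and,
# despite skipping some profitable early purchases, earns far more surplus.
```

The point of the sketch is the one made in the abstract: the manipulator is not trying to ruin the learner's algorithm, only to steer it toward an outcome that maximizes his own utility.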