Abstract:
A fundamental task in machine learning is to predict future outcomes for given inputs. But when the inputs describe actual human users who care about the predicted outcomes, those users may act in ways that best serve their own goals. In this talk I will present the problem of strategic classification, in which users can modify their features (at a cost) in response to a classifier if this provides them with favorable predictions. I will argue that strategic classification is a useful formal framework for reasoning about learning under strategic user behavior, and present several of our recent works in this domain, aimed at expanding its scope, scrutinizing its assumptions, and 'reversing' the conventional role learning plays in this setting.
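To give a concrete feel for the setting described above, here is a minimal sketch of a user's best response to a published classifier. All modeling choices here are illustrative assumptions, not taken from the talk: a linear classifier, a quadratic movement cost, and a fixed gain from receiving a positive prediction.

```python
import numpy as np

def best_response(x, w, b, cost_scale=1.0, gain=2.0):
    """Hypothetical best response of a user with features x to a linear
    classifier sign(w @ x + b), under an assumed quadratic movement cost.

    The user moves to the closest positively-classified point, but only
    if the assumed gain from a positive prediction exceeds the cost.
    """
    score = w @ x + b
    if score >= 0:
        return x  # already classified positively; no reason to move
    dist = -score / np.linalg.norm(w)   # distance to the decision boundary
    cost = cost_scale * dist ** 2       # quadratic cost of moving that far
    if cost > gain:
        return x                        # gaming is not worth the cost
    # Project onto the boundary, plus a tiny margin to cross it.
    return x + (dist + 1e-6) * w / np.linalg.norm(w)

# A user just below the threshold games the classifier; a user far below it stays put.
w, b = np.array([1.0, 1.0]), -1.0
print(best_response(np.array([0.3, 0.3]), w, b))    # moves to ~[0.5, 0.5]
print(best_response(np.array([-3.0, -3.0]), w, b))  # too costly, unchanged
```

The learner's problem is then to choose a classifier that performs well on the features users report *after* such best responses, rather than on the raw features.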
https://technion.zoom.us/j/94950420992