Congrats! Omer Ben-Porat received the prestigious J.P. Morgan PhD Fellowship 2019. The fellowship, awarded to 14 students worldwide this year, recognizes PhD candidates with exceptional research in AI, machine learning, data privacy, cryptography, and ethics in computing.

Omer, a PhD student in the Industrial Engineering and Management faculty of the Technion under the guidance of Professor Moshe Tennenholtz, conducts research at the intersection of Game Theory and Machine Learning. He shows that current learning algorithms, developed mainly to work in isolation, can fail miserably in competitive scenarios. In contrast, Omer relies on game-theoretic foundations to devise algorithms that explicitly address market competition and perform significantly better in competitive settings.

An abstract of Omer’s research:

The 21st century, and especially its second decade, has seen the rise of machine learning (ML) and optimization algorithms for a variety of tasks. Social networks, recommender systems and advertising companies, among others, have changed dramatically by embracing tremendous amounts of data to enhance content quality and personalization and to offer more accurate predictions. The basic ingredient in all such algorithms, in one way or another, is an objective function to be optimized.

However, it seems that the strategic use of ML algorithms has been neglected so far. To clarify this point, consider the following thought experiment: two candidates, Alice and Bob, compete in a plurality election. A data scientist (DS) is hired to assist Alice in the election. The DS collects data, employs a state-of-the-art prediction algorithm to identify swing voters, clusters voters, and uses the segmentation for better targeting. The DS does all that with one aim in mind: increasing the (expected) number of votes Alice gets. But hold on before you open the champagne. Is the expected number of votes the correct objective function to optimize? Can the supervised/unsupervised algorithms the DS employs win the election for Alice without taking Bob's actions into account? The answer to the first question is no. One can construct an example where Alice gets on average strictly more votes than Bob, yet loses the election with probability arbitrarily close to one. So what went wrong? The objective function the DS should have had in mind is the probability of Alice being elected. This also relates to the second question: both the number of votes and the probability of Alice being elected depend heavily on Bob's actions. Unfortunately, current clustering algorithms receive only information about the voters, which cannot explicitly assist in foreseeing Bob's actions.
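The gap between the two objective functions can be made concrete with a small numerical sketch. The numbers below are hypothetical, invented purely for illustration (they are not from the research itself); they show one distribution of election outcomes in which Alice's expected vote count strictly exceeds Bob's while her probability of winning is only 1%.

```python
# Hypothetical toy numbers illustrating the thought experiment:
# maximizing expected votes is not the same as maximizing win probability.

# 100 equally likely election scenarios:
# - in 99 of them, Alice gets 49 votes and Bob gets 51 (Bob wins);
# - in 1 of them, Alice gets 1000 votes and Bob gets 0 (Alice wins).
scenarios = [(49, 51)] * 99 + [(1000, 0)]  # (Alice's votes, Bob's votes)

expected_alice = sum(a for a, _ in scenarios) / len(scenarios)
expected_bob = sum(b for _, b in scenarios) / len(scenarios)
win_prob_alice = sum(1 for a, b in scenarios if a > b) / len(scenarios)

# Alice is strictly ahead on average (58.51 vs. 50.49 expected votes),
# yet she loses the election 99% of the time.
assert expected_alice > expected_bob
assert win_prob_alice == 0.01
```

By making the rare landslide scenario even rarer and larger, Alice's win probability can be driven arbitrarily close to zero while her expected vote count stays strictly above Bob's, which is exactly the failure mode the abstract describes.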

The crucial point behind this thought experiment is that many ML problems are carried out in isolation. More often than not, commercial companies use such algorithms to improve their products, and hope to, as a byproduct, improve their payoffs. However, a large body of work from Operations Research and Computer Science demonstrates that optimizing an objective function while disregarding incentives can turn destructive in a strategic environment.

With proper adjustments, impactful models and approaches studied in Game Theory extend fundamental problems in data science/ML to competitive settings.

My research proposal aims at better understanding the connection between prediction, segmentation and recommendation systems and market algorithms. This will be done by revisiting fundamental ML settings where prediction, clustering and recommendation are only an intermediate step toward attracting more users under market competition and optimizing user welfare.