Wednesday, November 2, 2022 - 4:00pm to 4:30pm
Event Calendar Category
LIDS & Stats Tea
Speaker Name
Abhin Swapnil Shah
Affiliation
LIDS
Building and Room Number
LIDS Lounge
We are interested in the problem of unit-level counterfactual inference with unobserved confounders, motivated by the increasing importance of personalized decision-making in many domains: consider a recommender system interacting with a user over time, where each user is provided recommendations based on observed demographics, prior engagement levels, and certain unobserved factors. The system adapts its recommendations sequentially and differently for each user. Ideally, at each point in time, the system wants to infer each user's unknown engagement had the user been exposed to a different sequence of recommendations while everything else remained unchanged. This task is challenging because: (a) the unobserved factors could give rise to spurious associations, (b) the users could be heterogeneous, and (c) only a single trajectory per user is available.
We model the underlying joint distribution through an exponential family. This reduces the task of unit-level counterfactual inference to simultaneously learning a collection of distributions from a given exponential family with different unknown parameters, given a single observation per distribution. We discuss a computationally efficient method for learning all of these parameters, with estimation error scaling linearly with the metric entropy of the space of unknown parameters: if the parameters are s-sparse linear combinations of k known vectors in p dimensions, the error scales as O(s log k/p). En route, we derive sufficient conditions for compactly supported distributions to satisfy the logarithmic Sobolev inequality.
Based on joint work with Raaz Dwivedi, Devavrat Shah, and Greg Wornell.
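To make the parameter-learning setup concrete, here is a minimal illustrative sketch (not the talk's actual estimator) under simplifying assumptions: a Gaussian exponential family where each unit's parameter vector is an s-sparse combination of k known vectors, and exactly one noisy observation is available per unit. The greedy recovery routine below is a generic orthogonal-matching-pursuit-style stand-in; the matrix `B`, the noise level, and all dimensions are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, s, n_units = 50, 10, 2, 5

# k known vectors in p dimensions, stored as the columns of B (illustrative)
B = rng.standard_normal((p, k))

def sparse_recover(x, B, s):
    """Greedily recover an s-sparse coefficient vector alpha such that
    B @ alpha approximates the single observation x (OMP-style sketch)."""
    residual = x.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(s):
        # pick the known vector most correlated with the current residual
        corr = np.abs(B.T @ residual)
        corr[support] = -np.inf
        support.append(int(np.argmax(corr)))
        # least-squares refit of the coefficients on the current support
        coef, *_ = np.linalg.lstsq(B[:, support], x, rcond=None)
        residual = x - B[:, support] @ coef
    alpha = np.zeros(B.shape[1])
    alpha[support] = coef
    return alpha

# each unit's parameter is an s-sparse combination of the known vectors,
# and we observe exactly one noisy sample per unit (Gaussian family here)
errors = []
for _ in range(n_units):
    alpha_true = np.zeros(k)
    alpha_true[rng.choice(k, size=s, replace=False)] = rng.standard_normal(s)
    theta = B @ alpha_true                       # unit's true parameter
    x = theta + 0.1 * rng.standard_normal(p)     # single observation
    theta_hat = B @ sparse_recover(x, B, s)
    errors.append(np.linalg.norm(theta_hat - theta) / np.linalg.norm(theta))

print("mean relative estimation error:", float(np.mean(errors)))
```

The point of the sketch is only the problem structure: one observation per distribution is enough here because each unknown parameter lives in a low-complexity (sparse) set spanned by known directions.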
Abhin Shah is a fifth-year Ph.D. student at LIDS advised by Prof. Devavrat Shah and Prof. Greg Wornell. Prior to MIT, he graduated from IIT Bombay with a Bachelor's degree in Electrical Engineering. His research interests include causal inference, statistical inference and algorithmic fairness.