Umberto, Pedram, Homayun, Katja, Fabio, Tomi, Kashyap, Matti, Antti K.
- Pedram/Kashyap: focusing on the understanding part, we have a teacher that has several available models and a user that has none. The goal is to have the teacher show the user the most informative model. To do so, the KL divergence between p_user and p_teacher is calculated. As an example, consider a sparse linear regression for sentiment analysis, where the prior is spike-and-slab. The results show that words such as "love" and "amazing" are identified as relevant.
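A minimal sketch of the selection criterion described above, with made-up numbers: the teacher picks the model whose predictive distribution has the largest KL divergence from the user's current belief, i.e. the most informative one to show. The model names and distributions are hypothetical.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as aligned lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical predictive distributions over three sentiment classes.
p_user = [1/3, 1/3, 1/3]  # user's uninformed belief
teacher_models = {
    "model_a": [0.7, 0.2, 0.1],
    "model_b": [0.4, 0.35, 0.25],
}

# Show the model that diverges most from the user's belief,
# i.e. the most informative one under this criterion.
most_informative = max(
    teacher_models,
    key=lambda m: kl_divergence(teacher_models[m], p_user),
)
print(most_informative)  # model_a
```

Here the sharper model_a wins because its predictions carry more information relative to the user's uniform belief.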
- Homayun: making black-box models interpretable using a Bayesian approach means helping the user understand what the model is doing. This is of interest both for debugging and for generating trust. There are two types of interpretation: during training (updating) and after training (post-hoc). The idea is to approximate the black-box model with a proxy model (e.g. CART models), balancing the trade-off between interpretability and closeness to the black-box model.
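A toy sketch of the proxy-model idea: fit a one-split regression stump (the simplest CART-style tree) to the predictions of a "black box". Both the black-box function and the data points are invented for illustration; the point is only that the opaque model gets summarized by a single readable rule.

```python
def black_box(x):
    """Stand-in for an opaque model we want to explain."""
    return 1.0 if x > 0.5 else 0.0

# Query the black box on a grid of inputs.
xs = [i / 10 for i in range(11)]
ys = [black_box(x) for x in xs]

def fit_stump(xs, ys):
    """Find the split threshold minimizing squared error of the leaf means."""
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    return best[1:]  # threshold, left-leaf mean, right-leaf mean

t, ml, mr = fit_stump(xs, ys)
print(f"proxy rule: if x <= {t} predict {ml}, else predict {mr}")
```

A deeper tree would track the black box more closely at the cost of a less readable explanation, which is exactly the interpretability/fidelity trade-off noted above.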
- Homayun to provide a longer update next week.