Interpreting, Explaining and Visualizing Deep Learning
... now what?
NIPS 2017 Workshop, 9 December 2017, Long Beach, CA

Location: Hyatt Hotel, Regency Ballroom A+B+C


While machine learning models have reached impressively high predictive accuracy, they are often perceived as black boxes. In sensitive applications such as medical diagnosis or self-driving cars, it must be guaranteed that the model relies on the right features. One would like to be able to interpret what the ML model has learned in order to identify biases and failure modes and to improve the model accordingly. Interpretability is also needed in the sciences, where understanding how the ML model relates multiple physical and biological variables is a prerequisite for building meaningful scientific hypotheses.

The present workshop aims to review recent techniques and establish new theoretical foundations for interpreting and understanding deep learning models. It will not stop at the methodological level, however, but will also address the "now what?" question, taking the next step by exploring and extending the practical usefulness of these techniques. The workshop will feature speakers from various application domains (computer vision, NLP, neuroscience, medicine) and will provide an opportunity for participants to learn from each other and to initiate new interdisciplinary collaborations.


For background material on the topic, see our reading list.

Edited Book

An edited book based on some of the workshop contributions as well as invited contributions is now available.

Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

Editors: W Samek, G Montavon, A Vedaldi, LK Hansen, KR Müller

Lecture Notes in Computer Science (LNCS), vol. 11700, Springer, August 2019

Invited Speakers

  • Been Kim
    Google Brain

  • Dhruv Batra
    Georgia Tech

  • Sepp Hochreiter
    Johannes Kepler University Linz

  • Anh Nguyen
    Auburn University

  • Honglak Lee
    University of Michigan

  • Rich Caruana
    Microsoft

  • Trevor Darrell
    UC Berkeley

Accepted Papers

Schedule

Session 1: Foundations
08:15 - 08:45   Opening Remarks                 Klaus-Robert Müller
08:45 - 09:15   Invited Talk 1                  Been Kim: Interpretability for data and neural networks
09:15 - 09:45   Invited Talk 2                  Dhruv Batra
09:45 - 10:30   Methods Talks (3 x 15 min)      Grégoire Montavon, Michael Y Tsang, Marco Ancona
10:30 - 11:00   Coffee Break
11:00 - 11:15   Methods Talk (1 x 15 min)       Pieter-Jan Kindermans
11:15 - 11:45   Invited Talk 3                  Sepp Hochreiter
11:45 - 12:15   Poster Session
12:15 - 13:15   Lunch

Session 2: Applications
13:15 - 13:45   Poster Session
13:45 - 14:15   Invited Talk 4                  Anh Nguyen: Understanding Neural Networks via Feature Visualization
14:15 - 14:45   Invited Talk 5                  Honglak Lee: Hierarchical approaches for RL and generative models
14:45 - 15:00   Application Talk (1 x 15 min)   Wojciech Samek
15:00 - 15:30   Coffee Break
15:30 - 15:45   Application Talk (1 x 15 min)   Samuel Greydanus
15:45 - 16:15   Invited Talk 6                  Rich Caruana
16:15 - 16:45   Invited Talk 7                  Trevor Darrell: Interpreting and Justifying Visual Decisions and Actions
16:45 - 17:00   Closing Remarks                 Lars Kai Hansen

Call for Papers

We call for papers on the following topics: (1) interpretability of deep neural networks, (2) analysis and comparison of state-of-the-art models, (3) formalization of the interpretability problem, (4) interpretability for making ML socially acceptable, and (5) applications of interpretability.

Submissions must follow the NIPS format. Papers are limited to eight pages (excluding references) and will go through a review process. A selection of accepted papers, together with the invited contributions, will be published as an edited book in the Springer LNCS series.
Submission website: https://cmt3.research.microsoft.com/IEVDL2017
Important dates
Submission deadline:    01 November 2017
Author notification:    10 November 2017
Camera-ready version:   24 November 2017
Workshop:               09 December 2017

Organizers