Visualizing and Understanding Deep Learning Models in NLP

Speaker: Jiwei Li

Time: 2017-11-09, 09:30-11:30

Venue: Room 106, Institute of Computer Science and Technology, Peking University


Abstract: A long-standing criticism of neural network models is their lack of interpretability: it is hard to explain why a model produces a given output. In this talk, I will discuss several attempts to rationalize the outputs of neural networks. The methods include computing first-order gradients with respect to the input, measuring the difference in the log likelihood of the gold-standard label when individual words are erased, and a more sophisticated approach that uses reinforcement learning to find the minimal set of words that must be erased to change the model's decision.
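
To make the first two methods concrete, here is a minimal sketch of gradient-based saliency and word erasure on a toy PyTorch classifier. This is not code from the talk or the papers; the model, variable names, and scoring details are illustrative assumptions.

```python
# Hypothetical toy example of two interpretation methods mentioned above:
# (1) first-order gradient saliency, (2) word-erasure importance.

import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, DIM, CLASSES = 100, 16, 2

class ToyClassifier(nn.Module):
    """A toy bag-of-embeddings classifier (illustrative, not from the talk)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.fc = nn.Linear(DIM, CLASSES)

    def forward_from_embeddings(self, e):
        # e: (seq_len, DIM); average word embeddings, then classify.
        return self.fc(e.mean(dim=0))

model = ToyClassifier()
tokens = torch.tensor([5, 17, 42, 8])   # a toy input sentence
gold = 1                                # gold-standard label

# --- Method 1: first-order gradients (saliency) ---------------------
# The magnitude of d(gold logit)/d(word embedding) indicates how
# sensitive the prediction is to each word.
e = model.emb(tokens).detach().requires_grad_(True)
logits = model.forward_from_embeddings(e)
logits[gold].backward()
saliency = e.grad.abs().sum(dim=1)      # one score per word
print("saliency:", saliency.tolist())

# --- Method 2: representation erasure --------------------------------
# Erase each word in turn and measure the drop in log likelihood of the
# gold label; a large drop marks an important word. (The paper uses a
# relative log-likelihood difference; this is a simplified version.)
with torch.no_grad():
    log_p = torch.log_softmax(
        model.forward_from_embeddings(model.emb(tokens)), dim=-1)[gold]
    importance = []
    for i in range(len(tokens)):
        kept = torch.cat([tokens[:i], tokens[i + 1:]])
        log_p_erased = torch.log_softmax(
            model.forward_from_embeddings(model.emb(kept)), dim=-1)[gold]
        importance.append((log_p - log_p_erased).item())
print("erasure importance:", importance)
```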

These approaches offer interpretable explanations for various aspects of neural models: how word meanings compose to form higher-level language units such as phrases and sentences; how neural models select and filter important words; and why some models perform better than others. More importantly, they provide efficient tools for error analysis that can be applied to different neural architectures across a variety of NLP applications, and thus have the potential to improve the effectiveness of a wide range of NLP systems.

References:
Understanding Neural Networks through Representation Erasure: https://arxiv.org/pdf/1612.08220.pdf
Visualizing and Understanding Neural Models in NLP: https://arxiv.org/pdf/1506.01066.pdf

Bio: Jiwei Li received his B.S. in Biology from Peking University (2008-2012) and his Ph.D. in Computer Science from Stanford University (2014-2017), where he was advised by Prof. Dan Jurafsky. He works on Natural Language Processing, and was a recipient of a Facebook Fellowship in 2015 and a Baidu Fellowship in 2016.

