Publications

Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling

Abstract

Joint intent detection and slot filling is a key research topic in natural language understanding (NLU). Existing joint intent and slot filling systems compute features collectively for all slot types, and importantly, have no way to explain the slot filling model decisions. In this work, we propose a novel approach that: (i) learns to generate slot type specific features to improve accuracy and (ii) provides explanations of slot filling decisions for the first time in a joint NLU model. Further, the model is inherently explainable and does not need any post-hoc processing. We perform additional constrained supervision using a set of binary classifiers to learn slot type specific features, thus ensuring appropriate attention weights are learned to explain slot filling decisions for utterances. We evaluate our approach on two widely used datasets and show accuracy improvements. Moreover, we provide a detailed analysis of the exclusive slot explainability of our proposed model.
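To make the idea concrete, the sketch below illustrates slot type specific attention over token encodings, with a binary classifier per slot type acting as the constrained supervision signal described in the abstract. This is a minimal, hypothetical PyTorch sketch; all module names, shapes, and layer choices are illustrative assumptions and do not reflect the authors' actual implementation.

```python
import torch
import torch.nn as nn

class SlotTypeAttentionSketch(nn.Module):
    """Illustrative sketch (not the paper's implementation): per-slot-type
    attention over token encodings, with a shared binary head predicting
    whether each slot type occurs in the utterance. The attention weights
    per slot type are what would serve as explanations."""

    def __init__(self, hidden_dim: int, num_slot_types: int, num_slot_labels: int):
        super().__init__()
        # One learned attention query per slot type (assumed formulation).
        self.slot_queries = nn.Parameter(torch.randn(num_slot_types, hidden_dim))
        # Binary head: is slot type k present in the utterance? (constrained supervision)
        self.presence_head = nn.Linear(hidden_dim, 1)
        # Token-level slot tagger over [token state; pooled slot-type features].
        self.slot_tagger = nn.Linear(2 * hidden_dim, num_slot_labels)

    def forward(self, token_states: torch.Tensor):
        # token_states: (batch, seq_len, hidden), e.g. from a pretrained encoder.
        # Attention of every slot-type query against every token.
        scores = torch.einsum("kh,bth->bkt", self.slot_queries, token_states)
        attn = scores.softmax(dim=-1)                                  # (batch, K, seq_len)
        # Slot-type-specific utterance features, one vector per slot type.
        type_feats = torch.einsum("bkt,bth->bkh", attn, token_states)  # (batch, K, hidden)
        # Per-slot-type presence logits, trained with binary labels.
        presence_logits = self.presence_head(type_feats).squeeze(-1)   # (batch, K)
        # Pool slot-type features and combine with token states for tagging.
        pooled = type_feats.mean(dim=1, keepdim=True).expand_as(token_states)
        slot_logits = self.slot_tagger(torch.cat([token_states, pooled], dim=-1))
        return slot_logits, presence_logits, attn
```

In this sketch, the per-slot-type attention weights `attn` are returned alongside the predictions, so an explanation for a slot decision can be read directly from the model without any post-hoc processing, which is the property the abstract emphasizes.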

Authors: Kalpa Gunaratna, Vijay Srinivasan, Retiree, Hongxia Jin

Published: Conference on Empirical Methods in Natural Language Processing (EMNLP)

Date: Dec 7, 2022