Publications


An adversarial learning based multi-step spoken language understanding system through human-computer interaction

Abstract

Most existing spoken language understanding systems can perform semantic frame parsing only on a single-round user query. They cannot take users' feedback to update, add, or remove slot values through multi-round interaction with users. In this paper, we introduce a novel interactive adversarial reward learning-based spoken language understanding system that can leverage multi-round user feedback to update slot values. We perform two experiments on the benchmark ATIS dataset and demonstrate that the new system improves parsing performance by at least 2.5% in terms of F1 with only one round of feedback. The improvement becomes even larger as the number of feedback rounds increases. Furthermore, we compare the new system with state-of-the-art dialogue state tracking systems and demonstrate that the new interactive system performs better on multi-round spoken language understanding tasks in terms of slot- and sentence-level accuracy.
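As a rough illustration of the multi-round slot-update idea described in the abstract (not the paper's adversarial reward learning model), the sketch below keeps a semantic frame as a slot dictionary and applies hypothetical add/update/remove feedback actions after an initial parse; the class and function names here are assumptions made purely for illustration.

```python
# Hypothetical illustration of refining a single-round parse with multi-round
# user feedback. The frame is a simple slot -> value dictionary; each feedback
# round can add, update, or remove a slot. This is NOT the paper's model.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Feedback:
    action: str              # "add", "update", or "remove"
    slot: str                # e.g. "toloc.city_name" (ATIS-style slot name)
    value: Optional[str] = None


@dataclass
class SemanticFrame:
    intent: str
    slots: Dict[str, str] = field(default_factory=dict)

    def apply_feedback(self, fb: Feedback) -> None:
        """Apply one round of user feedback to the current frame."""
        if fb.action in ("add", "update"):
            self.slots[fb.slot] = fb.value
        elif fb.action == "remove":
            self.slots.pop(fb.slot, None)


def multi_round_parse(frame: SemanticFrame, rounds: List[Feedback]) -> SemanticFrame:
    """Refine an initial single-round parse with successive feedback rounds."""
    for fb in rounds:
        frame.apply_feedback(fb)
    return frame


if __name__ == "__main__":
    # Initial parse of "flights from Boston to Denver".
    frame = SemanticFrame(
        intent="atis_flight",
        slots={"fromloc.city_name": "boston", "toloc.city_name": "denver"},
    )
    # One round of feedback: the user corrects the destination city.
    frame = multi_round_parse(frame, [Feedback("update", "toloc.city_name", "dallas")])
    print(frame.slots)  # {'fromloc.city_name': 'boston', 'toloc.city_name': 'dallas'}
```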

Authors: Yu Wang, Yilin Shen, Hongxia Jin

Published: International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

Date: Jun 11, 2021