Computing Seminar on Conversational Interfaces

Professor Oliver Lemon

10 May 2016

Professor Oliver Lemon will describe how conversational agents can be trained to interact naturally with humans, much like a child who experiments with generating new combinations of words to discover their usefulness. Building task-based conversational interfaces (such as Siri and Cortana) is difficult because domain-general, scalable methods for natural language understanding (NLU), dialogue management (DM), and natural language generation (NLG) are not available. Moreover, most language-processing methods are sentence-based: they perform fairly well on written text but struggle with spoken dialogue, because ordinary conversation is highly fragmentary and incremental, unfolding word by word rather than sentence by sentence. Real human conversation uses half-starts, suggested extensions, pauses, interruptions, and corrections without respecting sentence boundaries, and it is these properties that make an interaction feel like a natural conversation, something that current state-of-the-art speech interfaces fail to support.
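To make the contrast concrete, here is a toy, hypothetical Python sketch of word-by-word incremental understanding, in which the system forms a provisional interpretation before the sentence has finished. It is not the BABBLE system; the names (PartialState, parse_increment) and the keyword rules are purely illustrative assumptions.

    # Toy sketch of incremental (word-by-word) understanding, contrasted with
    # sentence-based processing. Names and rules are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List, Optional


    @dataclass
    class PartialState:
        """Partial interpretation of an utterance, built up word by word."""
        words: List[str] = field(default_factory=list)
        intent: Optional[str] = None


    def parse_increment(state: PartialState, word: str) -> PartialState:
        """Update the partial interpretation after each new word (toy keyword rules)."""
        state.words.append(word)
        if word in {"book", "reserve"}:
            state.intent = "make_booking"
        elif word == "cancel":
            state.intent = "cancel_booking"
        return state


    def incremental_dialogue(utterance: str) -> None:
        """Process the utterance word by word, so the system could react
        mid-utterance (e.g. back-channel or clarify) instead of waiting for the end."""
        tokens = utterance.lower().split()
        state = PartialState()
        previous_intent = None
        for i, word in enumerate(tokens):
            state = parse_increment(state, word)
            if state.intent != previous_intent and i < len(tokens) - 1:
                print(f"(after '{word}') provisional intent: {state.intent}")
                previous_intent = state.intent
        print(f"final intent: {state.intent}")


    if __name__ == "__main__":
        incremental_dialogue("I would like to book a table for two please")

A sentence-based pipeline, by contrast, would only invoke its parser once the complete utterance had been received, which is why it cannot support the interruptions and mid-utterance reactions described above.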

The BABBLE project is developing machine learning methods (for example, deep reinforcement learning) to automatically create natural conversational interfaces from data.

Bio: Oliver Lemon is a Professor of Computer Science at Heriot-Watt University and Director of the Interaction Lab, which focuses on machine learning approaches to spoken and multimodal interaction and on socially intelligent human-robot interaction (HRI). His research interests lie in spoken dialogue systems, machine learning, HRI, multimodal interfaces, and natural language understanding and generation. He was previously a research fellow at Stanford and Edinburgh Universities, and has led several national and international research projects funded by the EPSRC, the European Union, and the ESRC, involving industrial partners such as BMW and France Telecom/Orange Labs. He has co-authored over 100 peer-reviewed papers, is co-author of the book "Reinforcement Learning for Adaptive Dialogue Systems", and is co-editor of "Data-Driven Methods for Adaptive Spoken Dialogue Systems". He is an associate editor of ACM Transactions on Interactive Intelligent Systems. In 2016 he starts a new Horizon 2020 project ("MuMMER") on socially intelligent robotics.
