
Improving Human-Computer Interaction with Adaptive Spoken Dialogue Systems

By Eran Raveh
Location: Bloomfield 527, Faculty of Industrial Engineering and Management
Sunday 13 January 2019, 13:30 - 14:30

Abstract

Nowadays, we are witnessing an ever-growing presence of interactive devices, such as personal assistants, speech-activated cars, hands-free medical assistants, and intelligent tutoring systems.

In human-human interaction (HHI), interlocutors mutually accommodate to each other, which contributes to the dialogue’s success. Computers, however, lack this natural dynamic, which puts human-computer interaction (HCI) at a disadvantage. The challenge is, therefore, to integrate such human-like communication behaviors into these systems to make interacting with them more efficient and fluent.

In this talk, I will give a research and technological overview of spoken dialogue systems (SDSs), the architecture underlying systems such as Amazon Alexa and Apple Siri, with an emphasis on adaptive SDSs, and present ongoing work on speech adaptation and conversation analysis in HCI.

Joint work with Dr. Ingmar Steiner, Prof. Bernd Möbius, and Iona Gessinger.

Bio

Eran is a Ph.D. candidate in Dr. Ingmar Steiner's Multimodal Speech Processing group at Saarland University and takes part in the project Phonetic Convergence in Human-Computer Interaction, led by Prof. Bernd Möbius.

He has a background in Natural Language Processing and Speech Processing, acquired at Trinity College Dublin, the University of Tübingen, IMS Stuttgart, and the start-up company Voysis, which develops Voice AI technologies.

Before that, he studied music and worked as an automation engineer at GE Healthcare.