Domain Adaptation for Natural Language Processing: deep learning, representations and declarative knowledge

By Yftah Zisser
Location: Bloomfield 424
Advisor(s): Roi Reichart
PhD
Tuesday 25 June 2019, 15:30 - 16:30

Large amounts of manually annotated data are crucial for training many state-of-the-art algorithms in Natural Language Processing (NLP). Unfortunately, annotating sufficient data for NLP tasks is often costly and laborious, so for many NLP applications labeled data is available in only a handful of domains. Domain adaptation – methods that adapt algorithms trained on one textual domain to perform well on another – is therefore crucial for NLP.

In this PhD work we focus on domain adaptation through representation learning (ReL), a prominent approach in the Neural Network (NN) era. We build on methods that distinguish between pivot and non-pivot features in the input representation and extend them to apply to NNs. We describe methods based on auto-encoders and BiLSTMs, as well as extensions to cross-lingual learning and techniques for stable training of our models.
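
To make the pivot-based idea concrete, below is a minimal sketch (in PyTorch; not the authors' implementation) of one plausible setup: an auto-encoder whose input is the non-pivot part of a bag-of-words representation and whose training objective is to predict which pivot features appear in the same text. The intuition is that predicting pivots, features that are frequent and correlated with the task label in both domains, forces the hidden layer to encode information that transfers across domains. All names, dimensions, and architectural details here are illustrative assumptions.

# A minimal sketch of pivot-based representation learning with an
# auto-encoder. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class PivotAutoencoder(nn.Module):
    def __init__(self, n_nonpivots: int, n_pivots: int, hidden_dim: int = 100):
        super().__init__()
        self.encoder = nn.Linear(n_nonpivots, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, n_pivots)

    def forward(self, nonpivot_feats: torch.Tensor) -> torch.Tensor:
        # The hidden vector doubles as the learned cross-domain
        # representation fed to the downstream task classifier.
        hidden = torch.sigmoid(self.encoder(nonpivot_feats))
        return self.decoder(hidden)  # logits over pivot features

# Training needs only unlabeled text from both domains: non-pivot
# indicators are the input, pivot indicators are the prediction targets.
model = PivotAutoencoder(n_nonpivots=20000, n_pivots=500)
loss_fn = nn.BCEWithLogitsLoss()  # multi-label pivot prediction
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(32, 20000).round()  # toy batch of non-pivot indicators
y = torch.rand(32, 500).round()    # toy batch of pivot indicators
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()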