Recent years have seen an increasing number of applications that have a Natural Language interface, either in the form of
chatbots or via personal assistants such as Alexa (Amazon), Google Assistant, Siri (Apple), and Cortana (Microsoft).
Using these applications requires a basic dialog between the machine and the human.
While this kind of dialog exists today mainly with "static" robots that do not move within the household space,
the challenge of reasoning about the information conveyed by the environment increases significantly
for robots that can move around and manipulate objects in our home environment.
In this paper, we focus on cognitive robots, which maintain a knowledge-based model of the world and operate by reasoning and planning with this model.
Thus, when the robot and the human communicate, they already share a formalism: the robot's knowledge representation formalism.
Our goal in this research is to translate Natural Language utterances into the robot's formalism, allowing much more complicated household tasks to be completed.
We do so by combining off-the-shelf state-of-the-art language models, planning tools, and the robot's knowledge base for better communication.
In addition, we analyze different directive types and illustrate how the world's context contributes to the translation process.