Breaking the fourth wall in AI reasoning
In theater, the fourth wall is the invisible, imagined wall separating the actors from the audience. A similar concept pervades many AI reasoning mechanisms for multiagent systems: a reasoning agent is assumed to sit behind an imaginary wall that separates it from the actors it observes, preventing it from explicitly communicating its plans and beliefs.
This fourth wall can manifest as a complete lack of communication or as an inability to observe and leverage existing communication mechanisms. The fourth-wall assumption simplifies the agent's task by limiting its options, but it may also limit the agent's ultimate effectiveness.
In my research, I am interested in methodically evaluating such fourth-wall assumptions in AI and in how they can be relaxed to design better reasoning agents. In this talk, I will present two AI problems in which we relaxed the fourth-wall assumption.
The first is plan recognition, where an observer hypothesizes about the plan of an actor given a sequence of observations. The actor can communicate with the observer only implicitly, by executing a legible plan; no explicit communication is assumed to take place. Our work on Sequential Plan Recognition relaxes this assumption through a sequential process that augments recognition with queries the observer poses about the actor's plan.
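A minimal Python sketch of this kind of query-augmented recognition loop, under simplifying assumptions: plans are action sequences longer than the observation prefix, and a truthful actor answers whether a given action is its next step. All names here are illustrative, not the actual Sequential Plan Recognition algorithm.

```python
# Illustrative sketch: interleaving observations with queries to the actor
# in order to prune the set of hypothesized plans.

def consistent(plan, observations):
    """A plan is consistent if it begins with the observed action prefix."""
    return list(plan[:len(observations)]) == list(observations)

def recognize(plan_library, observation_stream, ask_actor, max_queries=3):
    """Filter hypotheses by observations; when ambiguity remains, spend a
    query asking the actor whether a candidate action is its next step."""
    observations = []
    hypotheses = list(plan_library)
    queries_used = 0
    for action in observation_stream:
        observations.append(action)
        hypotheses = [p for p in hypotheses if consistent(p, observations)]
        while len(hypotheses) > 1 and queries_used < max_queries:
            step = len(observations)
            next_actions = {p[step] for p in hypotheses if len(p) > step}
            if len(next_actions) < 2:
                break  # a query at this step would not disambiguate
            candidate = sorted(next_actions)[0]
            # ask_actor(step, candidate) -> True iff candidate is the
            # actor's intended action at position `step` (assumed truthful)
            answer = ask_actor(step, candidate)
            hypotheses = [p for p in hypotheses
                          if len(p) > step and (p[step] == candidate) == answer]
            queries_used += 1
        if len(hypotheses) == 1:
            break
    return hypotheses
```

Each query either confirms or rules out a candidate next action, so every query strictly shrinks the hypothesis set whenever the remaining plans disagree on the next step.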
The second is ad hoc teamwork, the problem of collaborating with new agents without the ability to pre-coordinate. A common assumption in this area is that the agents cannot communicate. However, just as two strangers may happen to speak the same language, autonomous teammates may happen to share a communication protocol. Our recent work considers how such a shared protocol can be leveraged, introducing a means to reason about Communication in Ad Hoc Teamwork (CAT).
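To illustrate the flavor of this reasoning, here is a hypothetical value-of-information sketch: an ad hoc agent holds a belief over possible teammate types and decides whether a costly binary query over the shared protocol is worth asking before acting. The model, payoffs, and names are illustrative assumptions, not the CAT formulation itself.

```python
# Illustrative sketch: deciding whether to use a shared communication
# protocol by comparing the value of querying against acting immediately.

def best_action_value(belief, payoff):
    """payoff[action][type] = task value; return the best expected payoff
    over actions under the current belief about the teammate's type."""
    return max(sum(p * payoff[a][t] for t, p in belief.items())
               for a in payoff)

def query_value(belief, payoff, yes_types, cost):
    """Expected value of asking a binary query whose answer is 'yes'
    exactly for teammate types in yes_types, then acting on the posterior."""
    total = 0.0
    for types in (set(yes_types), set(belief) - set(yes_types)):
        p_ans = sum(belief[t] for t in types)
        if p_ans == 0:
            continue  # this answer can never occur under the belief
        posterior = {t: belief[t] / p_ans for t in types}
        total += p_ans * best_action_value(posterior, payoff)
    return total - cost

def should_query(belief, payoff, yes_types, cost):
    """Query only when its expected value exceeds acting right away."""
    return query_value(belief, payoff, yes_types, cost) > \
        best_action_value(belief, payoff)
```

Under this toy model, the agent communicates exactly when the information gained outweighs the query's cost, which captures the trade-off at the heart of leveraging a shared protocol in ad hoc teams.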